2307.13361
Of Mice and Pose: 2D Mouse Pose Estimation from Unlabelled Data and Synthetic Prior
Numerous fields, such as ecology, biology, and neuroscience, use animal recordings to track and measure animal behaviour. Over time, a significant volume of such data has been produced, but some computer vision techniques cannot explore it due to the lack of annotations. To address this, we propose an approach for estimating 2D mouse body pose from unlabelled images using a synthetically generated empirical pose prior. Our proposal is based on a recent self-supervised method for estimating 2D human pose that uses single images and a set of unpaired typical 2D poses within a GAN framework. We adapt this method to the limb structure of the mouse and generate the empirical prior of 2D poses from a synthetic 3D mouse model, thereby avoiding manual annotation. In experiments on a new mouse video dataset, we evaluate the performance of the approach by comparing pose predictions to a manually obtained ground truth. We also compare predictions with those from a supervised state-of-the-art method for animal pose estimation. The latter evaluation indicates promising results despite the lack of paired training data. Finally, qualitative results using a dataset of horse images show the potential of the setting to adapt to other animal species.
Jose Sosa, Sharn Perry, Jane Alty, David Hogg
2023-07-25T09:31:55Z
http://arxiv.org/abs/2307.13361v1
# Of Mice and Pose: 2D Mouse Pose Estimation from Unlabelled Data and Synthetic Prior

###### Abstract

Numerous fields, such as ecology, biology, and neuroscience, use animal recordings to track and measure animal behaviour. Over time, a significant volume of such data has been produced, but some computer vision techniques cannot explore it due to the lack of annotations. To address this, we propose an approach for estimating 2D mouse body pose from unlabelled images using a synthetically generated empirical pose prior. Our proposal is based on a recent self-supervised method for estimating 2D human pose that uses single images and a set of unpaired typical 2D poses within a GAN framework. We adapt this method to the limb structure of the mouse and generate the empirical prior of 2D poses from a synthetic 3D mouse model, thereby avoiding manual annotation. In experiments on a new mouse video dataset, we evaluate the performance of the approach by comparing pose predictions to a manually obtained ground truth. We also compare predictions with those from a supervised state-of-the-art method for animal pose estimation. The latter evaluation indicates promising results despite the lack of paired training data. Finally, qualitative results using a dataset of horse images show the potential of the setting to adapt to other animal species.

Keywords: Self-Supervised Pose Estimation, Synthetic Mouse.

## 1 Introduction

The study of neurodegenerative human diseases, such as Alzheimer's disease [15, 51], Parkinson's disease [28], and Amyotrophic Lateral Sclerosis (ALS) [46], usually involves using animal models. Mice are the preferred and most extensively utilised animals for such studies because of their genomic similarity with humans and the accumulated knowledge on manipulating their DNA [11]. Due to this tight relationship between mice and the ongoing research on neurodegenerative human diseases, developing tools to observe, describe, and measure mouse behaviour has become crucial [37]. Some years ago, prior to the adoption of computer vision techniques, making such measurements required a great deal of manual labour [42, 43]. For example, measuring the position of a mouse's limbs meant recording the animal, looking at each video frame, and manually identifying each required body part. Such manual inspection of long videos is evidently time-consuming and prone to observation errors.

Early computational approaches attempted to minimise human intervention in analysing animal recordings. Some tools involve placing physical markers on the animal's body or require painting the body parts to track [2, 56]. Apparent limitations of these techniques are that the physical markers can interfere with the animal's behaviour, and the information that can be extracted is inherently limited by the positioning of the markers or the painted areas. Other approaches use sophisticated and expensive equipment to acquire particular images, which results in costly experiments and problems for deployment and replication [6, 26, 12]. Newer computer vision tools for tracking body parts of animals 1 became less dependent on physical markers, i.e. markerless. Unfortunately, these tools still needed considerable human intervention for pre-processing and post-processing video data. Supervised deep learning approaches have recently become state-of-the-art for pose estimation and tracking of humans and animals [36, 44, 14]. 
Performance of these techniques often depends on the amount and variability of annotated data for training, which is hard to obtain for some animal species. Thus, there remains an urgent need to develop methods for tracking animal pose that require minimal human effort in training for a new animal domain and operational use. This can be achieved by reducing the need for manual pose annotation of images.

Footnote 1: [https://mousespecifics.com/digiait/](https://mousespecifics.com/digiait/)

In this paper we tackle the challenging task of predicting 2D mouse poses from unlabelled images. Different from previous deep learning approaches that generally rely on fully supervised frameworks, we adopt a self-supervised 2D pose estimator from the human domain [18]. This method utilises a GAN architecture to learn 2D human poses. During training, it assumes the availability of unlabelled images and an unpaired prior of 2D pose annotations, generally from the same dataset. Our proposal relaxes the assumptions about data much further by building the needed prior of 2D poses using data generated from a 3D model of a generic mouse [4]. Incorporating synthetic data also provides more flexibility to train the model with entirely unlabelled datasets, which is common for many animal recordings outside of computer vision. Furthermore, our method shows promising results in generating 2D poses for other types of animals, e.g. horses. This demonstrates that our approach can be rapidly deployed to different domains without the burden of annotating data.

## 2 Related Work

### Deep Learning Methods for Animal Pose Estimation

Analogous to the definition of human pose estimation [32], animal pose estimation refers to the task of estimating the geometrical configuration of body parts of an animal. This problem has gained increasing attention because of research applications in many different disciplines, including Biology, Zoology, Ecology, Biomechanics [53] and Neuroscience [37]. Compared with human pose estimation, it is still relatively under-explored, principally due to the variability of animal species and the need for species-specific labelled datasets. Nevertheless, a lot of effort has gone into developing and adapting deep learning models to estimate 2D and 3D animal pose, exploiting similarities between many species of animal. For example, monkeys [59, 40, 1] share a similar skeletal structure with humans. Large quadrupeds, such as farm animals [8, 35, 50, 48] and dogs [3, 54, 24], present similarities between their skeletal forms.

Automatic 2D pose estimation has also been applied successfully on smaller animal species such as mice. As with larger animals, deep learning methods for pose estimation have been based mostly on supervised methods developed for human pose estimation. Their performance is therefore limited by the availability and correctness of annotated data. For example, DeepLabCut (DLC) [36] adapts a pretrained ResNet with deconvolutional layers [17] to estimate the 2D pose of small animals under laboratory conditions, such as mice and flies. LEAP [44] also uses an earlier model from the human pose estimation domain [57] to solve the same task. DeepPoseKit [14] employs a similar method to estimate 2D animal pose. It uses a network architecture that improves the processing speed based on fully convolutional densenets [16, 19] and stacked hourglass modules [41]. 
More recently, OptiFlex [31] exploits the temporal information in video data by incorporating flowing convnets [45] into their network architecture. They report similar performance to previous methods [36, 44, 14] on estimating the pose of small animals, e.g. mice, fruit flies, and zebrafish. Perhaps the most popular of these approaches is DeepLabCut. Many subsequent methods adopt it to estimate not only mouse pose, but also pose for a wide variety of other animal species [52, 23, 47, 27, 58, 30, 10]. A common feature of DeepLabCut, DeepPoseKit, LEAP, and OptiFlex is their reliance on manual annotation of pose in multiple video frames. Even though they normally provide a Graphical User Interface (GUI) for doing the annotation, the process is still time consuming, error prone, and requires specialised knowledge to infer pose correctly. Furthermore, the number of frames to annotate for good generalisation is hard to predict and therefore ultimately determined empirically. In contrast, by adapting a recent self-supervised approach from the human domain, we completely remove the need for manual annotation, making training and testing more straightforward.

### Animal Pose Estimation with Synthetic Data

One alternative for avoiding manual annotation when training deep learning methods for animal pose estimation is the use of synthetic data. Using an artificial animal model allows producing many synthetic images and their corresponding annotations with less time and effort than manually annotating actual data [4]. In this context, Mu et al. [38] propose a semi-supervised pose-estimation framework trained in a supervised fashion using synthetically rendered images and ground truth pose annotations from 3D Computer-Aided Design (CAD) models. They then perform self-supervised domain adaptation with a small portion of actual data to minimise the domain gap. They successfully estimate 2D poses for large animals with similar skeletal structures, such as tigers, horses, and dogs. Some other works relying on synthetic data also focus on the domain adaptation process after learning the animal pose from synthetic data under supervised paradigms [29, 20]. We adopt a related approach to [38] by using an existing 3D geometric mouse model [4], except that we do not use rendered images as in supervised settings. We only utilise the synthetically generated 2D poses as a prior for training the method. In particular, we use this prior on 2D poses within a GAN framework that allows our whole model to learn poses not necessarily appearing in the prior, eliminating the need for domain adaptation as in [38, 29, 20].

Synthetic data also plays a significant role in learning more complex forms of 3D animal poses. For instance, inspired by the success of human shape models such as SMPL [33], [62] generates data from toy figurines of animals to learn a statistical shape model (SMAL). Later, [61] propose SMALR, an extension of the previous SMAL model that introduces a regularisation for the deformation of the animal shape to make it appear more detailed and realistic. Subsequent work [3, 49, 60] has adapted the SMAL model to work with particular animal species like dogs and zebras. In contrast to learning to fit 3D shape models from 3D scans, other approaches explore the possibility of learning 3D animal models from less complex representations, like multi-view 2D images or user-clicked 2D images [7, 13, 22]. 
However, the final shape representation of those models is less realistic and detailed than those produced using SMAL or SMALR. These methods have produced 3D shape models for various animal species, typically focused on large quadrupeds like tigers, dogs, and zebras. Unfortunately, creating sophisticated models for all animal species is still impractical. Bolanos [4] has taken inspiration from previous synthetic models of large animals to develop a similar model for mice. This 3D CAD model simulates semi-random behavioural patterns from real mice and incorporates the 3D structure of bones and joints. The model has successfully created training data for well-known supervised 2D and 3D mouse pose estimation approaches [36, 39]. Nevertheless, there is still an unexplored opportunity to utilise the same model to generate data for training pose estimation models with lower levels of supervision. We demonstrate this by relying on a recent self-supervised method that learns to estimate 2D human poses solely from unlabelled images and a prior on unpaired 2D poses. We follow the same idea, but instead of taking the unpaired pose annotations from the dataset to build the prior, we generate them with a 3D mouse model [4]. Note that we do not utilise paired synthetic images and pose annotations as in previous works [38, 29, 20, 4]; we discard the synthetic images and only use synthetic 2D poses. This means that our model is trained using actual unlabelled images and a smaller set of artificially generated 2D poses.

## 3 Method

Our method produces a mapping from full body images to the 2D pose of a mouse, as shown in Fig. 1. The pose is represented as an articulated tree structure of 2D line segments corresponding to the parts of the body such as snout, tail, hind limbs, and forelimbs. The method extends the self-supervised approach of [18], which estimates human 2D pose. This 2D pose estimator learns from unlabelled images and uses a set of unpaired 2D poses as an empirical prior. This removes any dependence on paired annotated data. However, the method requires a set of manual 2D pose annotations for a subset of images from the dataset, albeit with the pairing discarded. We adapt this approach by changing the pose topology to a mouse model. We also generate an empirical prior for 2D mouse pose by projecting from an existing 3D mouse model, which removes the need for manual pose annotation altogether.

The pose estimator is obtained by training a conditional auto-encoder to map from an image \(x\), depicting a mouse, to a reconstructed image \(x^{\prime}\) that is as similar as possible. The synthesis of the output image is conditioned on an auxiliary mouse image \(y\) depicting a fixed pose. The auto-encoder has a bottleneck that encodes the 2D pose as a set of joint positions \(v\). Once trained, our pose predictor is the initial encoder from this network, which maps from an input image to a 2D pose. This mapping is in two steps, consisting of a Convolutional Neural Network (CNN) \(\Phi\) mapping from the image \(x\) to a skeleton image \(s\), followed by a second CNN \(\eta\) mapping from the skeleton image \(s\) to the 2D pose \(v\). The decoder mapping from the 2D pose \(v\) to the output image \(x^{\prime}\) is also in two steps, consisting of a differentiable function \(\beta\) which maps the 2D pose \(v\) to a skeleton image \(s^{\prime}\), and a CNN \(\Psi\) mapping from the skeleton image \(s^{\prime}\) to the output \(x^{\prime}\). 
The second mapping \(\Psi\) takes an auxiliary image \(y\) as an additional input to compensate for the missing appearance information in \(s^{\prime}\).

Figure 1: 2D pose estimator. We use a self-supervised 2D pose estimator from the human domain, which we adapt to work with mice. In contrast to the original implementation, we build the prior of 2D poses using synthetic data from a 3D model of a generic mouse.

We train the model with a dataset of images \(\{x_{1}\cdots x_{N}\}\), depicting mice in different poses, and our empirical prior of 2D poses. We use a similar loss function as in [18], which contains three terms. The first penalises the difference between the generated image \(x^{\prime}\) and the input \(x\) via a perceptual loss. The second term is a regression loss to evaluate the mapping from skeleton image \(s\) to the 2D joint positions in \(v\). The third term is an adversarial loss to assess the authenticity of the skeleton images generated in the encoder. In the following sections we provide details on the components of the model, the empirical prior, the loss function, and training.

The whole pipeline for the conditional auto-encoder is as follows:

\[x^{\prime}=\Psi(\beta(\eta(\Phi(x))),\,y) \tag{1}\]

We can see the mapping as an autoencoder from input image \(x\) to output image \(x^{\prime}\) in which the 2D pose \(v\) emerges as an intermediate representation. In training the network, a perceptual loss [21] compares each input image \(x\) with the reconstructed image \(x^{\prime}\):

\[\mathcal{L}_{perc}=\frac{1}{N}\sum_{i=1}^{N}\|\Gamma(x^{\prime}_{i})-\Gamma(x_{i})\|_{2}^{2} \tag{2}\]

where \(\Gamma\) is a pre-trained VGG network [55] with the classification stage removed so as to utilise the final feature encoding.

A CNN serves as the discriminator network \(D\), which outputs a probability that an input skeleton image comes from the prior distribution of skeleton images. Thus, \(D\) measures the extent to which a skeleton image \(s\) looks like an authentic skeleton image from the empirical prior distribution. Note that contrary to [18], our prior \(\{\hat{v}_{j}\}_{j=1}^{M}\) is synthesised by projecting from a 3D mouse model and does not require manual annotation of poses. We obtain the skeleton images \(\{\hat{s}_{j}\}_{j=1}^{M}\) via \(\beta\), i.e. \(\{\hat{s}_{j}=\beta(\hat{v}_{j})\}_{j=1}^{M}\), and then compare this distribution \(p_{data}(\hat{s})\) with the distribution \(p_{data}(s)\) of the predicted skeleton images \(\{s_{i}=\Phi(x_{i})\}_{i=1}^{N}\) by means of the adversarial loss [34]:

\[\mathcal{L}_{D}=\frac{1}{M}\sum_{j=1}^{M}D(\hat{s}_{j})^{2}+\frac{1}{N}\sum_{i=1}^{N}(1-D(s_{i}))^{2} \tag{3}\]

Finally, we derive a loss from \(\eta\) and \(\beta\), which combines two terms as follows:

\[\mathcal{L}_{\eta}=\|\eta(\hat{s})-\hat{v}\|^{2}+\lambda\|\beta(\eta(s))-s\|^{2} \tag{4}\]

The first term uses unpaired 2D poses from the prior, while the second utilises the pose on the predicted skeleton image \(s\). This second term ensures that the network learns poses that appear in the training images but not necessarily in the prior. The balancing coefficient \(\lambda\) is set to 0.1 in our experiments.
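To make the structure of the mapping in Eq. (1) and the loss terms in Eqs. (2)-(4) concrete, the following is a minimal PyTorch-style sketch of one training step. The module names (`phi`, `eta`, `beta`, `psi`, `disc`, `vgg_features`) are hypothetical placeholders for \(\Phi\), \(\eta\), \(\beta\), \(\Psi\), \(D\), and \(\Gamma\), not the authors' implementation; only the way the terms are combined follows the text, and their sum corresponds to the overall loss defined below in Eq. (5).

```python
import torch.nn.functional as F

# Hypothetical placeholders for the paper's components:
#   phi(x)    : image -> skeleton image s          (CNN, encoder part 1)
#   eta(s)    : skeleton image -> 2D pose v        (CNN, encoder part 2)
#   beta(v)   : 2D pose -> skeleton image s'       (fixed differentiable renderer, not learned)
#   psi(s', y): skeleton image + auxiliary image -> reconstruction x'
#   disc(s)   : skeleton image -> probability it comes from the empirical prior
#   vgg_features(x): frozen VGG encoding Gamma used by the perceptual loss

def perceptual_loss(x_rec, x, vgg_features):
    # Eq. (2): squared L2 distance between VGG feature encodings of x' and x
    return F.mse_loss(vgg_features(x_rec), vgg_features(x))

def adversarial_loss(d_prior, d_pred):
    # Eq. (3), as written in the paper, on prior and predicted skeleton images
    return (d_prior ** 2).mean() + ((1.0 - d_pred) ** 2).mean()

def regression_loss(eta, beta, s_prior, v_prior, s_pred, lam=0.1):
    # Eq. (4): supervise eta on prior skeletons, and keep beta(eta(s)) close to s
    return F.mse_loss(eta(s_prior), v_prior) + lam * F.mse_loss(beta(eta(s_pred)), s_pred)

def training_step(x, y, v_prior, phi, eta, beta, psi, disc, vgg_features):
    s_pred = phi(x)                 # predicted skeleton image s
    v_pred = eta(s_pred)            # predicted 2D pose v (18 joints for the mouse)
    x_rec = psi(beta(v_pred), y)    # Eq. (1): reconstruction conditioned on auxiliary image y
    s_prior = beta(v_prior)         # skeleton image rendered from a synthetic prior pose
    # Sum of the three terms; alternating generator/discriminator updates are omitted here.
    return (perceptual_loss(x_rec, x, vgg_features)
            + adversarial_loss(disc(s_prior), disc(s_pred))
            + regression_loss(eta, beta, s_prior, v_prior, s_pred))
```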
#### 3.2.1 2D synthetic prior.

We entirely generate the 2D pose prior required for the discriminator \(D\) using synthetic data. In particular, we adopt a synthetic 3D model of a mouse [4]. This animated mouse model simulates synthetic behavioural data using animation and semi-random joint movements. We keep the original joint-constrained movements of the freely moving mouse model. We animate and render 6 the different scenes with the synthetic model and extract the 2D coordinates of 18 joints on the body of the mouse: snout, vertebral column base and end (VCB and VCE), three points located along the tail (TB, TM, and TE), left/right elbows (LE and RE), left/right knees (LK and RK), and two points (tip and top) for each of the left/right fore and hind limbs (LFP\({}^{-/+}\), RFP\({}^{-/+}\), LHP\({}^{-/+}\), RHP\({}^{-/+}\)). Note that this notation will be used throughout the paper. Finally, we use those joint positions to create their respective skeleton images, as shown in Fig. 1. Overall, our prior consists of 15,408 different 2D poses transformed into skeleton images.

Footnote 6: We use Blender to make the videos and extract the 2D poses from the mouse model.

#### 3.2.2 Training.

Following [18] we use a perceptual loss \(\mathcal{L}_{perc}\) (2), an adversarial loss \(\mathcal{L}_{D}\) (3), and a regression loss (4) in training the convolutional networks \(\Phi\), \(\eta\), and \(\Psi\). Note that \(\beta\) is not a learnable function. The overall loss \(\mathcal{L}\) is given by:

\[\mathcal{L}=\mathcal{L}_{D}+\mathcal{L}_{\eta}+\mathcal{L}_{perc} \tag{5}\]

We train the pose estimator using unlabelled images. In particular, each batch is formed by randomly sampling images \((x,y)\), and a random sample \(\hat{v}\) from the synthetic 2D poses, which is then transformed to a skeleton image \(\hat{s}\). The input images \(x\) and \(y\) were resized to \(128\times 128\) pixels. We set the batch size to 32 and use the Adam optimiser [25] with a learning rate of \(2\times 10^{-4}\), \(\beta_{1}=0.5\), and \(\beta_{2}=0.999\). Unlike [18], who use a pretrained \(\eta\), we train all the neural networks \(\Phi\), \(D\), \(\eta\), and \(\Psi\) from scratch by optimising the loss function in Equation 5. During testing, we only rely on the trained networks \(\Phi\) and \(\eta\) to map from an input image to a 2D pose. Specifically, we input the image \(x\) through \(\Phi(x)\) to obtain the skeleton image \(s\), and then use this with \(\eta(s)\) to get the final 2D pose \(v\).

## 4 Experiments

**Dataset.** Our dataset contains images from 40 videos of rodent models of ALS of different genotypes7. Each video has around \(13,120\) frames/images, with an original size of \(658\times 190\) pixels. We use half of the available videos to get the training images, and reserve the other half for evaluation purposes.

Footnote 7: All the mice appearing in the recordings were bred and maintained at the University of Tasmania.

**Acquisition details.** The recordings were made using the Digigait\({}^{\rm TM}\) apparatus, which consists of a transparent treadmill and a camera placed underneath. Mice at both 4 and 16 weeks of age were first acclimatised, then encouraged to run on the treadmill at \(10\,cm/s\), \(20\,cm/s\) and \(30\,cm/s\) for a minimum of 10 seconds. The camera captures the mice on video as they move on the treadmill. Mice were gently encouraged to run by taps to their rear by the experimenter if needed. At the end of the trial, the mice were returned to their home cage. The average duration of each video is 80 seconds, i.e. most of the mice ran for at least 20 seconds at each speed, with 10-second transitions between speeds without running.

#### 4.1 Results.

Given an unlabelled image depicting a mouse, our trained model produces a 2D representation of the mouse pose composed of 18 joint positions. 
Fig. 2 shows some of those predicted 2D poses. Since our dataset does not contain annotations for the joint positions, we manually annotated 2D poses for some images in the test videos to provide ground truth for a quantitative measure of prediction performance. We compare pose predictions with ground truth on this test set using the Mean Per Joint Position Error (MPJPE). The first row of Table 1 shows the MPJPE in pixels between the predicted positions for each of the joints composing the mouse pose and their respective ground truth annotations. MPJPE is reported w.r.t. the original image dimensions: \(658\times 190\) pixels.

Figure 2: Estimated 2D poses using our method. During training we use real images and the synthetic pose prior: **RI + SP**.

In addition to the previous experiment, we train and evaluate our model using synthetic images and synthetic unpaired poses (SI + SP). Note that the synthetic 2D poses in the prior are not annotations of the training images. We train the model with different sequences of images synthetically generated from the 3D mouse model and test it using a different set of synthetic images. We use the 2D ground truth annotations for 18 joint positions extracted from the mouse model and compare them with the predicted poses. We report the MPJPE for each joint position in the second row of Table 1 and a few visualisations of results in Fig. 3.

#### 4.1.1 DeepLabCut comparison.

In the absence of a more extensive set of annotated data for evaluating all our predictions, we also report on a quantitative comparison with the predictions from a state-of-the-art supervised method for animal pose estimation: DeepLabCut [36]. The motivation for performing this comparison is to show that our self-supervised approach can work similarly to this supervised method, removing the requirement to annotate 2D poses for training. To build the training set for DLC, we select a subset of 100 consecutive images from one video and label 18 joint positions in each one. We then use these images and their labelled 2D poses to train a DLC model in a supervised fashion. We follow the official implementation of DLC [36]. Using the trained DLC model, we then predict the pose for unseen videos. We compare the predictions of our method against the ones produced by DLC. Each estimated body joint position is represented as an \((x,y)\) pair of coordinates on the image plane. Fig. 4 summarises the results of our comparison.

Table 1: MPJPE (in pixels) of predicted poses. **RI + SP** denotes the method trained with **R**eal **I**mages and **S**ynthetic **P**rior. **SI + SP** denotes the method trained with **S**ynthetic **I**mages and **S**ynthetic **P**rior. Each cell gives the errors for the pair of joints listed in the header.

| | Snout / LE | VCB / LFP\({}^{-}\) | VCE / LFP\({}^{+}\) | TB / RK | TM / RHP\({}^{-}\) | TE / RHP\({}^{+}\) | RE / LK | RFP\({}^{-}\) / LHP\({}^{-}\) | RFP\({}^{+}\) / LHP\({}^{+}\) | **Avg.** |
|---|---|---|---|---|---|---|---|---|---|---|
| **RI + SP** | 13.4 / 12.7 | 8.0 / 10.5 | 5.6 / 14.2 | 15.2 / 7.6 | 17.8 / 21.1 | 31.8 / 11.5 | 14.7 / 14.9 | 15.8 / 11.7 | 14.8 / 11.9 | **14.1** |
| **SI + SP** | 5.9 / 5.9 | 4.0 / 6.9 | 3.0 / 7.0 | 3.7 / 4.1 | 4.3 / 5.2 | 6.2 / 6.0 | 5.9 / 4.0 | 6.6 / 5.1 | 7.1 / 5.0 | **5.3** |
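For completeness, MPJPE as reported above is the mean Euclidean distance, in pixels of the original \(658\times 190\) frames, between predicted and ground-truth joint positions. A minimal NumPy sketch follows; the array shapes are an assumption for illustration only.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error in pixels.

    pred, gt: arrays of shape (num_images, num_joints, 2) with (x, y) joint
    coordinates in the original image resolution (here 658 x 190 pixels,
    num_joints = 18 for the mouse skeleton).
    """
    per_joint = np.linalg.norm(pred - gt, axis=-1)  # (num_images, num_joints)
    return per_joint.mean(axis=0)                   # per-joint MPJPE over the test images

# The overall score (last column of Table 1) is the mean over all joints:
# overall_mpjpe = mpjpe(pred, gt).mean()
```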
Figure 3: Predicted 2D poses using the model trained on synthetic images and synthetic prior **(SI + SP)**. Images rendered from the synthetic model are shown with their respective predicted (purple dots) and ground truth (green dots) 2D poses.

Note that each graph in Fig. 4 contains our estimated positions (indicated with lines) for a given joint together with the ones estimated by DLC (indicated by dotted lines). In the inset legend, we use the label 'DLC' after the name of the joint to identify the joint positions predicted by DeepLabCut. The predictions of our method simply appear as the name of the joint. Finally, we assess quantitatively the predictions of DLC with the same ground truth that we used to evaluate predictions from the self-supervised method. Table 2 shows the MPJPE between DLC predictions and their respective ground truth. As expected, the overall MPJPE is lower for DLC. This may be explained in part by the use of supervision in training DLC, albeit on a limited dataset, and the consistency with which joint positions are manually located in producing ground truth for the training and testing images.

Table 2: MPJPE (in pixels) of poses predicted with DLC. Each cell gives the errors for the pair of joints listed in the header.

| | Snout / LE | VCB / LFP\({}^{-}\) | VCE / LFP\({}^{+}\) | TB / RK | TM / RHP\({}^{-}\) | TE / RHP\({}^{+}\) | RE / LK | RFP\({}^{-}\) / LHP\({}^{-}\) | RFP\({}^{+}\) / LHP\({}^{+}\) | **Avg.** |
|---|---|---|---|---|---|---|---|---|---|---|
| **DLC** | 4.7 / 6.5 | 16.3 / 5.7 | 18.2 / 9.8 | 4.0 / 5.6 | 7.2 / 5.3 | 20.2 / 6.1 | 7.8 / 7.2 | 5.4 / 9.0 | 5.1 / 8.3 | **8.5** |

Figure 4: Comparison of our predicted joint positions against the ones predicted by DLC. **A)** Predictions for RFP, LFP, RHP, and LHP. **B)** Predictions for snout, vertebral column, and tail. **C)** Visual comparison of predicted poses by DLC and our method. Green - ours; pink - DLC.

**Adaptation to other structures.** We demonstrate that our experimental setting, i.e. using a synthetic prior and actual data for training, is adaptable to other animal structures. We build a dataset of horse images using a combination of individual frames from YouTube videos depicting horses in motion and horse images from the TigDog dataset [9]. The 2D poses for the synthetic prior come from the synthetic horse model of [38]. We train the model using our dataset of approximately 30k horse images and a prior of 10k synthetic 2D poses. Once trained, we evaluate it utilising an out-of-distribution dataset [5] and some horse images from the videos and TigDog [9] that were excluded during training. In addition, we test our model's generalisation capacities using images depicting zebras [62]. Note that we use the same trained model for all the cases; surprisingly, the model still does well on the zebras, although the training set does not contain images of these animals. Fig. 5 shows the qualitative results of the 2D poses predicted by our trained model on different data.

## 5 Discussion and Conclusion

Supervised methods learn from annotated poses on the training data, which makes them dependent on the quality of those annotations. Although some joint positions are easy to annotate, others require domain specialists to locate them. 
Contrary to the supervised methods, our approach is not dependent on the quality of the annotations, since it learns from skeleton images generated from synthetic poses. The method produces 2D poses similar to those obtained using DLC (panel C of Fig. 4), and its quantitative performance in terms of MPJPE against ground-truth annotations is not significantly different from DLC. According to the plots in Fig. 4, despite some visible differences between our method and DLC for specific body parts, most graphs show smooth lines for our predictions. When comparing both methods against ground truth annotations, as expected, the overall performance of DLC is superior. This is probably due in part to the consistency of manual annotation of ground-truth joint locations used in both training and testing of DLC. Our experiment using synthetic images and a synthetic pose prior demonstrates that accurate predictions can be made by matching the pose prior and image domains.

Figure 5: **First row:** Predictions using images from [5]. **Second row:** Predictions using images from [9] and [62].

In conclusion, we successfully adapted a self-supervised 2D human pose estimation method to a different animal domain, replacing an empirical prior associated with actual 2D poses with a synthetic prior. We have demonstrated that the approach produces promising results compared to a state-of-the-art supervised approach in the mouse domain. An important motivation for our work has been to explore an approach that can be rapidly deployed to other animal domains without requiring extensive annotation of images. We demonstrate the latter qualitatively using a dataset of horse images. Finally, we plan to use our estimated 2D poses to measure gait in genetically modified mice with different levels of ALS disease. These measures could help to identify and classify patterns related to the development of the disease.

**Ethics statement:** This study was approved by the University of Tasmania Animal Ethics Committee (permit number A17008) and designed in accordance with the Australian Code of Practice for the Care and Use of Animals for Scientific Purposes.

**Acknowledgments** Special thanks to Rebecca Stone and Mohammed Alghamdi from the School of Computing at the University of Leeds for great discussions and insightful feedback.
2306.13048
X-ray photodesorption of complex organic molecules in protoplanetary disks -- I. Acetonitrile CH3CN
X-rays emitted from pre-main-sequence stars at the center of protoplanetary disks can induce nonthermal desorption from interstellar ices populating the cold regions. This X-ray photodesorption needs to be quantified for complex organic molecules (COMs), including acetonitrile CH3CN, which has been detected in several disks. We experimentally estimate the X-ray photodesorption yields of neutral species from pure CH3CN ices and from interstellar ice analogs for which CH3CN is mixed either in a CO- or H2O-dominated ice. The ices were irradiated at 15 K by soft X-rays (400-600 eV) from synchrotron light (SOLEIL synchrotron). X-ray photodesorption was probed in the gas phase via quadrupole mass spectrometry. X-ray photodesorption yields were derived from the mass signals and were extrapolated to higher X-ray energies for astrochemical models. X-ray photodesorption of the intact CH3CN is detected from pure CH3CN ices and from mixed 13CO:CH3CN ices, with a yield of about 5x10^(-4) molecules/photon at 560 eV. When mixed in H2O-dominated ices, X-ray photodesorption of the intact CH3CN at 560 eV is below its detection limit, which is 10^(-4) molecules/photon. Yields associated with the desorption of HCN, CH4 , and CH3 are also provided. The derived astrophysical yields significantly depend on the local conditions expected in protoplanetary disks. They vary from 10^(-4) to 10(-6) molecules/photon for the X-ray photodesorption of intact CH3CN from CO-dominated ices. Only upper limits varying from 5x10^(-5) to 5x10^(-7) molecules/photon could be derived for the X-ray photodesorption of intact CH3CN from H2O-dominated ices. X-ray photodesorption of intact CH3CN from interstellar ices might in part explain the abundances of CH3CN observed in protoplanetary disks. The desorption efficiency is expected to vary with the local physical conditions, hence with the disk region.
R. Basalgète, D. Torres-Díaz, A. Lafosse, L. Amiaud, G. Féraud, P. Jeseck, L. Philippe, X. Michaut, J. -H. Fillion, M. Bertin
2023-06-22T17:17:00Z
http://arxiv.org/abs/2306.13048v1
# X-ray photodesorption of complex organic molecules in protoplanetary disks -- I. Acetonitrile CH3CN

###### Abstract

Context: X-rays emitted from pre-main-sequence stars at the center of protoplanetary disks can induce nonthermal desorption from interstellar ices populating the cold regions of the disk. This process, known as X-ray photodesorption, needs to be quantified for complex organic molecules (COMs), including acetonitrile CH\({}_{3}\)CN, which has been detected in several disks.

Aims: The purpose of this work is to experimentally estimate the X-ray photodesorption yields of neutral species from pure CH\({}_{3}\)CN ices and from interstellar ice analogs for which CH\({}_{3}\)CN is mixed either in a CO-dominated ice or in a H\({}_{2}\)O-dominated ice.

Methods: The ices, grown in an ultrahigh vacuum chamber, were irradiated at 15 K by soft X-rays from synchrotron light (SOLEIL synchrotron) in the N K edge region (395 - 420 eV) and in the O K edge region (530 - 555 eV). X-ray photodesorption was probed in the gas phase via quadrupole mass spectrometry by monitoring the changes in the mass signals due to the X-ray irradiation of the ices. X-ray photodesorption yields were derived from the mass signals and were extrapolated to higher X-ray energies in order to provide astrophysical yields adapted to astrochemical models.

Results: X-ray photodesorption of the intact CH\({}_{3}\)CN is detected from pure CH\({}_{3}\)CN ices and from mixed \({}^{13}\)CO:CH\({}_{3}\)CN ices, with an experimental yield of about \(5\times 10^{-4}\) molecules.photon\({}^{-1}\) at 560 eV. When mixed in H\({}_{2}\)O-dominated ices, X-ray photodesorption of the intact CH\({}_{3}\)CN at 560 eV is below its detection limit, which is \(10^{-4}\) molecules.photon\({}^{-1}\). Yields associated with the desorption of HCN, CH\({}_{4}\), and CH\({}_{3}\) are also provided. The derived astrophysical yields significantly depend on the local conditions expected in protoplanetary disks, that is, on the ice composition and on the local X-ray irradiation spectrum. They vary from \(\sim 10^{-4}\) to \(\sim 10^{-6}\) molecules.photon\({}^{-1}\) for the X-ray photodesorption of intact CH\({}_{3}\)CN from CO-dominated ices. Only upper limits varying from \(\sim 5\times 10^{-5}\) to \(\sim 5\times 10^{-7}\) molecules.photon\({}^{-1}\) could be derived for the X-ray photodesorption of intact CH\({}_{3}\)CN from H\({}_{2}\)O-dominated ices.

Conclusions: X-ray photodesorption of intact CH\({}_{3}\)CN from interstellar ices might in part explain the abundances of CH\({}_{3}\)CN observed in protoplanetary disks. The desorption efficiency is expected to vary with the local physical conditions, hence with the disk region considered.

## 1 Introduction

The detection of complex organic molecules (COMs) in protoplanetary disks at the very early stages of planet formation raises the question of their role in the emergence of life in nascent planets via prebiotic chemistry. Gaseous acetonitrile CH\({}_{3}\)CN, one such COM, has been detected in several disks (Oberg, K. I. et al. 2015; Bergner et al. 2018; Loomis et al. 2018). Its formation pathways in the interstellar medium (ISM) include both gas-phase reactions and energetic or nonenergetic ice chemistry. However, disk modeling studies that include gas-phase pathways alone fail to reproduce the observed abundances of CH\({}_{3}\)CN (Oberg, K. I. et al. 2015; Loomis et al. 2018). Instead, the models suggest an ice formation route and a subsequent delivery of CH\({}_{3}\)CN to the gas phase via nonthermal desorption processes. 
In particular, it is deduced from the models that gas-phase CH\({}_{3}\)CN should be dominantly present in the upper layers of the observed disks (Oberg, K. I. et al. 2015; Loomis et al. 2018), where photons emitted from the pre-main-sequence (PMS) star irradiate the ices. It is therefore expected that photon-induced desorption, known as photodesorption, should play an important role in explaining gas-phase CH\({}_{3}\)CN in disks. As mentioned in Oberg, K. I. et al. (2015) and Loomis et al. (2018), these photodesorption processes are poorly constrained experimentally. Recently, vacuum ultraviolet (VUV) photodesorption in the 7 - 13.6 eV range of intact CH\({}_{3}\)CN from interstellar ice analogs has been experimentally demonstrated (Basalgate et al. 2021c). However, the derived photodesorption yields (\(\sim 10^{-5}\) molecules.photon\({}^{-1}\)) are two orders of magnitude lower than the yield that was used to explain the column density of the observed CH\({}_{3}\)CN by disk modeling (Loomis et al. 2018). This may indicate that nonthermal desorption processes other than VUV photodesorption could be at play in protoplanetary disks. For instance, PMS stars can be strong X-ray emitters (Gudel & Naze 2009; Testa 2010; Feigelson 2010), and laboratory astrophysics experiments conducted in recent years have shown that X-rays can induce desorption from interstellar ice analogs (Dupuy et al., 2018; Jimenez-Escobar et al., 2018; Ciaravella et al., 2020; Dupuy et al., 2021; Basalgete et al., 2021, 2021). Additionally, in a recent modeling study of Notsu et al. (2021), it has been shown that X-ray photodesorption can have a significant influence on the gas-phase abundances of water outside the water snowlines of disks. This further encourages additional experimental studies of X-ray photodesorption from interstellar ices. In this study, we experimentally quantify X-ray photodesorption of neutral species from CH\({}_{3}\)CN-containing ices. X-ray photodesorption is studied as a function of the ice composition, first from pure ices of acetonitrile, and then from interstellar ice analogs for which CH\({}_{3}\)CN is mixed in CO-dominated or H\({}_{2}\)O-dominated ices. These mixed ices serve as model ices representing different cold regions of protoplanetary disks, namely the regions outside the H\({}_{2}\)O or the CO snowlines where the surface of the ice is expected to be mainly composed of H\({}_{2}\)O or CO, respectively, but can also contain small quantities of CH\({}_{3}\)CN. We restrict the studies to ices irradiated at 15 K for different mixtures in order to understand the effect of the ice composition alone, without the effect of the ice temperature, which varies with the disk region that is considered. The studies are conducted in the soft X-ray range, on the SEXTANTS beam line of the SOLEIL synchrotron facility. Two energy ranges were selected: (1) the 395 - 420 eV range, referred to as the N K edge region, where the photoabsorption is dominated by N-bearing species, that is, CH\({}_{3}\)CN, and (2) the 525 - 560 eV range, referred to as the O K edge region, where the photoabsorption is dominated by O-bearing species, that is, H\({}_{2}\)O or CO. Consequently, selective photoexcitation of CH\({}_{3}\)CN, H\({}_{2}\)O, or CO enables us to study possible indirect desorption mechanisms that have been highlighted in previous studies (Basalgate et al., 2022). 
X-ray photodesorption yields extrapolated to the 0.4 - 10 keV range and averaged over different attenuated X-ray emission spectra of PMS stars, referred to as astrophysical yields, were derived in order to facilitate the implementation of X-ray photodesorption in astrochemical models. Section 2 describes the experimental procedure and the derivation of the yields. In Section 3 we present the results, and their astrophysical implications are discussed in Section 4. This is paper I of an experimental work dedicated to the study of the X-ray photodesorption of COMs from interstellar ice analogs. Paper II studies the X-ray photodesorption of formic acid HCOOH.

## 2 Experimental procedure

### Ice deposition, TEY, and synchrotron beam line

Experiments were conducted using the surface processes and ices (SPICES) setup. It consists of an ultrahigh vacuum (UHV) chamber with a base pressure of \(\sim 10^{-10}\) Torr, equipped with a quadrupole mass spectrometer (QMS). At the center of the chamber, a rotatable copper substrate (polycrystalline oxygen-free high-conductivity copper) is mounted on a sample holder that can be cooled down to 15 K by a closed-cycle helium cryostat. The ices are formed on the substrate by injecting gas-phase molecules into the chamber via a tube that can be positioned a few millimeters in front of the substrate surface. Different injection gas lines enable us to deposit binary mixed ices, with dilution ratios that are controlled by adjusting the partial pressure associated with each species during deposition. Isotopologs are used to facilitate the analysis of the mass spectrometer data. Pure acetonitrile \({}^{12}\)CH\({}_{3}^{12}\)C\({}^{14}\)N (99% purity, Sigma-Aldrich) and \({}^{12}\)CH\({}_{3}^{13}\)C\({}^{15}\)N (99% isotopic purity, Sigma-Aldrich) ices were deposited and irradiated at 15 K. Mixed \({}^{13}\)CO:CH\({}_{3}\)CN ices (\({}^{13}\)CO from 99% \({}^{13}\)C purity Eurisotop) were deposited and irradiated at 15 K. Mixed H\({}_{2}\)O:CH\({}_{3}\)CN ices (H\({}_{2}\)O from liquid chromatography standard Fluka) were deposited at 90 K, cooled down to 15 K, and irradiated at 15 K. This ensured that the resulting water ice was in its compact amorphous phase, referred to as compact amorphous solid water (c-ASW). The thickness of the grown ices is expressed in monolayers (ML), equivalent to a surface density of \(\sim 10^{15}\) molecules.cm\({}^{-2}\). Temperature-programmed desorption (TPD) experiments conducted prior to the presented studies enabled us to control the number of ML deposited with a precision of about 10% (see, e.g., Bertin et al. (2017) for TPD of acetonitrile).

The substrate was electrically insulated from the sample holder by a Kapton foil. This enabled the measurement of the drain current generated by the escape of electrons from the ice into the vacuum after X-ray absorption. From this current, we derived the total electron yield (TEY), expressed in electrons per incident photon (e\({}^{-}\).photon\({}^{-1}\) for simplicity) and measured as a function of the incident photon energy. The TEY is sensitive to the changes in the molecular composition near the ice surface with the ongoing irradiation, that is, with the photon fluence (expressed in photons.cm\({}^{-2}\)), and it can be regarded as a proxy for the X-ray absorption spectrum of the studied ices. The ice depth probed by the TEY measurements is estimated to be a few tens of ML based on studies of water ice (Timneanu et al., 2004) and of CO/N\({}_{2}\) ices (Basalgate et al., 2022). 
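As a point of reference, the conversion from the measured drain current to a TEY in electrons per incident photon is just a normalisation by the elementary charge and the photon flux. Below is a minimal sketch under that definition; the variable names are illustrative and this is not the acquisition code.

```python
import numpy as np

ELEMENTARY_CHARGE = 1.602176634e-19  # C

def total_electron_yield(drain_current, photon_flux):
    """TEY in electrons per incident photon.

    drain_current: substrate drain current in A, sampled over photon energy
    photon_flux:   incident photon flux in photons per second at the same energies
    """
    electrons_per_second = np.asarray(drain_current) / ELEMENTARY_CHARGE
    return electrons_per_second / np.asarray(photon_flux)
```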
X-rays from the SEXTANTS beam line of the SOLEIL synchrotron facility at Saint-Aubin, France (Sacchi et al., 2013), were routed to the UHV chamber to irradiate the grown ices. Photons in the N and O K edge regions (395 - 420 eV and 525 - 560 eV, respectively) were used with different spectral widths (namely 1.2 eV or 90 meV) and with a flux varying from \(10^{12}\) to \(10^{13}\) photons.s\({}^{-1}\); the latter was measured by a calibrated silicon photodiode mounted on the beam line. The beam was sent at a 47\({}^{\circ}\) incidence relative to the normal of the substrate surface, and the spot area at the surface was \(\sim 0.1\) cm\({}^{2}\). The calibration of the energy scale was performed similarly to what is described in Basalgate et al. (2022). Namely, in the N K edge region, a TEY was measured on a pure N\({}_{2}\) ice at 15 K, and the TEY feature corresponding to the N 1s \(\rightarrow\pi^{*}(\nu^{*}=0)\) transition of N\({}_{2}\) was set to 400.868 eV according to Chen et al. (1989). In the O K edge region, a TEY was measured on a pure CO ice at 15 K, and the TEY feature corresponding to the O 1s \(\rightarrow\pi^{*}\) transition of CO was centered at 534.4 eV according to Jugnet et al. (1984).

### Derivation of the X-ray photodesorption yields

The X-ray photodesorption of neutral species was monitored in the gas phase of the UHV chamber during the X-ray irradiation of the ices by means of the QMS, which is equipped with an electron-impact (at 70 eV) ionization stage. The desorption intensities \(I_{X}(E)\) associated with a desorbing neutral species \(X\) at a photon energy \(E\) were derived by following the m/z signals of the QMS during the X-ray irradiation. Irradiation at fixed energy for a few tens of seconds results in a sudden increase and decrease in the mass signals that is associated with X-ray photodesorption; in that case, \(I_{X}(E)\) was computed as the height of the signal increase. Examples of these QMS signals are presented in Appendix A in Figure 1 for the mass signals m/z 27 from a pure CH\({}_{3}^{12}\)C\({}^{14}\)N ice and m/z 41 from a mixed \({}^{13}\)CO:CH\({}_{3}\)CN (10:1) ice. The QMS signals can also be monitored by continuously scanning the incident photon energy, resulting in signals similar to what is displayed in Figure 10 of Appendix A. In this case, the timescale was converted into an energy scale and the background level (mass signal without irradiation) was subtracted to derive \(I_{X}(E)\). After the attribution of the m/z channels to desorbing neutral species (see Sections 3.2 and 3.3), the intensities \(I_{X}(E)\) were corrected for the fragmentation of these attributed species due to their ionization by electron impact. The fragmentation patterns were taken from the NIST database (Linstrom & Mallard 2022). The resulting intensities were then converted into X-ray photodesorption yields \(\Gamma_{X}(E)\), expressed in molecules desorbed per incident photon (simplified to molecules.photon\({}^{-1}\) in this study), using Equation 1:

\[\Gamma_{X}(E)=k_{X}\frac{I_{X}(E)}{\phi(E)}, \tag{1}\]

where \(\phi(E)\) is the photon flux at \(E\), and \(k_{X}\) is a conversion factor associated with the neutral species \(X\). The coefficient \(k_{X}\) was calibrated on N\({}_{2}\): \(k_{N_{2}}\) relates the QMS current to a calibrated number of N\({}_{2}\) molecules desorbed during TPD experiments (see Basalgate et al. (2022) for more details of the calibration procedure). 
The factor \(k_{X}\) associated with other neutral species was derived from \(k_{N_{2}}\) by taking into account (i) the relative differences in the electron-impact ionization cross sections between N\({}_{2}\) and the species \(X\) and (ii) the differences in the QMS apparatus function between m/z(N\({}_{2}\)) and m/z(X). Electron-impact ionization cross sections were taken from the literature for CH\({}_{3}\)CN (Zhou et al. 2019), HCN (Pandya et al. 2012), CH\({}_{4}\) (Tian & Vidal 1998) and CH\({}_{3}\) (Tarnovsky et al. 1996).

### Extrapolation to higher energies. Astrophysical yields

X-ray photodesorption yields were derived in the soft X-ray range (\(<600\) eV), whereas X-rays emitted from PMS stars at the center of protoplanetary disks range from 0.1 to 10 keV. We therefore derived the X-ray photodesorption yields \(\Gamma_{astro}\) for mixed ices, averaged in the 0.4 - 10 keV range, by (i) extrapolating the experimental yields \(\Gamma_{X}\) up to 10 keV and (ii) considering the X-ray emission spectrum \(\phi_{local}\) of a typical T-Tauri star (from Nomura et al. (2007)), which we attenuated by using the photoelectric cross section of gas and dust in a typical T-Tauri protoplanetary disk (from Bethell & Bergin (2011)). This resulted in the following formula:

\[\Gamma_{astro}=\frac{\int\Gamma_{X}(E)\ \phi_{local}(E)\ dE}{\int\phi_{local}(E)\ dE}. \tag{2}\]

The attenuated X-ray emission spectra depend on the column density of gas and dust traversed by the X-rays and are displayed in Figure 11 of Appendix B. The experimental yields \(\Gamma_{X}\) were extrapolated up to 10 keV by assuming that (i) the X-ray photodesorption yields follow the X-ray absorption profile of the ices, as shown in Section 3 for the N and O K edge regions, and (ii) the X-ray absorption of the ices above 560 eV follows the gas-phase core O 1s ionization cross section, which is similar for H\({}_{2}\)O and CO and was taken from Berkowitz (2002). Examples of extrapolated yields are given in Figure 11 of Appendix B. We assumed that the X-ray photodesorption yields per absorbed photon do not depend on the photon energy, as suggested in Dupuy et al. (2018) and Jimenez-Escobar et al. (2018). The values of the yields in units of absorbed photons are also provided in Section 4, based on a similar method as in Basalgate et al. (2022), assuming that up to 30 ML of the ice contribute to the desorption, and without taking the dilution of CH\({}_{3}\)CN into account. These yields can easily be extrapolated to environments other than protoplanetary disks.
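To illustrate how Eqs. (1) and (2) fit together numerically, the following is a minimal NumPy sketch of the intensity-to-yield conversion and of the spectrum-weighted average; the variable names, interpolation, and discretisation choices are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def photodesorption_yield(intensity, photon_flux, k_x):
    """Eq. (1): yield Gamma_X(E) in molecules per incident photon.

    intensity:   QMS desorption intensity I_X(E), already corrected for the
                 electron-impact fragmentation pattern of species X
    photon_flux: photon flux phi(E) in photons per second
    k_x:         conversion factor for species X (calibrated against N2)
    """
    return k_x * np.asarray(intensity) / np.asarray(photon_flux)

def astrophysical_yield(energy, yield_x, energy_star, flux_star):
    """Eq. (2): yield averaged over a local, attenuated X-ray emission spectrum.

    energy, yield_x:        experimental yield Gamma_X extrapolated up to 10 keV (eV grid)
    energy_star, flux_star: attenuated T-Tauri spectrum phi_local(E) on its own grid
    """
    gamma = np.interp(energy_star, energy, yield_x)   # put the yield on the spectrum grid
    weights = flux_star * np.gradient(energy_star)    # phi_local(E) dE (trapezoid-like weights)
    return np.sum(gamma * weights) / np.sum(weights)
```

Under assumptions (i) and (ii) above, the extrapolation beyond 560 eV amounts to scaling the measured yield by the adopted absorption cross-section profile, normalised at 560 eV, before applying the spectrum-weighted average.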
## 3 Results

### TEYs with photon fluence

The TEYs measured on the studied ices are displayed in Figure 1 for the 395 - 420 eV range. Their evolution with the photon fluence, expressed in photons.cm\({}^{-2}\), is also shown. Our TEYs of the pure CH\({}_{3}\)CN ice compare well with those of Parent et al. (2000). In this energy range, the X-ray photoabsorption is dominated by N-bearing species. As discussed in the experimental section, the ice depth probed by a TEY measurement is estimated to be a few tens of ML. The evolution of the TEY features with the photon fluence therefore provides information on the changes in the molecular composition near the ice surface, where "near the ice surface" refers to the first tens of ML of the ice.

We first focus on the TEY feature near 400 eV, which dominates the TEYs for low photon fluences and for each studied ice. It is associated with the N 1s \(\rightarrow\pi^{*}\) core transition of CH\({}_{3}\)CN in the solid phase. Its intensity decreases with the photon fluence due to the photodissociation of CH\({}_{3}\)CN. In the case of the pure CH\({}_{3}\)CN ice and the mixed \({}^{13}\)CO:CH\({}_{3}\)CN (1:1) ice (see the left panels of Figure 1), this feature still dominates the TEY at a high photon fluence (\(\sim\) 8 - 10 \(\times\) 10\({}^{16}\) photons.cm\({}^{-2}\)), meaning that CH\({}_{3}\)CN is still present near the ice surface in a significant amount at such fluences. In the case of the mixed H\({}_{2}\)O:CH\({}_{3}\)CN ices (1:1 and 10:1), the behavior of the CH\({}_{3}\)CN feature is very different, as shown in the right panels of Figure 1. Its decrease with the photon fluence is much faster than for the other studied ices, and it almost totally disappears for a photon fluence of \(\sim\) 8 \(\times\) 10\({}^{16}\) photons.cm\({}^{-2}\). This indicates that the water ice provides reactive species, for instance the OH radical, that increase the destruction kinetics of CH\({}_{3}\)CN in the H\({}_{2}\)O:CH\({}_{3}\)CN ices compared to the pure CH\({}_{3}\)CN and the \({}^{13}\)CO:CH\({}_{3}\)CN ices. This kinetic difference in the consumption of CH\({}_{3}\)CN has also been observed when irradiating pure CH\({}_{3}\)CN and mixed H\({}_{2}\)O:CH\({}_{3}\)CN ices with UV photons (with a broadband 7 - 10.2 eV hydrogen lamp) at 20 K in the study of Bulak et al. (2021). This behavior does not depend on the photon energy in our experiments because the TEYs displayed in Figure 1 were measured for ices that were irradiated both near the N and the O K edges. This is consistent with the fact that the chemistry is dominated by the secondary low-energy electrons created after X-ray absorption and does not depend on the primary photoexcitation or ionization.

Among the possible species formed during the X-ray irradiation, a new feature that appeared with the photon fluence near 401 eV suggests the accumulation of N\({}_{2}\) near the ice surface for each studied ice (at 15 K). This feature can be associated with the N 1s \(\rightarrow\pi^{*}\) core transition of N\({}_{2}\), as seen from TEY measurements of pure N\({}_{2}\) ices (Basalgate et al. 2022) and similar to K-shell photoabsorption studies of gas-phase N\({}_{2}\) (Chen et al. 1989). The absence of this feature in the TEY of the pure CH\({}_{3}\)CN ice irradiated at 90 K supports its attribution to N\({}_{2}\) formation, as N\({}_{2}\) would thermally desorb at this temperature. In the inset of the bottom right panel of Figure 1, the red curve clearly confirms this attribution: when the spectral resolution is high enough (in this case, 90 meV), the vibrational structure of the N (1s)\({}^{-1}\pi^{*}\) state of N\({}_{2}\) is resolved in the TEY, similarly to what has been observed for pure N\({}_{2}\) ice in Basalgete et al. (2022). Possible N\({}_{2}\) contamination from the UHV chamber or from the X-ray beam line that would deposit at the ice surface and significantly contribute to the measured TEY seems unlikely because the base pressure was kept at 10\({}^{-10}\) Torr during the experiments, excluding significant deposition of contaminants on the ice surface on the experimental timescale. Any possible N\({}_{2}\) contamination of the sample was searched for and ruled out, e.g., by performing TEY measurements on fresh ice at the N K edge of N\({}_{2}\), and on the bare copper substrate at low temperature. 
The solid N\({}_{2}\) TEY signal was found to be directly correlated to the photon irradiation (it depends on the fluence and photon energy) and also to the amount of condensed acetonitrile deposited for a given fluence condition. This gives us confidence that N\({}_{2}\) is indeed photoproduced from the solid CH\({}_{3}\)CN during irradiation. Surprisingly, N\({}_{2}\) is formed upon X-ray irradiation regardless of the ice composition. However, further investigations are needed to assess how its formation pathways and its formation kinetics depend on the ice composition. For the mixed ices, where CH\({}_{3}\)CN molecules are less likely to be spatially close to each other in the ice, the diffusion of N-bearing radicals and/or the formation of CH\({}_{3}\)CN clusters or islands during the ice deposition might partly explain the formation of N\({}_{2}\). For instance, Jimenez-Escobar et al. (2022) highlighted that the irradiation of interstellar ices by X-rays can induce the diffusion of species through hundreds of ML. Any photoproducts other than N\({}_{2}\) that are formed during the X-ray irradiation of each studied ice do not participate significantly in the photoabsorption in the 395 - 420 eV range, because no significant new features other than that of N\({}_{2}\) appear in the TEY with the photon fluence. It is clear, however, that we do not have the full picture of the X-ray induced chemistry in the TEY measurements. In the literature, low-energy electron irradiation of pure acetonitrile ices (Ipolyi et al. 2007; Bass et al. 2012) suggests the formation of HCN and C\({}_{2}\)H\({}_{6}\). Abdoul-Carime et al. (2022) suggested the formation of CH\({}_{3}\)OH (detected by TPD) in mixed H\({}_{2}\)O:CH\({}_{3}\)CN ice irradiated by low-energy electrons. As the CH\({}_{3}\)OH absorption features overlap with those of H\({}_{2}\)O in the TEYs near the O K edge, we cannot discuss its formation with our data set. In VUV irradiation experiments of H\({}_{2}\)O:CH\({}_{3}\)CN ices, the formation of larger COMs was reported (Bulak et al. 2021), even though VUV and X-ray photochemistry are not necessarily comparable.

Figure 1: TEYs in the N K edge region of a pure CH\({}_{3}\)CN ice at 15 K (top left panel; the inset shows the region near the N 1s \(\rightarrow\pi^{*}\) resonance for an ice irradiated at 15 K and 90 K for the lower and upper curve, respectively; these curves are shifted vertically for more clarity), of a mixed H\({}_{2}\)O:CH\({}_{3}\)CN ice irradiated at 15 K with a dilution ratio of 1:1 and 10:1 (top and bottom right panel, respectively), and of a mixed \({}^{13}\)CO:CH\({}_{3}\)CN ice irradiated at 15 K with a dilution ratio of 1:1 (bottom left panel). The photon fluence received by the ice before each TEY measurement is also displayed. The spectral width of the beam was set to 1.2 eV for all the TEY measurements, except for the one corresponding to the red curve in the bottom right panel, for a H\({}_{2}\)O:CH\({}_{3}\)CN ice having received a photon fluence of 1\(\times\)10\({}^{17}\) photons.cm\({}^{-2}\) and for which the spectral width was 90 meV. The inset in the bottom right panel zooms into the TEY measured on the H\({}_{2}\)O:CH\({}_{3}\)CN (10:1) ice for a photon fluence of 10\({}^{17}\) photons.cm\({}^{-2}\), where the vibrational structure of the core hole state of N\({}_{2}\) formed near the ice surface can be seen near 401 eV. The ices have a total thickness of \(\sim\) 100 ML. 
The TEYs measured near the O K edge for mixed \({}^{13}\)CO:CH\({}_{3}\)CN and H\({}_{2}\)O:CH\({}_{3}\)CN ices are displayed in Appendix C (Figure C.1). The observed features are similar to those of the pure H\({}_{2}\)O and pure CO ices that were studied in Dupuy et al. (2020) and Dupuy et al. (2021). The main feature for the \({}^{13}\)CO:CH\({}_{3}\)CN ice is associated with the O 1s \(\rightarrow\pi^{*}\) transition of \({}^{13}\)CO near 534.4 eV. The features associated with H\({}_{2}\)O are discussed in more detail in Dupuy et al. (2020). Significant modifications of these TEYs with the photon fluence are not observed, meaning that potential photoproducts that formed during the X-ray irradiation of the mixed ices do not significantly participate in the photoabsorption of the ices in the 530 - 555 eV range.

### X-ray photodesorption from pure CH\({}_{3}\)CN ice

Pure acetonitrile ices were irradiated at 15 K in the N K edge region (395 - 420 eV). X-ray photodesorption was detected in several mass channels of the QMS. The isotopologs CH\({}_{3}^{12}\)C\({}^{14}\)N and CH\({}_{3}^{13}\)C\({}^{15}\)N were used to attribute desorbing neutral species to the mass channels. In Figure 2 we display the variations in the desorption intensities (divided by the photon flux) with the isotopolog for pure acetonitrile ices irradiated at 15 K and at 420 eV in the ionization region of the N 1s electron. At this point, the displayed intensities are not corrected for any possible fragmentation pattern of the desorbing species. The desorption intensities of the m/z 41 and 43 from pure CH\({}_{3}^{12}\)C\({}^{14}\)N and CH\({}_{3}^{13}\)C\({}^{15}\)N ices, respectively, are similar. This confirms the X-ray photodesorption of the intact acetonitrile molecule from the studied ices. The desorption intensity of the m/z 15 does not change significantly from one isotopolog to the next, indicating the X-ray photodesorption of the methyl group CH\({}_{3}\). The possible desorption of CH\({}_{4}\) that would contribute to the m/z 15 signal due to its fragmentation at the QMS entrance can be excluded because no desorption signal on the m/z 16 was detected at 420 eV for either isotopolog. Irradiation of pure acetonitrile ices at 12 K by UV photons and 0.8 MeV protons has revealed the formation of CH\({}_{4}\) in Hudson & Moore (2004), however. For the CH\({}_{3}^{13}\)C\({}^{15}\)N ice, the desorption intensity of the m/z 29 is \(\sim 9.0\times 10^{-25}\) A.s.photon\({}^{-1}\). A similar level of signal is observed on the m/z 27 for the CH\({}_{3}^{12}\)C\({}^{14}\)N ice. This indicates the X-ray photodesorption of HCN from the pure acetonitrile ices, explaining the signals on the m/z 29 (H\({}^{13}\)C\({}^{15}\)N) for the CH\({}_{3}^{13}\)C\({}^{15}\)N ice and on the m/z 27 (H\({}^{12}\)C\({}^{14}\)N) for the CH\({}_{3}^{12}\)C\({}^{14}\)N ice. HCN formation has previously been suggested (by post-irradiation TPD experiments) in low-energy electron irradiation experiments of pure CH\({}_{3}\)CN ice at 35 K (Ipolyi et al. 2007). Additionally, low-energy electron-stimulated desorption of CH\({}_{2}^{-}\) from pure CH\({}_{3}\)CN ice at 30 K observed in the study of Bass et al. (2012) led the authors to suggest the formation of HCN after the dissociative electron attachment (DEA) of CH\({}_{3}\)CN into CH\({}_{3}^{-}\) and CN followed by H migration to CN. In our experiments, the X-ray induced chemistry is dominated by the cascade of low-energy secondary electrons after X-ray absorption.
It is therefore expected that HCN formation, as observed by Ipolyi et al. (2007) and Bass et al. (2012), and its subsequent photodesorption can occur. The attribution of the m/z 30 and 28 is not clear due to the large error bars on the m/z 28 and the fact that the m/z 30 was not recorded for the CH\({}_{3}^{12}\)C\({}^{14}\)N ice. This complicates the interpretation of these signals. Additionally, several molecules can contribute to these two mass channels. The m/z 30 observed from the CH\({}_{3}^{13}\)C\({}^{15}\)N ice could correspond to C\({}_{2}\)H\({}_{6}\) and/or \({}^{15}\)N\({}_{2}\) desorption. The desorption of C\({}_{2}\)H\({}_{6}\) is supported by the studies of Ipolyi et al. (2007) and Bass et al. (2012), where its formation was suggested to occur via reaction between CH\({}_{3}\) radicals after DEA of CH\({}_{3}\)CN into CN\({}^{-}\) and CH\({}_{3}\). The desorption of N\({}_{2}\), which would contribute to the m/z 30 for the CH\({}_{3}^{13}\)C\({}^{15}\)N ice and to the m/z 28 for the CH\({}_{3}^{12}\)C\({}^{14}\)N ice, is supported by its formation near the ice surface, as seen in our TEY data (see Figure 1). As stated in Section 3.1, blank experiments on fresh ices or on the bare copper substrate allowed us to rule out any possible N\({}_{2}\) contamination from the experimental setup or the beam line. We therefore associate any N\({}_{2}\) detection with its X-ray induced formation and subsequent desorption from the ice. The fragmentation of desorbing C\({}_{2}\)H\({}_{6}\) at the QMS entrance could also contribute to the m/z 28 observed for both isotopologs. The m/z 28 observed from the CH\({}_{3}^{13}\)C\({}^{15}\)N ice could also have a contribution from desorbing \({}^{13}\)C\({}^{15}\)N after DEA of CH\({}_{3}^{13}\)C\({}^{15}\)N, supported by the anion desorption of CH\({}_{3}^{-}\) observed by Bass et al. (2012). Finally, these contributions are too entangled for us to firmly attribute the m/z 28 and 30 signals from the pure ices. After conversion of the desorption intensities to desorption yields, we display in Table 1 the X-ray photodesorption yields at 420 eV of the identified species (CH\({}_{3}\)CN, HCN, and CH\({}_{3}\)) from our experiments on pure acetonitrile ices. The displayed yields are taken as the average over the two isotopologs.

\begin{table} \begin{tabular}{l l} \hline \hline Species & Yield \\ \hline CH\({}_{3}\)CN & \(5.2\pm 1.5\times 10^{-4}\) \\ HCN & \(2.5\pm 0.3\times 10^{-3}\) \\ CH\({}_{3}\) & \(1.3\pm 0.7\times 10^{-3}\) \\ \hline \hline \end{tabular} \end{table} Table 1: X-ray photodesorption yields in molecules desorbed per incident photon (molecules.photon\({}^{-1}\)) of CH\({}_{3}\)CN, HCN, and CH\({}_{3}\) from pure CH\({}_{3}\)CN ices irradiated at 15 K and at a photon energy of 420 eV.

Figure 2: X-ray photodesorption intensities divided by the photon flux (in A.s.photon\({}^{-1}\)) at 420 eV of desorbing species from pure acetonitrile ices. The mass channels monitored during the experiments are indicated on the X-axis. The attribution of these mass channels to desorbing neutral species is discussed in the text. The different colors are associated with a natural CH\({}_{3}^{12}\)C\({}^{14}\)N (green) or an isotopic CH\({}_{3}^{13}\)C\({}^{15}\)N (orange) ice, irradiated at 15 K. The signals were obtained for a fluence \(<2\times 10^{16}\) photons.cm\({}^{-2}\), and they are not corrected for any possible fragmentation of desorbing species in the ionization stage of our QMS.
We did not correct the HCN yield for the fragmentation of potentially desorbing C\({}_{2}\)H\({}_{6}\), which means that this yield might be overestimated (by \(\sim\)30% when we consider that the full intensity measured on the m/z 30 corresponds to C\({}_{2}\)H\({}_{6}\) desorption). The pure acetonitrile ices were also irradiated by varying the photon energy from 395 eV to 420 eV, either at fixed energies or by continuously scanning the photon energy. X-ray photodesorption yields were then derived as a function of the photon energy. The resulting photodesorption spectra are shown in Figure 3 for the desorption of CH\({}_{3}^{12}\)C\({}^{14}\)N and H\({}^{12}\)C\({}^{14}\)N from a pure CH\({}_{3}^{12}\)C\({}^{14}\)N ice irradiated at 15 K. The spectra display the same energy dependence as the TEYs, with a dominant contribution at 400 eV, corresponding to the N 1s \(\rightarrow\pi^{*}\) resonance of CH\({}_{3}\)CN. This confirms that the X-ray photodesorption is well correlated with the X-ray photoabsorption of the ice.

### X-ray photodesorption from mixed ices

X-ray irradiation experiments at 15 K in the N and O K edge regions were conducted when CH\({}_{3}\)CN was mixed in \({}^{13}\)CO or H\({}_{2}\)O ices with different dilution factors (with a total ice thickness of \(\sim\) 100 ML). In these experiments, the acetonitrile isotopolog we used was the natural one (CH\({}_{3}^{12}\)C\({}^{14}\)N). Tuning the photon energy to the N K edge region results in the dominant photoexcitation of CH\({}_{3}\)CN, whereas tuning the photon energy to the O K edge region results in the dominant photoexcitation of \({}^{13}\)CO or H\({}_{2}\)O. Examples of X-ray photodesorption spectra from the mixed ices are shown in Figure 4. In the top panels we display the X-ray photodesorption yields of the m/z 29, which we attribute to \({}^{13}\)CO desorption in the N K edge region (395 - 420 eV), and of the m/z 41, which we attribute to CH\({}_{3}\)CN desorption in the O K edge region (525 - 555 eV) from \({}^{13}\)CO:CH\({}_{3}\)CN ices. The variations in photodesorption yields with the photon energy follow those of the TEYs (displayed as dashed lines), that is, the photoabsorption spectrum of the ice. This behavior indicates an indirect photodesorption mechanism in the sense that the photodesorbing molecule is different from the photoexcited one. The top panels of Figure 4 clearly show that photoexciting CH\({}_{3}\)CN or \({}^{13}\)CO in the N and O K edge region induces the desorption of \({}^{13}\)CO and CH\({}_{3}\)CN, respectively, from the mixed \({}^{13}\)CO:CH\({}_{3}\)CN ices. Indirect desorption mechanisms induced by X-ray irradiation of ices have already been highlighted in similar experiments on methanol-containing ices (Basalgete et al. 2021a,b) and on CO/N\({}_{2}\) ices (Basalgete et al. 2022). In these studies, it was suggested that the indirect desorption is driven by the scattering of the Auger electrons and the subsequent low-energy secondary electrons toward the ice surface, following the Auger decay of the core hole excited or ionized state of the photoabsorbing molecule. This mechanism, known as X-ray induced electron stimulated desorption (XESD), was also proposed to occur for pure ices of H\({}_{2}\)O (Dupuy et al. 2018) and CO (Dupuy et al. 2021). We also expect this mechanism to explain the X-ray photodesorption of the neutral species detected in our experiments with acetonitrile-containing ices.
Other indirect mechanisms could include the codesorption of molecules at the ice surface, which is not necessarily induced by the secondary electrons, but by the fate of the photoexcited molecule after Auger decay. Additionally, the scattering of the Auger and secondary electrons induces chemistry near the ice surface. The X-ray photodesorption of masses associated with photoproducts was observed during our experiments. Some examples are shown in the bottom panels of Figure 4 for the X-ray photodesorption signals on the m/z 44 and m/z 28 from a \({}^{13}\)CO:CH\({}_{3}\)CN ice (1:1) and a H\({}_{2}\)O:CH\({}_{3}\)CN ice (1:1), respectively. The fact that the X-ray photodesorption spectra of these masses follow the TEY indicates that these photoproducts originate from the chemistry induced by the low-energy electrons. More globally, when a photodesorption signal was clearly detected during the experiments, the corresponding photodesorption spectrum was found to follow the TEY of the ice. Many mass channels displayed a desorption signal during the experiments performed on mixed ices. In order to discuss the attribution of the neutral species to these signals, we show in Figure 5 the X-ray photodesorption intensities associated with the mass channels we monitored.

Figure 3: X-ray photodesorption yields of CH\({}_{3}^{12}\)C\({}^{14}\)N and H\({}^{12}\)C\({}^{14}\)N from a pure CH\({}_{3}^{12}\)C\({}^{14}\)N ice irradiated at 15 K as a function of the incident photon energy. The solid noisy lines are associated with the desorption yield derived from a continuous irradiation from 395 to 420 eV, whereas the squares with error bars result from the desorption measurements corresponding to irradiation at fixed energies for a few tens of seconds. The TEY measured simultaneously during the continuous irradiation are shown as dashed red lines in arbitrary units.

The displayed intensities are not corrected for any possible fragmentation pattern of the desorbing species. They were derived at an energy of 560 eV, at which the photoabsorption is dominated by the core ionization of O-bearing species, with a similar cross section for \({}^{13}\)CO and H\({}_{2}\)O. For similar dilution ratios, potential differences observed in the desorption intensities at 560 eV can therefore be solely attributed to differences in the ice composition. Additionally, the intensities displayed in Figure 5 were obtained for a low photon fluence (\(<2\times 10^{16}\) photons.cm\({}^{-2}\)) in order to limit the destruction effects of CH\({}_{3}\)CN before the measurements as much as possible. This destruction is particularly efficient in H\({}_{2}\)O-dominated ices, as explained in Section 3.1. The m/z 16 and 15 signals depend on the ice composition. For the \({}^{13}\)CO-mixed ices, the m/z 16 intensity increases with the amount of \({}^{13}\)CO that is initially deposited, and for both mixtures it falls below our detection limit after correcting it for the fragmentation of desorbing \({}^{13}\)CO into atomic O at the QMS entrance. Consequently, we attribute the intensities we observed on the m/z 15 from the \({}^{13}\)CO-mixed ices to CH\({}_{3}\) desorption. For the H\({}_{2}\)O-mixed ices, the intensities observed on the m/z 16 and 15 are consistent with the desorption of CH\({}_{4}\), which should produce a similar signal on these mass channels due to its fragmentation at the QMS entrance.
After the cracking of m/z 16 (CH\({}_{4}\)) into m/z 15 (CH\({}_{3}\)) was corrected for, the intensities on the m/z 15 from the H\({}_{2}\)O-mixed ices were not high enough to consider a significant desorption of CH\({}_{3}\). We therefore conclude that the m/z 16 and 15 signals from the H\({}_{2}\)O-mixed ices are solely due to CH\({}_{4}\) desorption. The X-ray photodesorption of the m/z 28 and 30 significantly depends on the ice composition. No desorption signal was detected on the m/z 30 from the H\({}_{2}\)O:CH\({}_{3}\)CN ices. For the \({}^{13}\)CO-mixed ices, isotopic impurities present in our \({}^{13}\)C\({}^{16}\)O gas sample contribute to the desorption signals observed on the m/z 28 and 30. Mass signals on the m/z 28 and 30 were found in the mass spectrum of our \({}^{13}\)C\({}^{16}\)O gas sample, which was measured after the synchrotron experiments. These mass signals originate from \({}^{12}\)C\({}^{16}\)O and \({}^{12}\)C\({}^{18}\)O in the gas sample. It was estimated from the mass spectrum that \(\sim\) 1% of \({}^{12}\)C\({}^{16}\)O and \(\sim\) 0.5% of \({}^{12}\)C\({}^{18}\)O relative to \({}^{13}\)C\({}^{16}\)O were present in our sample. In the X-ray photodesorption experiments from the \({}^{13}\)CO-mixed ices, a significant desorption signal was detected on the m/z 29, which is attributed to the X-ray photodesorption of \({}^{13}\)C\({}^{16}\)O. The intensities measured on the m/z 29 and 30 from the \({}^{13}\)CO-mixed ices both increase from the 1:1 to the 7:1 mixtures. Moreover, the ratio of the m/z 30 intensity to that of the m/z 29 was found to be \(\sim\) 0.5% for both mixtures, which is similar to the estimated amount of \({}^{12}\)C\({}^{18}\)O impurities present in our \({}^{13}\)C\({}^{16}\)O gas sample. We therefore conclude that the desorption signal observed on the m/z 30 from the \({}^{13}\)CO-mixed ices solely originates from \({}^{12}\)C\({}^{18}\)O X-ray photodesorption due to the isotopic impurity deposited with the \({}^{13}\)C\({}^{16}\)O matrix. Unlike the m/z 30, the desorption intensities measured on the m/z 28 from the \({}^{13}\)CO-mixed ices cannot be attributed solely to the desorption of the \({}^{12}\)C\({}^{16}\)O isotopic impurity deposited in the \({}^{13}\)C\({}^{16}\)O matrix. The part of the m/z 28 desorption intensity attributed to the \({}^{12}\)C\({}^{16}\)O impurity desorption is estimated as 1% of the \({}^{13}\)C\({}^{16}\)O (m/z 29) desorption intensity, and it is displayed as the striped rectangles in Figure 5.

Figure 4: X-ray photodesorption spectra from mixed ices. Top panels: X-ray photodesorption yields of \({}^{13}\)CO near the N K edge and of CH\({}_{3}\)CN near the O K edge (squares with error bars) from a mixed \({}^{13}\)CO:CH\({}_{3}\)CN ice at 15 K, with dilution ratios of 1:1 (blue) and 7:1 (red). The yields were measured for a fresh ice at 15 K having received a photon fluence \(<1\times 10^{16}\) photons.cm\({}^{-2}\). The TEYs, measured at higher fluences (5 - \(10\times 10^{16}\) photons.cm\({}^{-2}\)), are displayed as dashed lines. Bottom panels: X-ray photodesorption spectra (blue) of the m/z 44 from a \({}^{13}\)CO:CH\({}_{3}\)CN (1:1) ice near the N K edge (left panel) and of the m/z 28 from a H\({}_{2}\)O:CH\({}_{3}\)CN (1:1) ice near the O K edge (right panel). In red we also display the TEYs. The photodesorption signals and the TEYs were measured simultaneously at 15 K. The Y-scale of the bottom panels is in arbitrary units.
A significant part of the m/z 28 intensity should therefore originate from the desorption of photoproducts. For both the \({}^{13}\)CO-mixed and H\({}_{2}\)O-mixed ices, the high desorption intensity as compared to the other mass channels indicates photoproducts with a high desorption efficiency, such as \({}^{12}\)C\({}^{16}\)O or N\({}_{2}\). Their X-ray photodesorption yields from their respective pure ices are found to be similar, \(\sim\) 0.1 molecules.photon\({}^{-1}\) (at 420 eV for N\({}_{2}\) and 560 eV for CO; Dupuy et al. 2021; Basalgete et al. 2022). The accumulation of CO near the ice surface, which should be visible in the TEY data near 534.4 eV, is not observed in the mixed H\({}_{2}\)O:CH\({}_{3}\)CN ices (see the top panel of Figure C.1), whereas N\({}_{2}\) accumulation is clearly seen in the TEY near 401 eV, even in the low-fluence regime at which the intensities were derived (see the right panels of Figure 1). This might indicate that the desorption of the m/z 28 from mixed H\({}_{2}\)O:CH\({}_{3}\)CN ices is dominated by the X-ray photodesorption of N\({}_{2}\), although a contribution of CO desorption cannot be totally ruled out. For the \({}^{13}\)CO:CH\({}_{3}\)CN ices, the accumulation of N\({}_{2}\) is also visible in the TEY near the N K edge, but the formation of \({}^{12}\)CO cannot be assessed from the TEY near the O K edge because the \({}^{13}\)CO and \({}^{12}\)CO core absorption features should be similar in that energy range. Therefore, we cannot clearly attribute desorbing species to the m/z 28 signals for the mixed ices. It is unclear why this signal increases with the dilution of CH\({}_{3}\)CN in the \({}^{13}\)CO-mixed ices but is similar in the H\({}_{2}\)O-mixed ices for both dilution ratios. The m/z 27 intensities observed for mixed \({}^{13}\)CO:CH\({}_{3}\)CN and H\({}_{2}\)O:CH\({}_{3}\)CN ices decrease when CH\({}_{3}\)CN is more diluted. This indicates the X-ray photodesorption of HCN. This decrease is somewhat consistent with the fact that this molecule might desorb following the dissociation of CH\({}_{3}\)CN at the ice surface, for example via DEA due to low-energy electrons (Bass et al. 2012). This monomolecular process should not be hindered by the surrounding \({}^{13}\)CO or H\({}_{2}\)O molecules, but the associated desorption signal should be less detectable when fewer CH\({}_{3}\)CN molecules are present at the ice surface. A lower desorption signal of HCN for the case of H\({}_{2}\)O:CH\({}_{3}\)CN ices compared to the \({}^{13}\)CO:CH\({}_{3}\)CN ices with similar dilution ratios might be due to (i) a higher desorption barrier to overcome in the case of mixed H\({}_{2}\)O:CH\({}_{3}\)CN ices, or (ii) the fact that CH\({}_{3}\)CN is more rapidly consumed, and hence present in smaller amounts at the ice surface, for the case of mixed H\({}_{2}\)O:CH\({}_{3}\)CN ices, even for the low-fluence regime in which the signals were measured. The desorption intensities of the m/z 41 are attributed to CH\({}_{3}\)CN X-ray photodesorption. In mixed \({}^{13}\)CO:CH\({}_{3}\)CN ices, this intensity decreases when CH\({}_{3}\)CN is more diluted. The signals in the H\({}_{2}\)O:CH\({}_{3}\)CN ices are lower than in the \({}^{13}\)CO:CH\({}_{3}\)CN ices for similar dilution ratios, indicating that mixing CH\({}_{3}\)CN with water tends to hinder its X-ray photodesorption.
As for the case of HCN, this could be due either to a difference in the adsorption energy of CH\({}_{3}\)CN on the water and on the \({}^{13}\)CO matrix, or to a rapid consumption of CH\({}_{3}\)CN by X-ray induced chemistry when it is mixed with water. Desorption signals on higher m/z than 41, namely 42, 43, and 44, were observed from the mixed ices. The desorbing species cannot be clearly attributed to these mass channels because they can correspond to \({}^{12}\)CNO, H\({}^{12}\)CNO, and \({}^{12}\)CO\({}_{2}\) for the m/z 42, 43, and 44, respectively, or they can originate from the fragmentation of molecules of higher mass at the QMS entrance. The relative intensity of these signals between the \({}^{13}\)CO-dominated and H\({}_{2}\)O-dominated ices indicates a different X-ray induced chemistry. H\({}^{12}\)CNO desorption from the H\({}_{2}\)O-mixed ices and further fragmentation at the QMS entrance might explain the signals observed on the m/z 42 and 43. The relative intensities of these signals are not consistent with the mass spectrum measured in Hand & Bogan (1971), but the large uncertainties make it difficult to firmly exclude HCNO. Its formation in our H\({}_{2}\)O-mixed ices is supported by the detection of its conjugated base, OCN\({}^{-}\), in 0.8 MeV proton and far-UV irradiated H\({}_{2}\)O:CH\({}_{3}\)CN ices in the study of Hudson & Moore (2004). In the presence of water, taking inspiration from the H\({}^{+}\) assisted hydrolysis of nitriles, the induced chemistry might result in the conversion of the nitrile function into an amide group R-CO-NH\({}_{2}\). Part of the m/z 44, whose global intensity is doubled in the case of the mixed H\({}_{2}\)O:CH\({}_{3}\)CN ices compared to the mixed \({}^{13}\)CO:CH\({}_{3}\)CN ices, can be attributed to the ionized carbamoyl radical [H\({}_{2}\)N-CO]\({}^{+}\), for example resulting from the cracking of acetamide CH\({}_{3}\)-CONH\({}_{2}\). Another possibility to explain the m/z 44 signal from the H\({}_{2}\)O-mixed ices would be the desorption and further fragmentation of formamide HCONH\({}_{2}\). Formamide desorption should produce a signal on the m/z 45 that is higher than those on the m/z 43 and 44 according to the NIST database (Linstrom & Mallard 2022), however. As we did not detect a desorption signal on the m/z 45 from the H\({}_{2}\)O-mixed ices, we might exclude formamide desorption even if the large uncertainties prevent us from concluding this definitively. Both acetamide and formamide have been proposed as possible photoproducts in the VUV photolysis of H\({}_{2}\)O:CH\({}_{3}\)CN ices (Bulak et al. 2021).

Figure 5: X-ray photodesorption intensities divided by the photon flux (in A.s.photon\({}^{-1}\)) at 560 eV of desorbing masses from mixed \({}^{13}\)CO:CH\({}_{3}\)CN and H\({}_{2}\)O:CH\({}_{3}\)CN ices, irradiated at 15 K. The attribution of the desorbing species to the mass channels is displayed for the m/z 41, 27, 16, and 15. For the other mass channels, this attribution depends on the ice composition. This is discussed in the text. The signals were obtained for a fluence \(<2\times 10^{16}\) photons.cm\({}^{-2}\) and are not corrected for any possible fragmentation of desorbing species in the ionization stage of our QMS. The striped rectangles displayed on the m/z 28 for the \({}^{13}\)CO-mixed ices correspond to the contribution of desorbing \({}^{12}\)C\({}^{16}\)O originating from the isotopic impurities in our \({}^{13}\)CO gas sample, as estimated in the text.
After conversion of the desorption intensities into desorption yields, we display in Table 2 the X-ray photodesorption yields at 560 eV that are associated with the identified species from the mixed ices, with dilution ratios that are the most representative of interstellar ices, that is, the higher ratios.

## 4 Astrophysical yields and discussion

The most important finding of this study is that X-ray photodesorption of intact CH\({}_{3}\)CN from interstellar ices is a possible process and might partly explain the observation of CH\({}_{3}\)CN in protoplanetary disks (Öberg et al. 2015; Bergner et al. 2018; Loomis et al. 2018). According to our experimental results, the efficiency of this process should depend on the ice composition and hence on the disk region that is considered. Namely, in regions in which CH\({}_{3}\)CN is mixed in CO-dominated ices at the ice surface, the X-ray photodesorption yield of CH\({}_{3}\)CN is expected to be higher than the one corresponding to the regions in which CH\({}_{3}\)CN is mixed in H\({}_{2}\)O-dominated ices at the ice surface. X-ray photodesorption of photofragments, for instance HCN, CH\({}_{4}\), and CH\({}_{3}\), should also enrich the gas phase of disks with these molecules. The effect of the ice temperature on the X-ray photodesorption yields remains to be studied because it varies with the regions of the disk that are considered, with H\({}_{2}\)O ices being warmer than CO ices. Although it was shown that X-ray irradiation alone can promote diffusion of species in ices (Jimenez-Escobar et al. 2022), the increase in ice temperature should also favor the diffusion of photoproducts and might influence the photodesorption. Due to the indirect desorption processes observed in our experiments, most probably mediated by the Auger scattering and the subsequent cascade of low-energy secondary electrons, the X-ray photoabsorption of any subsurface molecule in interstellar ices should induce the desorption of surface molecules. The ice depth involved in this indirect process is expected to be a few tens of ML in the soft X-ray range, based on similar X-ray experiments (Basalgete et al. 2022). In order to provide quantitative data that could be easily implemented in astrochemical models, we derive in Table 3 astrophysical yields according to the method described in Section 2, that is, by extrapolation of the experimental yields to the 0.4 - 10 keV range and by averaging them over the estimated local X-ray spectrum, which depends on the column density of gas and dust \(N_{H}\) traversed by the stellar X-rays. The X-ray emission spectrum is that of a typical T-Tauri star, taken from Nomura et al. (2007), and the attenuation cross section of gas and dust is taken from Bethell & Bergin (2011). We consider in Table 3 the case where CH\({}_{3}\)CN is diluted in a CO-dominated or H\({}_{2}\)O-dominated ice, which corresponds to the higher dilution ratios studied in our experiments (7:1 for the CO-dominated ice and 10:1 for the H\({}_{2}\)O-dominated ice). For a given ice composition, the astrophysical yields vary by two orders of magnitude, depending on the local X-ray spectrum and hence on the disk region that is considered. For regions in which hard X-rays (\(>\) 1 keV) dominate the spectrum (see Figure B.1 of Appendix B), the photodesorption yields are lowest. This is due to our extrapolation, which results in yields that are several orders of magnitude lower for hard X-rays than for soft X-rays.
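To make the averaging step concrete, the following is a minimal numerical sketch (not the code used to produce Table 3) of how a spectrum-averaged astrophysical yield can be obtained. It assumes a toy attenuated X-ray spectrum and a toy photoabsorption cross section in place of the tabulated inputs from Nomura et al. (2007) and Bethell & Bergin (2011), and it assumes that the experimental yield at 560 eV is extrapolated in proportion to the photoabsorption cross section; all values are illustrative only.

```python
import numpy as np

# Energy grid over 0.4 - 10 keV (in eV).
energy = np.linspace(400.0, 10_000.0, 5000)

# Stand-ins for the real inputs (toy shapes, for illustration only):
#  - phi:   local X-ray spectrum after attenuation by a column density N_H
#  - sigma: ice photoabsorption cross section (arbitrary units)
phi = energy**-1.5 * np.exp(-2000.0 / energy)
sigma = (energy / 560.0)**-2.7

# Experimental yield measured at 560 eV (molecules per incident photon),
# e.g., the CO:CH3CN 7:1 value for CH3CN from Table 2.
y_560, e_ref = 4.3e-4, 560.0

# Extrapolation assumption: the yield follows the photoabsorption
# cross section, normalized to the measured value at 560 eV.
yield_curve = y_560 * sigma / np.interp(e_ref, energy, sigma)

# Astrophysical yield: photon-weighted average over the local spectrum
# (uniform energy grid, so the grid spacing cancels in the ratio).
astro_yield = np.sum(yield_curve * phi) / np.sum(phi)
print(f"Spectrum-averaged yield ~ {astro_yield:.1e} molecules/photon")
```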
This extrapolation represents a non-negligible uncertainty on the yields displayed in Table 3, and additional experiments should be conducted in the hard X-ray range to estimate its accuracy. It might indeed be expected that for these energies, the X-ray photoabsorption results in the scattering of both the primary ionized 1s electron and the Auger electron toward the ice surface, inducing desorption. As the kinetic energy of the 1s electron increases with the X-ray energy, a deviation of the photodesorption yields from the photoabsorption cross section might be observed for hard X-rays. For CO-dominated ices and for a similar attenuation of the X-ray irradiation spectrum, the estimated astrophysical X-ray photodesorption yields of CH\({}_{3}\)CN are found to be of the same order of magnitude as that of another important COM in astrochemistry, methanol CH\({}_{3}\)OH (Basalgete et al. 2021b). Moreover, the X-ray photodesorption behavior of the intact COM is found to be similar between CH\({}_{3}\)CN-containing ices and CH\({}_{3}\)OH-containing ices (Basalgete et al. 2021b). The X-ray photodesorption yield of the intact COM, either CH\({}_{3}\)CN or CH\({}_{3}\)OH, is estimated to be lower in the case of H\({}_{2}\)O-dominated ices than in the case of CO-dominated ices. For both COMs, this is assumed to be due to a difference in the X-ray induced chemistry between H\({}_{2}\)O-dominated and CO-dominated ices: in the water matrix, chemical reactions between the intact COM and, most probably, the OH radical tend to increase the destruction kinetics of the COM, which competes with its intact desorption. Interestingly, experiments in the VUV range on CH\({}_{3}\)CN-containing ices and CH\({}_{3}\)OH-containing ices display a different behavior of the intact COM desorption. For CO-dominated ices, the VUV photodesorption of CH\({}_{3}\)CN (\(\sim\) 10\({}^{-5}\) molecules.photon\({}^{-1}\) at 10.5 eV, from Basalgete et al. (2021c)) is found to be at least an order of magnitude higher than that of CH\({}_{3}\)OH (only an upper limit of \(\sim\) 10\({}^{-6}\) molecules.photon\({}^{-1}\) has been derived in Bertin et al. (2016)). Additionally, the VUV photodesorption of CH\({}_{3}\)CN is found to be independent of the studied ice composition (pure CH\({}_{3}\)CN ices, CO:CH\({}_{3}\)CN ices, or H\({}_{2}\)O:CH\({}_{3}\)CN ices; Basalgete et al. (2021c)), in contrast to what is observed in the X-ray range in our study. This shows that X-ray and VUV photodesorption of COMs should not be treated similarly in astrochemical models, as very different physical-chemical mechanisms are expected to be at play for these processes. Finally, it is not straightforward to conclude on the dominant role of either VUV photons or X-rays for the photodesorption of COMs in protoplanetary disks by solely considering our experimental data.
Our X-ray yields are found to significantly depend on the disk region considered, and they are found to be either higher or lower than the VUV yields by an order of magnitude. For CH\({}_{3}\)CN, the maximum astrophysical yield derived, which is \(\sim\) 10\({}^{-4}\) molecules.photon\({}^{-1}\), is still an order of magnitude lower than what is used in the study of Loomis et al. (2018). In order to easily extrapolate our experimental results to environments other than protoplanetary disks, we also provide in Table 4 the X-ray photodesorption yields in units of absorbed photons by using the yields at 560 eV from Table 2.

\begin{table} \begin{tabular}{l l l} \hline \hline Species & CO:CH\({}_{3}\)CN - 7:1 & H\({}_{2}\)O:CH\({}_{3}\)CN - 10:1 \\ \hline CH\({}_{3}\)CN & 4.3 \(\pm\) 1.2 & \(<\) 1 \\ HCN & 8.3 \(\pm\) 2.1 & 3.1 \(\pm\) 2.1 \\ CH\({}_{4}\) & \(<\) 2 & 6.8 \(\pm\) 2.7 \\ CH\({}_{3}\) & 3.9 \(\pm\) 2.3 & \(<\) 2 \\ \hline \hline \end{tabular} \end{table} Table 2: X-ray photodesorption yields in molecules desorbed per incident photon (in 10\({}^{-4}\) molecules.photon\({}^{-1}\)) of CH\({}_{3}\)CN, HCN, CH\({}_{4}\), and CH\({}_{3}\) at 560 eV from mixed \({}^{13}\)CO:CH\({}_{3}\)CN and H\({}_{2}\)O:CH\({}_{3}\)CN ices irradiated at 15 K.

## 5 Conclusion

X-ray photodesorption of neutral species from CH\({}_{3}\)CN-containing ices was studied in the soft X-ray range in the N and O K edge regions (395 - 420 eV and 530 - 555 eV, respectively). X-ray photodesorption yields of CH\({}_{3}\)CN, HCN, CH\({}_{4}\), and CH\({}_{3}\) were derived for pure CH\({}_{3}\)CN ice, \({}^{13}\)CO:CH\({}_{3}\)CN ices, and H\({}_{2}\)O:CH\({}_{3}\)CN ices. The yields were found to depend on the photon energy and on the ice composition. Indirect desorption processes, induced by photoexcitation of either CH\({}_{3}\)CN, \({}^{13}\)CO, or H\({}_{2}\)O, and most probably mediated by the Auger scattering and the subsequent cascade of low-energy electrons, were observed. The X-ray photodesorption yield at 560 eV of the intact CH\({}_{3}\)CN was estimated to be higher by at least half an order of magnitude when CH\({}_{3}\)CN is mixed in CO-dominated ices compared to the case where it is mixed in H\({}_{2}\)O-dominated ices. X-ray photodesorption of intact CH\({}_{3}\)CN from interstellar ices might partly explain the abundances of gas-phase CH\({}_{3}\)CN observed in protoplanetary disks. The desorption efficiency depends on the local X-ray irradiation spectrum and on the ice composition and hence on the disk region that is considered. In order to facilitate the implementation of X-ray photodesorption in disk modeling, we derived astrophysical yields, averaged in the 0.4 - 10 keV range, as a function of the local conditions expected in disks. For the desorption of the intact CH\({}_{3}\)CN, these astrophysical yields vary from \(\sim\) 10\({}^{-4}\) molecules.photon\({}^{-1}\) to \(\sim\) 10\({}^{-6}\) molecules.photon\({}^{-1}\) for CO-dominated ices. Only upper limits, from \(\sim\) 5 \(\times\) 10\({}^{-5}\) molecules.photon\({}^{-1}\) to \(\sim\) 5 \(\times\) 10\({}^{-7}\) molecules.photon\({}^{-1}\), could be derived for the X-ray photodesorption of CH\({}_{3}\)CN from H\({}_{2}\)O-dominated ices.

###### Acknowledgements.

This work was carried out with financial support from the Region Ile-de-France DIM-ACAV+ program; the Sorbonne Universite "Emergence" program; the ANR PIXyES project, Grant No.
ANR-20-CE30-0018 of the French "Agence Nationale de la Recherche"; and the Programme National "Physique et Chimie du Milieu Interstellaire" (PCMI) of CNRS/INSU with INC/INP, cofunded by CEA and CNES. We would like to acknowledge SOLEIL for the provision of synchrotron radiation facilities under Project No. 20210142 and N. Jaouen, H. Popescu, and R. Gaudemer for their help on the SEXTANTS beam line.
2304.05053
Density Estimation on the Binary Hypercube using Transformed Fourier-Walsh Diagonalizations
This article focuses on estimating distribution elements over a high-dimensional binary hypercube from multivariate binary data. A popular approach to this problem, optimizing Walsh basis coefficients, is made more interpretable by an alternative representation as a "Fourier-Walsh" diagonalization. Allowing monotonic transformations of the resulting matrix elements yields a versatile binary density estimator: the main contribution of this article. It is shown that the Aitchison and Aitken kernel emerges from a constrained exponential form of this estimator, and that relaxing these constraints yields a flexible variable-weighted version of the kernel that retains positive-definiteness. Estimators within this unifying framework mix together well and span over extremes of the speed-flexibility trade-off, allowing them to serve a wide range of statistical inference and learning problems.
Arthur C. Campello
2023-04-11T08:21:54Z
http://arxiv.org/abs/2304.05053v1
# Density Estimation on the Binary Hypercube using Transformed Fourier-Walsh Diagonalizations

###### Abstract

This article focuses on estimating distribution elements over a high-dimensional binary hypercube from multivariate binary data. A popular approach to this problem, optimizing Walsh basis coefficients, is made more interpretable by an alternative representation as a "Fourier-Walsh" diagonalization. Allowing monotonic transformations of the resulting matrix elements yields a versatile binary density estimator: the main contribution of this article. It is shown that the Aitchison and Aitken kernel emerges from a constrained exponential form of this estimator, and that relaxing these constraints yields a flexible variable-weighted version of the kernel that retains positive-definiteness. Estimators within this unifying framework mix together well and span over extremes of the speed-flexibility trade-off, allowing them to serve a wide range of statistical inference and learning problems.

Keywords: density estimation, binary hypercube, Walsh basis, Aitchison and Aitken kernel, diagonalization, positive-definite kernel

## 1 Introduction

Though less intuitive than continuous data, high-dimensional binary data appear ubiquitously in modern statistical learning and artificial intelligence. In medicine, critical pieces of information ranging from ocular features [1] to drug trial data to gene expression [2] take binary forms. Other common binary data types include connectivity and node activation in social and epidemiological networks, survey data, and multivariate binary time series [3]. In machine learning and artificial intelligence (AI) applications, binary data often contain important feature information gleaned from more complex data sets. Binary features have long been used in word image retrieval [4] and are used in learning models for facial recognition [5] and fall detection [6]. Progress in drawing insights from binary data advances both areas dealing with direct and intermediate-form binary information. This article addresses the estimation of multivariate binary densities from independent observations of \(n\)-dimensional binary variables whose support is the \(\{-1,1\}^{n}\) hypercube. These densities elucidate complex dependencies across variables and inform conditional probabilities that directly serve statistical learning applications. While estimating all \(2^{n}\) probabilities over the hypercube becomes prohibitively expensive for large \(n\), small subsets can still provide strong insights about a dataset. From only two density elements, for example, one can find the expected value of a binary response variable conditional on a specific input from multiple regressor variables. In applied settings, the effectiveness of a density estimation scheme can be assessed by three metrics: speed, interpretability, and flexibility. Although these qualities often trade off, practical estimators should run in reasonable time, make inferences in understandable ways, and adapt well to datasets of varying sizes, dimensions, and sparsities. An early triumph in binary density estimation with these attributes came from Aitchison and Aitken's (AA) kernel [1]. The AA kernel measures the proximity between two binary vectors by scaling and exponentiating the number of agreeing indexes between them. For dimension \(n\) and number of observations \(N\), the approach needs only \(O(nN)\) time to estimate a density component.
Furthermore, the AA kernel function is shown to be a positive definite kernel over binary spaces, which means it is a reproducing kernel Hilbert space (RKHS) method [7] and therefore can leverage a representer theorem. The estimator has been shown to work well for sparse data [8], but suffers from a flexibility limitation because it has one smoothing parameter for all hypercube dimensions. This rigidity compromises the estimator in cases where densities depend more on agreements along some dimensions than others. An alternative approach to binary density estimation uses a weighted sum of orthogonal functions, typically Walsh functions, to find density components [9, 10]. This approach estimates coefficients of a density function's Fourier-Walsh expansion. Since \(2^{n}\) functions exist for \(n\) dimensions, optimizing every Walsh coefficient becomes impossible when \(n\) is large. At the expense of the method's high flexibility, one may estimate coefficients sparingly or in groups. A notable example of the latter involves using recursive block thresholding to find coefficients in probabilistic polynomial time under certain sparsity conditions [11]. Even at extremes of this trade-off, the Fourier-Walsh approach requires more computation than the AA kernel estimator, but proves more versatile due to its higher parameter count. While the Fourier-Walsh and AA kernel estimators appear fundamentally distinct, this article shows that a simple transformation of a restricted version of the former yields the latter. It generalizes such transformations with a guarantee of normalization, yielding a powerful binary density estimator whose parameterizations place it on various points of the speed-flexibility trade-off. The resulting estimator uses an interpretable kernel that measures similarity between two \(n\)-length binary vectors by a signed and weighted sum of \(2^{n}\) variable products acted upon by a monotonic function. Ways to enhance practical usage of Fourier-Walsh and AA kernel estimators also become apparent. By transparently matching Fourier-Walsh coefficients to corresponding variable products, the construction enables an interpretability-first approach to prioritizing Walsh coefficient optimizations. For the AA kernel, the general estimator's form elucidates an extension of the method to a more flexible dimension-weighted form without compromising its normalization or positive definiteness. The article is structured as follows: It first demonstrates an intuitive derivation of naturally-ordered Walsh matrices based on how they translate probabilities over the \(\{-1,1\}^{n}\) hypercube - mapped to a \(2^{n}\)-length probability vector - onto expectation values of binary variables and products among them. It justifies using shrinkage coefficients when estimating these expectations from data and shows the proportionality of these to Walsh coefficients. This yields a matrix diagonalization formulation of the Fourier-Walsh estimator with interpretable Walsh coefficients as eigenvalues. From here, the article presents an element-wise monotonic transformation of this diagonalization, using special Walsh matrix properties to guarantee the resulting estimator's normalization. This generalized form allows the estimator to incorporate the wide range of activation functions common in machine learning, by themselves or as mixtures. It is then shown that the exponential transformation case with restricted Walsh coefficients yields an AA kernel matrix.
Relaxing constraints on the pre-transformation Walsh coefficients introduces a variable-weighted extension of the AA kernel that retains non-negativity and positive definiteness. Following this, the article compares the times required to evaluate different leave-one-out cross-validation risk functions across variants of the general estimator. It concludes with a discussion of regimes where variants of the general estimator apply and future work invited by the presented estimation approach.

## 2 Estimation Theory

Binary density estimators take in, as inputs, observed binary data points in \(n\) dimensions assumed to be i.i.d. samples of a binary random variable \(X=(X_{1},\ldots,X_{n})\), \(X_{i}\in\{-1,1\}\). The data are used to estimate elements of the distribution of \(X\), consisting of \(2^{n}\) nonnegative probability values that sum to one. Even without calculating estimates for all density elements, an effective estimator should guarantee nonnegativity and normalization of its complete output. The naive approach to this problem simply estimates probabilities by their relative frequencies in the data. These estimates often severely overfit data and especially prove to be ineffective in high dimensions. Practical estimators instead use data to make extrapolative inferences on density elements beyond those corresponding to observations. To ensure reasonable density extrapolations, one can impose typical constraints of neutrality and symmetry. Neutrality means that the effects of new data on the density estimates are independent of existing data. Given two datasets \(D\) and \(D^{\prime}\), for example, an estimator \(\hat{f}_{X}^{D}\) with the property \[\hat{f}_{X}^{D\cup D^{\prime}}=\frac{1}{|D|+|D^{\prime}|}\left(|D|\hat{f}_{X}^{D}+|D^{\prime}|\hat{f}_{X}^{D^{\prime}}\right) \tag{1}\] adheres to neutrality. Noting that dataset \(D\) could contain a single observation, it becomes clear that this property restricts the estimator to a discrete kernel form. The symmetry constraint means that, given two distinct points in the hypercube \(\{-1,1\}^{n}\), an observation at one point should affect the density estimate at the other in the same way as in the case with the points swapped. All estimators this article presents adhere to both constraints.

### Fourier-Walsh Estimation in Matrix Form

More than a hundred years ago, Joseph L. Walsh cleverly devised a complete set of orthogonal functions \(\phi_{k}\) for \(k\in\mathbb{N}\) that yield basis functions in discrete spaces of size \(2^{n},\ n\in\mathbb{N}\) [12]. Specifically, Walsh functions allow one to represent a binary density \(f\) defined for \(x\in\{0,1\}^{n}\) as \[f(x)=\sum_{k\in\{0,1\}^{n}}c_{k}\phi_{k}(x),\qquad\phi_{k}(x)=(-1)^{\sum_{i}x_{i}k_{i}}. \tag{2}\] Here, the coefficients \(c_{k}\) act similarly to the coefficients in a Fourier series in that each encodes information pertaining to the entire distribution rather than a single element. This means that estimating only a subset of the involved \(2^{n}\) Walsh coefficients can yield meaningful densities over the entire binary hypercube. For this reason, early and recent research on binary densities involves estimation methods using the Walsh basis [11, 9, 10]. This article refers to estimators in this class as "Fourier-Walsh" estimators. Although Walsh coefficients often appear as abstract parameters of equal intrinsic importance in the literature [11], they carry interpretable information when the components of \(X\) are themselves meaningful.
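As a concrete illustration of Equation 2 (not part of the original article), the following minimal sketch, assuming only NumPy, builds a toy density on \(\{0,1\}^{3}\), recovers its Walsh coefficients by orthogonality, and verifies that the expansion reproduces the density exactly.

```python
import numpy as np
from itertools import product

n = 3
cube = list(product([0, 1], repeat=n))   # support {0,1}^n used in Eq. (2)

def phi(k, x):
    """Walsh function phi_k(x) = (-1)^(sum_i x_i k_i) from Eq. (2)."""
    return (-1) ** sum(xi * ki for xi, ki in zip(x, k))

# A toy density over the hypercube (any nonnegative, normalized vector).
rng = np.random.default_rng(0)
f = rng.random(2**n)
f /= f.sum()

# Walsh coefficients via orthogonality: c_k = 2^(-n) sum_x f(x) phi_k(x).
c = {k: sum(f[i] * phi(k, x) for i, x in enumerate(cube)) / 2**n
     for k in cube}

# Reconstruct the density from its coefficients and check Eq. (2).
f_rec = np.array([sum(c[k] * phi(k, x) for k in cube) for x in cube])
assert np.allclose(f, f_rec)
```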
Through an intuitive re-derivation of the Walsh decomposition, it is shown that each corresponds to a product of elements in a unique subset of \(\{X_{1},\ldots,X_{n}\}\) when using the binary support \(X_{i}\in\{-1,1\}\). Thus, some Walsh coefficients carry meaning closely corresponding to input features, while others carry more contrived information encoding products of possibly many features. Suppose a random vector \(\mathbf{r}_{X}^{(n)}\in\{-1,1\}^{2^{n}}\) containing the products of all \(2^{n}\) subsets (including the empty set) of \(\{X_{1},\ldots,X_{n}\}\), generated recursively as \[\mathbf{r}_{X}^{(0)}=[1],\qquad\mathbf{r}_{X}^{(n+1)}=\begin{bmatrix}\mathbf{r}_{X}^{(n)}\\ X_{n+1}\mathbf{r}_{X}^{(n)}\end{bmatrix}. \tag{3}\] From this construction, one can also recursively generate a matrix \(W^{(n)}\in\{-1,1\}^{2^{n}\times 2^{n}}\) whose columns contain the support of \(\mathbf{r}_{X}^{(n)}\). This takes the form \[W^{(0)}=[1],\qquad W^{(n+1)}=\begin{bmatrix}W^{(n)}&W^{(n)}\\ W^{(n)}&-W^{(n)}\end{bmatrix}\implies W^{(n)}=\bigotimes_{i=1}^{n}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}. \tag{4}\] Interestingly, this matrix is a naturally ordered (or Hadamard) Walsh matrix. It is symmetric and has the important property \(W^{(n)}\left[W^{(n)}\right]^{\intercal}=\left[W^{(n)}\right]^{2}=2^{n}I_{2^{n}}\). It follows from these definitions that \(\mathbb{E}\left[\mathbf{r}_{X}^{(n)}\right]=W^{(n)}\mathbf{p}\), where \[p_{j}=\mathbb{P}\left\{\mathbf{r}_{X}^{(n)}=W_{:,j}^{(n)}\right\}. \tag{5}\] Noting that the construction of \(\mathbf{r}_{X}^{(n)}\) means \(\left[\mathbf{r}_{X}^{(n)}\right]_{2^{k-1}+1}=X_{k}\), one can also write \[p_{j}=\mathbb{P}\left\{\bigcap_{k=1}^{n}\left(X_{k}=W_{2^{k-1}+1,j}^{(n)}\right)\right\}. \tag{6}\] The vector \(\mathbf{p}\) encodes the probability distribution of \(X\) over the hypercube as a vector and must satisfy the constraints \(\mathbf{1}^{\intercal}\mathbf{p}=1\) and \(\mathbf{p}\succeq\mathbf{0}\). From here forward, \(\hat{\mathbf{p}}\) refers to the estimator of \(\mathbf{p}\). This vector mapping also defines a normalized "counts" vector \(\mathbf{p}_{k}\) that encodes observed instances of \(X\). Given these definitions, \(\left[W^{(n)}\mathbf{p}_{k}\right]_{j}\) encodes the sample mean of the product \(\left[\mathbf{r}_{X}^{(n)}\right]_{j}\) and \(\left[W^{(n)}\hat{\mathbf{p}}\right]_{j}\) the expectation of \(\left[\mathbf{r}_{X}^{(n)}\right]_{j}\) associated with the estimation of the binary density. For now, let \(\left[W^{(n)}\hat{\mathbf{p}}\right]_{j}\) be equal to \(\left[W^{(n)}\mathbf{p}_{k}\right]_{j}\) multiplied by a shrinkage factor \(b_{j}\in[0,1]\). This means \(W^{(n)}\hat{\mathbf{p}}=\mathrm{diag}(\mathbf{b})W^{(n)}\mathbf{p}_{k}\), where \(\mathbf{b}\in[0,1]^{2^{n}}\) is now a shrinkage vector and produces the estimator \[\hat{\mathbf{p}}=\frac{1}{2^{n}}W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\mathbf{p}_{k}. \tag{7}\] This form requires constraints on \(\mathbf{b}\) to ensure \(\hat{\mathbf{p}}\) is a true density. Rearranging Equation 7 gives \(\mathbf{b}=\mathrm{diag}^{-1}(W^{(n)}\mathbf{p}_{k})W^{(n)}\hat{\mathbf{p}}\). Since the first row of this matrix is trivially \(\mathbf{1}^{\intercal}\), the equality means \(\mathbf{1}^{\intercal}\hat{\mathbf{p}}=1\iff b_{1}=1\). For \(\hat{\mathbf{p}}\succeq\mathbf{0}\) to hold, \(\mathbf{b}\) must be a convex combination of the columns of \(\left[\mathrm{diag}^{-1}(W^{(n)}\mathbf{p}_{k})W^{(n)}\right]\).
Geometrically, this means \(\mathbf{b}_{2:2^{n}}\) must lie inside the simplex formed by the columns of \(\left[\mathrm{diag}^{-1}(W^{(n)}\mathbf{p}_{k})W^{(n)}\right]\) with its first row removed. Equation 7 is equivalent to a Fourier-Walsh estimator with coefficients proportional to elements \(b_{j}\left[W^{(n)}\mathbf{p}_{k}\right]_{j}\). Here, the constraint \(b_{j}\in[0,1]\) is justified in addition to the aforementioned constraints on \(\mathbf{b}\). The chosen binary basis \(X_{i}\in\{-1,1\}\) means \(\left[\mathbf{r}_{X}^{(n)}\right]_{j}\in\{-1,1\}\) and \(\left[W^{(n)}\mathbf{p}_{k}\right]_{j}\) represents its mean from binary samples. Suppose an analogous univariate random variable \(Y\in\{-1,1\}\) with \(\mathbb{E}[Y]=q\) and \(N\) observed outcomes of \(Y\) with a sample mean \(\bar{y}\). If one estimates \(q\) using a factor \(b\) as \(\hat{q}=b\bar{y}\), then the square error minimizing \(b^{*}\) given a true \(q\) is \[b^{*}=\operatorname*{argmin}_{b}\mathbb{E}_{q}\left[(b\bar{y}-q)^{2}\right]=q\frac{\mathbb{E}\left[\bar{y}\right]}{\mathbb{E}\left[\bar{y}^{2}\right]}=\frac{q^{2}}{\mathbb{E}\left[\bar{y}^{2}\right]}=\frac{Nq^{2}}{(N-1)q^{2}+1}. \tag{8}\] This equation constrains \(b^{*}\in[0,1]\ \forall\ q\in[-1,1],N\in\mathbb{N}\). As expected, \(b^{*}(-1)=b^{*}(1)=1\) and \(b^{*}(0)=0\); note that the optimal \(b^{*}\) varies substantially over \(q\) even for large values of \(N\). This shrinkage technique works similarly to others in applied statistics, such as the James-Stein estimator [13] and lasso and ridge regressions. Note that the case \(\mathbf{b}=\mathbf{1}\) corresponds to no regularization and reduces Equation 7 to \(\hat{\mathbf{p}}=\mathbf{p}_{k}\), that is, the data frequency estimate. The fully regularized case of \(\mathbf{b}=[1\ \mathbf{0}^{\intercal}]^{\intercal}\) yields the uniform estimate \(\hat{\mathbf{p}}=\mathbf{1}/2^{n}\). A later section discusses using cross-validation to optimize \(\mathbf{b}\) within these extremes. The derivation of Equation 7 elucidates that \(b_{j}\) regularizes the expectation of \(\left[\mathbf{r}_{X}^{(n)}\right]_{j}\) used in the estimator. This means that \(\binom{n}{k}\) elements of \(\mathbf{b}\) - and Walsh coefficients - correspond to products of elements in subsets of \(\{X_{1},\ldots,X_{n}\}\) of size \(k\in\mathbb{N}\). Trivially, \(b_{1}\) corresponds to the empty set whose product is 1, further justifying setting \(b_{1}=1\). In applications where variables \(X_{i}\) reflect interpretable information, therefore, elements of \(\mathbf{b}\) associated with products of small subsets of \(\{X_{1},\ldots,X_{n}\}\) carry more meaning. This creates an intuition hierarchy of Walsh coefficients that can inform optimization choices when \(n\) is large. To specify this hierarchy, define \(S_{k}^{(n)}\subset\mathbb{N}\) to be the set of indexes of \(\mathbf{r}_{X}^{(n)}\) corresponding to products of \(k\) variables. From the construction of \(\mathbf{r}_{X}^{(n)}\), it follows that \[S_{0}^{(n)}=\{1\},\qquad S_{k}^{(n+1)}=S_{k}^{(n)}\cup\left\{x+2^{n}\,\Big{|}\,x\in S_{k-1}^{(n)}\right\}. \tag{9}\] Note that \(S_{1}^{(n)}=\{2^{x-1}+1\mid x\in[n]\}\), which matches the indexes of \(W^{(n)}\) elements in Equation 6. An intuition-first approach to optimizing \(\mathbf{b}\) prescribes prioritizing indexes in sets \(S_{k}^{(n)}\) of low \(k\).
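The constructions above can be made concrete with a short sketch (assuming NumPy; the helper names are illustrative and not from the article) that builds \(W^{(n)}\) as in Equation 4, forms the counts vector \(\mathbf{p}_{k}\) from data via Equations 5-6, applies the estimator of Equation 7, and enumerates the index sets \(S_{k}^{(n)}\) of Equation 9.

```python
import numpy as np
from itertools import combinations

def walsh(n: int) -> np.ndarray:
    """Naturally ordered Walsh (Hadamard) matrix W^(n), built as in Eq. (4)."""
    W = np.array([[1]])
    for _ in range(n):
        W = np.block([[W, W], [W, -W]])
    return W

def counts_vector(data: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Normalized counts p_k: each row of `data` (a point in {-1,1}^n) is
    matched to the column of W encoding it, per Eqs. (5)-(6)."""
    n = data.shape[1]
    rows = [2 ** (k - 1) for k in range(1, n + 1)]   # 0-based rows holding X_k
    p_k = np.zeros(W.shape[1])
    for x in data:
        j = np.flatnonzero((W[rows, :] == x[:, None]).all(axis=0))[0]
        p_k[j] += 1
    return p_k / len(data)

def fw_estimate(p_k: np.ndarray, b: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Fourier-Walsh estimator of Eq. (7): p_hat = W diag(b) W p_k / 2^n."""
    return W @ (b * (W @ p_k)) / W.shape[0]

def index_sets(n: int) -> dict:
    """S_k^(n): 1-based indexes of r_X^(n) holding products of k variables,
    decoded directly from the recursion of Eq. (3)."""
    return {k: {1 + sum(2 ** (i - 1) for i in c)
                for c in combinations(range(1, n + 1), k)}
            for k in range(n + 1)}

# Toy usage on random {-1,1}^3 data with mild uniform shrinkage (b_1 = 1).
rng = np.random.default_rng(1)
n = 3
data = rng.choice([-1, 1], size=(50, n))
W = walsh(n)
p_k = counts_vector(data, W)
b = np.full(2 ** n, 0.5)
b[0] = 1.0                     # b_1 = 1 guarantees normalization
p_hat = fw_estimate(p_k, b, W)
print(round(p_hat.sum(), 12), index_sets(n)[1])   # sums to 1; S_1 = {2, 3, 5}
```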
In a case where one considers any combination of more than three variables in \(\{X_{1},\ldots,X_{n}\}\) uninterpretable, for example, only \((n^{3}+5n+6)/6\) out of the \(2^{n}\) elements require optimization for an interpretable estimator.

### Monotonic Transformations of Fourier-Walsh Diagonalization Elements

At its core, the Fourier-Walsh estimator in matrix form defines a similarity metric between two points on the \(\{-1,1\}^{n}\) hypercube. For hypercube points assigned to indexes \(i\) and \(j\) in \(\mathbf{p}\), Equation 7 gives the kernel \(K_{ij}=\mathbf{b}^{\intercal}(W_{:,i}^{(n)}\odot W_{:,j}^{(n)})/2^{n}\). This intuitive kernel sums elements of \(\mathbf{b}\) with a factor \((\pm 1)\) on \(b_{k}\) depending on whether the two input points share the same \(\left[\mathbf{r}_{X}^{(n)}\right]_{k}\); it then divides this by the number of elements. In many cases, monotonic transformations of this kernel, which preserve its interpretable ordering, can enhance it. For example, transforming elements \(K_{ij}\) using a nonnegative function guarantees nonnegative density estimates without any restriction on \(\mathbf{b}\). Furthermore, some transformations allow the kernel to be positive definite without the requirement \(\mathbf{b}\succ\mathbf{0}\) as in Equation 7. Estimating binary densities using such transformed Fourier-Walsh matrices requires a guarantee of normalization. Here, a fact specific to Fourier-Walsh matrices is proven to facilitate a normalized generalization of Equation 7 with monotonically transformed elements. Since products of Walsh functions are themselves Walsh functions, the columns and rows of Walsh matrices must be closed under element-wise multiplication. Given this fact, suppose a mapping matrix \(\mathcal{M}^{(n)}\in\mathbb{N}^{2^{n}\times 2^{n}}\) where \(W_{:,\mathcal{M}_{ij}^{(n)}}^{(n)}=W_{:,i}^{(n)}\odot W_{:,j}^{(n)}\).

**Lemma 2.1**.: \(\mathcal{M}^{(n)}\) _exists and each of its rows and columns contains unique elements in \([2^{n}]\subset\mathbb{N}\)._

Proof.: In the base case \(n=0\), it is evident that \(W^{(0)}=[1]\implies\mathcal{M}^{(0)}=[1]\). From the recursive construction of \(W^{(n)}\), the following hold true for indexes \(i,j\in\{1,\ldots,2^{n}\}\): \[W_{:,i}^{(n)}\odot W_{:,j}^{(n)}=W_{:,\mathcal{M}_{ij}^{(n)}}^{(n)}\implies\left\{\begin{aligned} & W_{:,i}^{(n+1)}\odot W_{:,j}^{(n+1)}=W_{:,2^{n}+i}^{(n+1)}\odot W_{:,2^{n}+j}^{(n+1)}=W_{:,\mathcal{M}_{ij}^{(n)}}^{(n+1)}\\ & W_{:,i}^{(n+1)}\odot W_{:,2^{n}+j}^{(n+1)}=W_{:,2^{n}+i}^{(n+1)}\odot W_{:,j}^{(n+1)}=W_{:,2^{n}+\mathcal{M}_{ij}^{(n)}}^{(n+1)}\end{aligned}\right. \tag{10}\] Thus, \[\mathcal{M}^{(n+1)}=\begin{bmatrix}\mathcal{M}^{(n)}&2^{n}J_{2^{n}}+\mathcal{M}^{(n)}\\ 2^{n}J_{2^{n}}+\mathcal{M}^{(n)}&\mathcal{M}^{(n)}\end{bmatrix}, \tag{11}\] where \(J_{m}\) is the \(m\times m\) all-ones matrix. Note from Equation 11 that if every row and every column of \(\mathcal{M}^{(n)}\) contains all integers \([2^{n}]\), then \(\mathcal{M}^{(n+1)}\) will have the same property for integers \([2^{n+1}]\). Because this is true for the base case \(\mathcal{M}^{(0)}=[1]\), it holds for all \(\mathcal{M}^{(n)}\). This concludes the proof. With the mapping matrix \(\mathcal{M}^{(n)}\) defined, one can write \[\left[W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\right]_{ij}=\sum_{k=1}^{2^{n}}b_{k}W_{ik}^{(n)}W_{jk}^{(n)}=\mathbf{b}^{\intercal}\left(W_{\cdot,i}^{(n)}\odot W_{\cdot,j}^{(n)}\right)=\mathbf{b}^{\intercal}W_{\cdot,\mathcal{M}_{ij}^{(n)}}^{(n)}. \tag{12}\]
By Lemma 2.1, this means that all rows and columns of \(W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\) contain the same elements. This also applies to element-wise transformations of \(W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\). Denoting \((Q)^{f}\) to mean the element-wise action of \(f:\mathbb{R}\rightarrow\mathbb{R}\) on matrix \(Q\), this fact means \[\left(W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\right)^{f}\mathbf{1}=\left[\left(W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\right)^{f}\mathbf{1}\right]_{1}\mathbf{1}. \tag{13}\] Since \(W_{\cdot,1}=\mathbf{1}\), it follows that \[\left[\left(W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\right)^{f}\mathbf{1}\right]_{1}=\sum_{j=1}^{2^{n}}f\left(\sum_{k=1}^{2^{n}}b_{k}W_{1k}^{(n)}W_{jk}^{(n)}\right)=\left(\mathbf{b}^{\intercal}W^{(n)}\right)^{f}\mathbf{1}. \tag{14}\] This normalization factor yields the complete general-form estimator \[\hat{\mathbf{p}}=\frac{\left(W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\right)^{f}}{\left(\mathbf{b}^{\intercal}W^{(n)}\right)^{f}\mathbf{1}}\mathbf{p}_{k}. \tag{15}\] Note that \(f(x)=x\) and \(b_{1}=1\) yield \(\left(\mathbf{b}^{\intercal}W^{(n)}\right)^{f}\mathbf{1}=\mathbf{b}^{\intercal}W^{(n)}\mathbf{1}=2^{n}\), which is consistent with Equation 7. Elements of the matrix in the numerator of Equation 15 are evaluated in time proportional to the number of nonzero elements in \(\mathbf{b}\) regardless of \(f\). However, the normalization factor for nontrivial \(\mathbf{b}\) and nonlinear \(f\) typically takes \(O(n2^{n})\) time to calculate using the Fast Walsh Transform algorithm [14], regardless of the number of nonzero elements in \(\mathbf{b}\). In the cases of logistic and exponential transformations, a certain restriction on \(\mathbf{b}\) can greatly reduce the normalization times to \(O(1)\) and \(O(n)\) respectively. These improvements are referenced in Table 1. Define a vector \(\mathbf{w}\in[0,1]^{n}\) and an associated \(\mathbf{b_{w}}\) such that \(\mathbf{b_{w}}=\sum_{k=1}^{n}w_{k}\hat{e}_{2^{k-1}+1}\); this means \(\mathbf{b_{w}}\) has nonzero elements only at indexes in \(S_{1}^{(n)}\). Since all column vectors in \(\{W_{\cdot,k}^{(n)}|k\in S_{1}^{(n)}\}\) are anti-symmetric - in the sense \(\mathbf{v}=-\mathrm{flip}(\mathbf{v})\) - it follows that \(W^{(n)}\mathbf{b_{w}}\) is also anti-symmetric. From here, the property of the sigmoid/logistic function \(f(x)+f(-x)=1\) allows one to write \[f(x)=\frac{1}{1+\gamma^{-x}}\implies\mathbf{1}^{\intercal}\left(W^{(n)}\mathbf{b_{w}}\right)^{f}=\left(\mathbf{b_{w}}^{\intercal}W^{(n)}\right)^{f}\mathbf{1}=2^{n}/2, \tag{16}\] \(\forall\,\gamma\in\mathbb{R}_{+}\). This is evaluated in \(O(1)\) time. Turning to the exponential case, one notes that \[W^{(n)}\mathbf{b_{w}}=\begin{bmatrix}1\\ 1\end{bmatrix}\otimes W^{(n-1)}\left[\mathbf{b_{w}}\right]_{1:2^{n-1}}+w_{n}\begin{bmatrix}\mathbf{1}\\ -\mathbf{1}\end{bmatrix}. \tag{17}\] This means \[f(x)=\gamma^{x}\implies\left(W^{(n)}\mathbf{b_{w}}\right)^{f}=\begin{bmatrix}\gamma^{w_{n}}\\ \gamma^{-w_{n}}\end{bmatrix}\otimes\left(W^{(n-1)}\left[\mathbf{b_{w}}\right]_{1:2^{n-1}}\right)^{f}=\bigotimes_{i=0}^{n-1}\begin{bmatrix}\gamma^{w_{n-i}}\\ \gamma^{-w_{n-i}}\end{bmatrix} \tag{18}\] and, finally, \[f(x)=\gamma^{x}\implies\left(\mathbf{b_{w}}^{\intercal}W^{(n)}\right)^{f}\mathbf{1}=\prod_{i=1}^{n}\left(\gamma^{w_{i}}+\gamma^{-w_{i}}\right). \tag{19}\] Equation 19 evaluates in \(O(n)\) time.
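A minimal sketch (assuming NumPy; not the article's code) of the general estimator of Equation 15 follows, together with a numerical check that the closed-form normalization of Equation 19 matches the brute-force factor for a \(\mathbf{b_{w}}\)-type vector and an exponential transformation.

```python
import numpy as np

def walsh(n: int) -> np.ndarray:
    # Naturally ordered Walsh (Hadamard) matrix, as in Eq. (4).
    W = np.array([[1]])
    for _ in range(n):
        W = np.block([[W, W], [W, -W]])
    return W

def transformed_estimate(p_k, b, W, f):
    """General estimator of Eq. (15): element-wise transform f of the
    Fourier-Walsh diagonalization, divided by the factor (b^T W)^f 1."""
    K = f(W @ np.diag(b) @ W)      # numerator matrix (W diag(b) W)^f
    norm = f(b @ W).sum()          # normalization factor of Eq. (14)
    return (K / norm) @ p_k

n, gamma = 3, 2.0
W = walsh(n)
rng = np.random.default_rng(2)
p_k = rng.random(2 ** n)
p_k /= p_k.sum()                   # toy counts vector

# b_w-type vector: nonzero only at indexes in S_1^(n) (0-based 2^(k-1)).
w = np.array([0.9, 0.6, 0.3])
b_w = np.zeros(2 ** n)
b_w[[2 ** (k - 1) for k in range(1, n + 1)]] = w

p_hat = transformed_estimate(p_k, b_w, W, lambda x: gamma ** x)
assert np.isclose(p_hat.sum(), 1.0)          # normalization of Eq. (15)

# Closed-form O(n) normalization of Eq. (19) equals the brute-force factor.
assert np.isclose((gamma ** (b_w @ W)).sum(),
                  np.prod(gamma ** w + gamma ** -w))
```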
The powerful flexibility of kernel transformation enables this binary density estimator to employ the wide range of activation functions used in applied machine learning. These include exponential, logistic/sigmoid, step, ReLU, \(\tanh\), ELU functions, and many others. The nonnegative nature of the first four listed functions makes them especially useful in guaranteeing nonnegative density estimates. The choice among these functions can depend on cross-validation performance, the context of the application, and the desired evaluation speed. An additional benefit of the presented matrix formulation is that convex combinations of normalized matrices may also be used, allowing mixtures of transformed kernel estimators. Such a mixed estimator would take the form \[\hat{\mathbf{p}}=\left[\sum_{i=1}^{m}c_{i}\frac{\left(W^{(n)}\mathrm{diag}(\mathbf{b}_{i})W^{(n)}\right)^{f_{i}}}{\left(\mathbf{b}_{i}^{\intercal}W^{(n)}\right)^{f_{i}}\mathbf{1}}\right]\mathbf{p}_{k}, \tag{20}\] where \(\sum_{i=1}^{m}c_{i}=1\), \(c_{i}>0\). ### Aitchison Aitken Kernel from Exponential Fourier-Walsh Matrix This section demonstrates that using an exponential function in Equation 15 with \(\mathbf{b}=\mathbf{b_{w}}\) type restrictions yields the AA kernel estimator; this is despite the seemingly fundamental differences between the AA and Fourier-Walsh approaches to estimation. The AA kernel gives a similarity metric between two \(n\)-length binary vectors \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) with parameter \(\lambda\in[1/2,1]\) as \[K(\mathbf{x}_{i},\mathbf{x}_{j};\lambda)=\lambda^{n-d(\mathbf{x}_{i},\mathbf{x}_{j})}(1-\lambda)^{d(\mathbf{x}_{i},\mathbf{x}_{j})},\] where \(d(\mathbf{x}_{i},\mathbf{x}_{j})\) gives the number of elements where \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) differ. This kernel guarantees normalization and nonnegative density estimation [1]. Supposing that \(\mathbf{x}_{i},\mathbf{x}_{j}\in\{-1,1\}^{n}\), one can write \[\mathbf{x}_{i}^{\intercal}\mathbf{x}_{j}=[n-d(\mathbf{x}_{i},\mathbf{x}_{j})]-d(\mathbf{x}_{i},\mathbf{x}_{j})\implies d(\mathbf{x}_{i},\mathbf{x}_{j})=\frac{n-\mathbf{x}_{i}^{\intercal}\mathbf{x}_{j}}{2},\quad n-d(\mathbf{x}_{i},\mathbf{x}_{j})=\frac{n+\mathbf{x}_{i}^{\intercal}\mathbf{x}_{j}}{2}. \tag{21}\] The AA kernel equation now becomes \[K(\mathbf{x}_{i},\mathbf{x}_{j};\lambda)=\sqrt{\lambda}^{n+\mathbf{x}_{i}^{\intercal}\mathbf{x}_{j}}\sqrt{1-\lambda}^{n-\mathbf{x}_{i}^{\intercal}\mathbf{x}_{j}}=\sqrt{\lambda(1-\lambda)}^{n}\sqrt{\frac{\lambda}{1-\lambda}}^{\mathbf{x}_{i}^{\intercal}\mathbf{x}_{j}}. \tag{22}\] Using the binary hypercube to \(2^{n}\)-vector mapping implicit in Equation 6, one can write \(\mathbf{x}_{i}^{\intercal}\mathbf{x}_{j}=\left[W^{(n)}\mathrm{diag}(\mathbf{b}_{\mathbf{1}})W^{(n)}\right]_{ij}\), where \(\mathbf{b_{1}}\) corresponds to \(\mathbf{b_{w}}\) in the case \(\mathbf{w}=\mathbf{1}\). Using Equation 19, one notes that \[f(x)=\sqrt{\frac{\lambda}{1-\lambda}}^{x}\implies\left[\left(\mathbf{b}_{\mathbf{1}}^{\intercal}W^{(n)}\right)^{f}\mathbf{1}\right]^{-1}=\left(\sqrt{\frac{\lambda}{1-\lambda}}+\sqrt{\frac{1-\lambda}{\lambda}}\right)^{-n}=\sqrt{\lambda(1-\lambda)}^{n}, \tag{23}\] in agreement with the AA kernel normalization in Equation 22.
Therefore, the AA kernel estimator is equivalent to \[\hat{\mathbf{p}}_{\mathrm{AAK}}=\frac{\left(W^{(n)}\mathrm{diag}(\mathbf{b_{ \mathbf{1}}})W^{(n)}\right)^{f}}{\left(\mathbf{b}_{\mathbf{1}}^{\intercal}W^ {(n)}\right)^{f}\mathbf{1}}\mathbf{p}_{k},\qquad f(x)=\sqrt{\frac{\lambda}{1 -\lambda}}^{x}, \tag{24}\] a restricted form of the estimator in Equation 15. Given this equation, one may relax \(\mathbf{b}\) from \(\mathbf{b_{1}}\) to \(\mathbf{b_{w}}\) to yield a variable-weighted version of the AA kernel. This allows the elements of \(\mathbf{w}\) to parameterize different "smoothing" levels along the \(n\) dimensions of the hypercube. A variable-weighted kernel reflects realistic cases where similarities of two points in some indexes matter more to overall similarity than similarities in other indexes. For convenience, a reparameterization \(\gamma=\sqrt{\lambda/(1-\lambda)}\) is introduced. The bounds \(\lambda\in[1/2,1]\) correspond to \(\gamma\in[1,\infty)\). Letting \(\mathbf{b_{1}}\rightarrow\mathbf{b_{w}}\) yields the weighted AA kernel estimator \[\hat{\mathbf{p}}_{\mathrm{WAAK}}=\frac{\left(W^{(n)}\mathrm{diag}(\mathbf{b_{ w}})W^{(n)}\right)^{f}}{\left(\mathbf{b_{w}^{\intercal}W^{(n)}}\right)^{f} \mathbf{1}}\mathbf{p}_{k},\qquad f(x)=\gamma^{x}. \tag{25}\] It is now shown that one may write the numerator of Equation 25 as a series of Kronecker products, easing computation and elucidating certain properties about the kernel's positive definiteness. Noting that the product \(W^{(n-1)}\mathrm{diag}\left([a\ \mathbf{0}^{\intercal}]\right)W^{(n-1)}\) gives \(aJ_{2^{n-1}}\), the recursive form of \(W^{(n)}\) means \[W^{(n)}\mathrm{diag}(\mathbf{b_{w}})W^{(n)}=\begin{bmatrix}1&1\\ 1&1\end{bmatrix}\otimes W^{(n-1)}\mathrm{diag}\left([\mathbf{b_{w}}]_{1:2^{n-1 }}\right)W^{(n-1)}+\begin{bmatrix}1&-1\\ -1&1\end{bmatrix}w_{n}J_{2^{n-1}}. \tag{26}\] This means \[f(x)=\gamma^{x}\implies\left(W^{(n)}\mathrm{diag}(\mathbf{b_{w}})W^{(n)}\right)^{ f}=\begin{bmatrix}\gamma^{w_{n}}&\gamma^{-w_{n}}\\ \gamma^{-w_{n}}&\gamma^{w_{n}}\end{bmatrix}\otimes\left(W^{(n-1)}\mathrm{diag}([ \mathbf{b_{w}}]_{1:2^{n-1}})W^{(n-1)}\right)^{f}. \tag{27}\] Continuing the recursion, the complete weighted AA kernel estimator can be written as \[\hat{\mathbf{p}}_{\mathrm{WAAK}}=\left[\prod_{j=1}^{n}\left(\gamma^{w_{j}}+ \gamma^{-w_{j}}\right)\right]^{-1}\bigotimes_{i=0}^{n-1}\begin{bmatrix}\gamma^ {w_{n-i}}&\gamma^{-w_{n-i}}\\ \gamma^{-w_{n-i}}&\gamma^{w_{n-i}}\end{bmatrix}\mathbf{p}_{k}. \tag{28}\] In this form, elements of the weighted AA kernel matrix can be evaluated in \(O(n)\) time. Note also that since a \(2\times 2\) matrix with on-diagonal elements \(\gamma^{w}\) and off diagonal elements \(\gamma^{-w}\) has eigenvalues \(\gamma^{w}-\gamma^{-w}\) and \(\gamma^{w}+\gamma^{-w}\), the matrices in the Kronecker product in Equation 28 are all positive definite when \(\mathbf{w}\succ\mathbf{0}\) and \(\gamma>1\). Since a Kronecker product of two positive definite matrices is also positive definite, it follows that the weighted AA kernel estimator uses a positive definite kernel function. This analysis rederives the known result that the AA kernel is positive definite [7] and extends this property to its variable-weighted generalization. ## 3 Cross Validation The flexibility of the presented general-form estimator is driven by its variable, and possibly high, number of parameters. These include elements of \(\mathbf{b}\), parameters defining \(f\), and weights \(c_{i}\) applied to matrices when mixing estimators. 
Practical optimization of these parameters typically involves a cross-validation scheme. This section outlines the common leave-one-out approach to cross-validation using the squared error (SE) and Kullback-Leibler (KL) loss functions. It specifically focuses on risk function evaluation times across variants of the general-form estimator. For the optimizations described, suppose a "true" probability distribution \(\mathbf{p}\) and the estimator \(\hat{\mathbf{p}}_{\lambda}\) parameterized by \(\lambda\). The optimal \(\lambda^{*}\) minimizes the expectation of a loss function \(L(\mathbf{p},\hat{\mathbf{p}}_{\lambda})\), i.e., the risk. The SE and KL loss functions are respectively defined as \[L_{\mathrm{SE}}(\mathbf{p},\hat{\mathbf{p}}_{\lambda})=||\hat{\mathbf{p}}_{\lambda}-\mathbf{p}||_{2}^{2},\qquad L_{\mathrm{KL}}(\mathbf{p},\hat{\mathbf{p}}_{\lambda})=\sum_{j=1}^{2^{n}}\mathbf{p}_{j}\log\left(\mathbf{p}_{j}\big/\left[\hat{\mathbf{p}}_{\lambda}\right]_{j}\right). \tag{29}\] To implement the leave-one-out technique, let \(K\) be a multiset over \([2^{n}]\) encoding the indexes of observations. Also, define \(\hat{\mathbf{p}}_{\lambda}^{(k)}\) as the estimate made without an observed data point corresponding to \(k\in K\). By the law of the unconscious statistician, \[\mathbb{E}\left[\hat{\mathbf{p}}_{\lambda}^{\mathsf{T}}\mathbf{p}\right]=\frac{1}{|K|}\sum_{k\in K}\mathbb{E}\left[\left(\hat{\mathbf{p}}_{\lambda}^{(k)}\right)^{\mathsf{T}}\hat{e}_{k}\right]\implies\arg\min_{\lambda}\mathbb{E}\left[L_{\mathrm{SE}}(\mathbf{p},\hat{\mathbf{p}}_{\lambda})\right]=\arg\min_{\lambda}\left[\hat{\mathbf{p}}_{\lambda}^{\mathsf{T}}\hat{\mathbf{p}}_{\lambda}-\frac{2}{|K|}\sum_{k\in K}\left(\hat{\mathbf{p}}_{\lambda}^{(k)}\right)^{\mathsf{T}}\hat{e}_{k}\right]. \tag{30}\] For the KL loss, the cross-validation optimization is given by \[\arg\min_{\lambda}\mathbb{E}\left[L_{\mathrm{KL}}(\mathbf{p},\hat{\mathbf{p}}_{\lambda})\right]=\arg\max_{\lambda}\sum_{k\in K}\log\left[\left(\hat{\mathbf{p}}_{\lambda}^{(k)}\right)^{\mathsf{T}}\hat{e}_{k}\right], \tag{31}\] using a discrete adaptation of the known KL cross-validation optimizer for density estimation over continuous spaces [15]. One can now impose the familiar form \(\hat{\mathbf{p}}_{\lambda}=Q_{\lambda}\mathbf{p}_{k}=Q_{\lambda}\left[(1/|K|)\sum_{k\in K}\hat{e}_{k}\right]\) for some generic symmetric estimator matrix \(Q_{\lambda}\in\mathbb{R}^{2^{n}\times 2^{n}}\). This means that evaluating \(\left(\hat{\mathbf{p}}_{\lambda}^{(k)}\right)^{\mathsf{T}}\hat{e}_{k}\) involves knowing \(|K|-1\) elements of \(Q_{\lambda}\). Therefore, the computation of the KL risk function requires the evaluation of \(|K|(|K|-1)/2\) elements of \(Q_{\lambda}\). In the SE risk case, the second term also involves knowing \(|K|(|K|-1)/2\) elements of \(Q_{\lambda}\), while the first requires evaluating \(|K|(|K|+1)/2\) elements of \(Q_{\lambda}^{2}\). Squared-matrix elements are particularly simple to calculate for numerators of Equations 7 and 28 since \[\left(W^{(n)}\mathrm{diag}(\mathbf{b})W^{(n)}\right)^{2}=2^{n}\left(W^{(n)}\mathrm{diag}(\mathbf{b}\odot\mathbf{b})W^{(n)}\right) \tag{32}\] and \[f(x)=\gamma^{x}\implies\left[\left(W^{(n)}\mathrm{diag}(\mathbf{b}_{\mathbf{w}})W^{(n)}\right)^{f}\right]^{2}=\bigotimes_{i=0}^{n-1}\begin{bmatrix}\gamma^{2w_{n-i}}+\gamma^{-2w_{n-i}}&2\\ 2&\gamma^{2w_{n-i}}+\gamma^{-2w_{n-i}}\end{bmatrix}, \tag{33}\] by the Kronecker mixed product property.
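The following Python sketch shows how the pieces above can fit together in practice: the Kronecker form of Equation 28 supplies the estimator matrix, and the KL objective of Equation 31 scores candidate smoothing parameters. The observation indexes, the grid of \(\gamma\) values, and the choice \(\mathbf{w}=\mathbf{1}\) are hypothetical, and the sketch favors clarity over the \(O(n)\) per-element evaluation discussed above.

```python
import numpy as np
from functools import reduce

def weighted_aa_kernel(w, gamma):
    """Normalized weighted AA kernel matrix (the Kronecker form of Eq. 28)."""
    blocks = [np.array([[gamma ** wi, gamma ** -wi],
                        [gamma ** -wi, gamma ** wi]]) for wi in w[::-1]]
    K = reduce(np.kron, blocks)
    Z = np.prod([gamma ** wi + gamma ** -wi for wi in w])
    return K / Z

def loo_kl_objective(obs, Q):
    """Leave-one-out KL objective of Eq. 31 for observed hypercube indexes."""
    counts = np.bincount(obs, minlength=Q.shape[0]).astype(float)
    total = 0.0
    for k in obs:
        p_minus = counts.copy()
        p_minus[k] -= 1.0                   # drop the held-out observation
        p_minus /= len(obs) - 1
        total += np.log(Q[k] @ p_minus)     # (p_hat^(k))^T e_k, with Q symmetric
    return total

# Hypothetical observations on the n = 4 hypercube; gamma stands in for lambda
# through the reparameterization gamma = sqrt(lambda / (1 - lambda)).
rng = np.random.default_rng(1)
n = 4
obs = rng.integers(0, 2 ** n, size=60)
w = np.ones(n)                              # w = 1 recovers the plain AA kernel

scores = {g: loo_kl_objective(obs, weighted_aa_kernel(w, g))
          for g in (1.1, 1.5, 2.0, 3.0, 5.0)}
print(max(scores, key=scores.get))          # gamma selected by cross-validation
```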
This section concludes with a tabulation of the computation times required to normalize and evaluate matrix and squared-matrix elements of useful estimators derived from restrictions of Equation 15. Table 1 shows these computation times, where \(b\) gives the number of nonzero (or non-constant) elements of \(\mathbf{b}\) and the fast Walsh transform [14] is used for listed operations that take time \(O(n2^{n})\). ## 4 Summary and Discussion This article presents a powerful binary density estimator built on element-wise monotonic transformations of Fourier-Walsh diagonalizations. To accomplish this, the article first provides an intuitive rederivation of the Fourier-Walsh decomposition in the form of a diagonalization. In this form, Walsh coefficients are shown to relate to unique products of constituent univariate binary variables. A specific property of Walsh matrices is then shown that enables normalization of an estimator arising from any element-wise transformation function. It is then elucidated how the AA kernel arises from the above process with a generic exponential transformation, and a variable-weighted extension of the kernel is introduced that retains its desirable properties. Finally, the implementations of leave-one-out cross-validation risk functions are outlined for squared error and Kullback-Leibler loss functions and their computation times are compared across estimators. The flexibility, speed, and interpretability of this new estimator under different constraints make it an ideal candidate for use in a wide range of estimation and learning applications. The comparison made in Table 1 shows that variants of the proposed estimator under different constraints serve best in different regimes of data science. For problems of up to \(n\approx 20\) dimensions - i.e. binary inputs - computers can safely handle \(O(n2^{n})\) time operations and the estimator in Equation 15 may apply in its most general form. Learning in this setting could involve iterating over a large number of transformation functions and exploring mixtures of several different estimation matrices. Problems of approximately 20 dimensions and 40 data points have been cited as typical in applied binary density estimation [8]. At the other extreme, one could consider a high-dimensional case with \(n\) up to approximately \(10^{4}\). Here, any approach other than the introduced weighted AA kernel and untransformed Fourier-Walsh diagonalization - with heavily restricted \(\mathbf{b}\) - becomes highly intractable. These extremes not only showcase the high versatility of the general-form estimator, but also invite the possibility of variable selection when faced with a learning task. Suppose, for example, that 100 variables encode five response variables and 95 regressors of varying inference importance. One could first employ a direct Fourier-Walsh diagonalization estimator and optimize only elements of \(\mathbf{b}\) in \(S_{1}^{(95)}\), \(S_{2}^{(95)}\), and \(S_{3}^{(95)}\). From these optimized quantities, one could find the set of 20 binary regressor variables most correlated with the response variables and then apply less regularized and restricted estimator variants using only these binary inputs. Such "variable search" approaches made possible by the presented estimator can make it powerful in the realm of machine learning over massive binary spaces. Methods to select hypercubes over which to estimate practical densities could be the focus of exciting future research.
2310.07903
Sorting it out in Hardware: A State-of-the-Art Survey
Sorting is a fundamental operation in various applications and a traditional research topic in computer science. Improving the performance of sorting operations can have a significant impact on many application domains. For high-performance sorting, much attention has been paid to hardware-based solutions. These are often realized with application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). Recently, in-memory sorting solutions have also been proposed to address the movement cost issue between memory and processing units, also known as the Von Neumann bottleneck. Due to the complexity of the sorting algorithms, achieving an efficient hardware implementation for sorting data is challenging. A large body of prior solutions is built on compare-and-swap (CAS) units. These are categorized as comparison-based sorting. Some recent solutions offer comparison-free sorting. In this survey, we review the latest works in the area of hardware-based sorting. We also discuss the recent hardware solutions for partial and stream sorting. Finally, we will discuss some important concerns that need to be considered in the future designs of sorting systems.
Amir Hossein Jalilvand, Faeze S. Banitaba, Seyedeh Newsha Estiri, Sercan Aygun, M. Hassan Najafi
2023-10-11T21:21:07Z
http://arxiv.org/abs/2310.07903v1
# Sorting it out in Hardware: A State-of-the-Art Survey ###### Abstract Sorting is a fundamental operation in various applications and a traditional research topic in computer science. Improving the performance of sorting operations can have a significant impact on many application domains. For high-performance sorting, much attention has been paid to hardware-based solutions. These are often realized with application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). Recently, in-memory sorting solutions have also been proposed to address the movement cost issue between memory and processing units, also known as the Von Neumann bottleneck. Due to the complexity of the sorting algorithms, achieving an efficient hardware implementation for sorting data is challenging. A large body of prior solutions is built on compare-and-swap (CAS) units. These are categorized as _comparison-based_ sorting. Some recent solutions offer _comparison-free_ sorting. In this survey, we review the latest works in the area of hardware-based sorting. We also discuss the recent hardware solutions for _partial_ and _stream_ sorting. Finally, we will discuss some important concerns that need to be considered in the future designs of sorting systems. comparison-based sorting, comparison-free sorting, hardware-based sorting, in-memory sorting, partial sorting. ## I Introduction Today, the data volume has increased significantly in many application domains. Processing data at the terabyte and petabyte levels has become routine. Processing large volumes of data is challenging, and data volumes are expected to keep growing [1]. Sorting is one of the fundamental operations in computer science, performed for different purposes: putting data in a specific order, such as _ascending_ or _descending_, finding the minimum and maximum values, finding the median, and partial sorting to find the top-\(m\) greatest or smallest values. As Fig. 1 shows, sorting is used in many application domains, from data merging to big data processing [2, 3], database operations [4], especially when the scale of files/data is very large, robotics [5, 6, 7], signal processing (_e.g.,_ sorting radar signals) [8, 9, 10], and wireless networks [11]. Sorting time-series data according to their timestamps holds critical importance in numerous artificial intelligence (AI) applications, such as forecasting and anomaly detection [31], where the sequential occurrence of events is of paramount significance [32]. Wireless sensor network applications often incorporate genetic algorithms, with the 'Non-dominated Sorting Genetic Algorithm (NSGA)' being a commonly employed and efficient approach requiring sorting [33]. Additionally, wireless networks necessitate the implementation of sorting algorithms that are both energy-optimal and energy-balanced, such as enhanced sorting algorithms [24]. The concept of sorting also extends to the realm of robotic visual tasks. Much like traditional scalar sorting, the sorting of items based on attributes like color, shape, or other features within a robot's perceived environment constitutes a tangible engineering application of sorting [21]. In the field of robotics, object sorting is a significant task. Particularly in computer vision applications, sorting objects by robots based on their perceived environment is challenging [19]. Another intriguing application is to control greenhouse climatic factors through sorting networks [34].
For sorting large-scale datasets, some researchers adopt an external sorting methodology. External sorting serves as a solution for sorting vast datasets that cannot fit into the primary memory of a computing platform. Instead, it utilizes additional memory elements like hard disk drives, employing a sort-and-merge strategy [35]. Sorting also finds unconventional applications in signal processing. This extends from theoretical scalar sorting to sorting tasks in real-world signal processing. An illustrative example is radar signal sorting, a recent and intricate sorting challenge in the context of multi-function radar systems [28]. Improving the sorting speed can have a significant impact on all these applications. Many software- and hardware-based solutions have been proposed in the literature for high-performance sorting. Software-based solutions rely on powerful single/multi-core and graphics processing unit (GPU)-based processors for high performance [39]. Much attention has been paid to hardware sorting solutions, especially for applications that require very high-speed sorting [40, 6, 41].
Fig. 1: Common applications of sorting: _Big Data_[12, 13, 14], _Database operations_[15, 16, 17], _Robotics_[18, 19, 20, 21], _Wireless Sensor Networks_[22, 23, 24, 25, 26], _Signal Processing_[9, 10, 27, 28, 29], and _Wireless Networks_[30].
These have been implemented using either application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). Depending on the target applications, the hardware sorting units vary greatly in how they are configured and implemented. The number of inputs can range from as low as nine for some image processing applications (_e.g.,_ median filtering [42]) to tens of thousands [40, 41]. The data inputs have been binary values, integers, or floating-point numbers ranging from 4- to 256-bit precision. Hardware cost and power consumption are the dominant concerns with hardware implementations. The total chip area is limited in many applications [43]. As fabrication technologies continue to scale, keeping chip temperatures low is an important goal since leakage current increases exponentially with temperature. Power consumption must be kept as low as possible. Developing low-cost, power-efficient hardware-based solutions to sorting is an important goal [41]. There is a large body of work on the design of customized sorting hardware. These works seek to utilize the hardware resources fully and to provide a custom, cost-effective hardware sorting engine. Developing hardware-efficient implementations for sorting algorithms is challenging, considering the complexity of these algorithms [40, 41, 44]. A significant amount of hardware resources is consumed by comparators, memory elements including large global memories, complex pipelining, and complicated local and global control units [44]. Many of the prior hardware solutions are built on basic compare-and-swap (CAS) units that compare pairs of data and swap if needed. These solutions are categorized as _comparison-based_ sorting. As shown in Fig. 2, each basic CAS unit is conventionally implemented with a binary comparator and two multiplexer (MUX) units [41]. Sorting networks of CAS units are frequently used for fast and parallel hardware sorting. Their inherent parallelism enables them to achieve sorting at a considerably faster rate than the fastest sequential software-based sorting algorithms.
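As a concrete illustration of the CAS abstraction, the short Python sketch below models a CAS element and a fixed network of five CAS operations for four inputs. It is a software analogue for intuition only, not the circuit of any specific design surveyed here; in hardware, the CAS operations of a stage would execute in parallel.

```python
def cas(a, b):
    """Compare-and-swap: one comparator and two 2-to-1 muxes route min/max."""
    return (a, b) if a <= b else (b, a)

def sort4(values):
    """Five CAS operations arranged as a fixed network sort four inputs."""
    x = list(values)
    for i, j in [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]:
        x[i], x[j] = cas(x[i], x[j])
    return x

print(sort4([7, 1, 9, 3]))   # [1, 3, 7, 9]
```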
However, these CAS-based hardware solutions suffer from high hardware costs, especially when the _number_ and _precision_ of input data increase [45]. In the last few years, some _comparison-free_/_quasi-comparison-free_ sorting solutions have been proposed to address the challenges with _comparison-based_ sorting designs. We will discuss these novel solutions in Section II-B. _Complete_ sorting sorts all items (\(N\)) of a list. _Partial_ sorting has also been a popular sorting variant. Unlike complete sorting, partial sorting returns a list of the \(k\) smallest or largest elements in order, where \(k<N\) [40, 75]. The cost of partial sorting is often substantially less than that of complete sorting when the number of requested elements \(k\) is small compared to \(N\). The other elements (above the \(k\) smallest ones) may also be sorted as in-place partial sorting or discarded, which is common in streaming partial sorts [76]. Despite many recent works in hardware-assisted sorting, no recent survey reviews the latest developments in this area. Studying the literature, we found three surveys discussing prior hardware-based sorting designs. These are compared in Table I. Jmaa _et al._[36] compare the performance of the hardware implementations of popular sorting algorithms (_i.e._, Bubble Sort, Insertion Sort, Selection Sort, Quick Sort, Heap Sort, Shell Sort, Merge Sort, and Tim Sort) in terms of execution time, standard deviation, and resource utilization. They synthesized the designs on a Zynq-7000 FPGA platform. Skliarova [37] reviewed different implementation approaches for network-based hardware accelerators for _sorting_, _searching_, and _counting_ tasks. Ali [38] looked closely at comparison-based and comparison-free hardware solutions for sorting. As in-memory and partial sorting are relatively emerging topics, these previous surveys do not cover them. Motivated by this, this work reviews the latest hardware solutions for complete, partial, and in-memory sorting, covering both comparison-based and comparison-free approaches. Table II summarizes and classifies the important works we study in this article. The remainder of this paper is organized as follows. Section II reviews complete sorting solutions. Section III reviews hardware solutions for partial sorting. Section IV discusses recent works on emerging in-memory sorting. Section V discusses open challenges and future works. Finally, Section VI concludes the paper. ## II Complete Sorting Methods We begin by reviewing recent works on complete sorting, which processes all the data to sort them in an ascending or descending order. We divide our discussion into two categories of _comparison-based_ and _comparison-free_ sorting.
Fig. 2: Compare-and-Swap (CAS) operation in hardware.
### _Comparison-based_ Farmahini _et al._[40] proposed a comparison-based design that employs efficient techniques for constructing high-throughput, low-latency sorting units using smaller building blocks in a hierarchical manner. Their design includes \(N\)-to-\(M\) _sorting_ and _max-set-selection_ units. They extensively discuss the structure, performance, and resource requirements of these units. Despite its primary focus on integer numbers, their design efficiently accommodates two's complement and floating-point numbers, as the comparators utilized in their compare-and-exchange (CAE) blocks can be substituted accordingly. Some sorting applications do not need to sort all input data.
Instead, the application may only require the identification of the \(M\) largest or \(M\) smallest values from a set of \(N\) inputs. These algorithms are called _partial sorters_ and will be discussed later in this survey. In an \(N\)-to-\(M\) max-set-selection unit used in the sorting designs of [40], only the \(M\) largest inputs are required in no specific order. Lin _et al._[46] proposed a hardware acceleration architecture for real-time sorting of \(M\) out of \(N\) inputs. Their design benefits from moving indexes instead of data and is called a _pointer-like_ design. They reduce power consumption by reducing switching activities and signal transitions while maintaining high throughput. Their sorting approach has a complexity of \(O(\log_{2}^{2}M)\). The primary contributor to power consumption is the switching activities of registers. To effectively reduce power, they recommend modification to the register transfer level (RTL) design. Notably, signal transitions increase when the input dataset is larger or when the bit width of the input sample is significant. They propose to incorporate additional registers to represent the position of each input sample. So, only the indexes need to be migrated from register to register. When \(N\) inputs are present, the complete index can be represented using only \(\log_{2}N\) bits, irrespective of whether the bit-widths are 8-bit, 16-bit, or more. While modifications may increase the total cell area, they achieve a substantial reduction in dynamic power dissipation. Executing the sorting process using a single module is impractical for large input datasets, as it requires high I/O bandwidth and large cell area. To mitigate this issue, Lin _et al._[46] proposed to reuse smaller sorting units as the core module and combine these small units with other control units to implement an iterative architecture. Fig. 3 shows their proposed architecture. Users have the flexibility to select different sorting units as the core module, enabling them to trade off throughput for resource constraints. Najafi _et al._[41, 77] developed an area- and power-efficient hardware design for complete sorting based on _unary_ computing (UC). They convert the data from binary to unary bit-streams to sort them in the unary domain. Their approach replaces the conventional complex design of the _CAS_ unit implemented based on binary radix with a simple unary-based design made of simple standard AND and OR gates. Fig. 4 demonstrates how a _CAS_ block is implemented in the unary domain. When two unary bit-streams of equal length are connected to the inputs, an AND gate yields the minimum value, whereas an OR gate produces the maximum value. An overhead of this unary design is the cost of converting data from binary to unary representation. However, compared to the cost savings in the computation circuits, this conversion overhead is insignificant. They report an area and power saving of more than 90% for implementing a 256-input complete sorting network. The unary design of [41] consists of simple logic gates independent of data size. The computation accuracy is controlled by the length of bit-streams. The longer the bit-stream, the higher the accuracy. But processing long unary bit-streams can result in long latency with the sorting design of [41]. This causes runtime overhead compared to the conventional binary process. While the latency may be tolerated in many applications, they introduce a time-based unary design to mitigate the latency issue. 
They encode the input data to pulse-width modulated signals. The data value is determined by the duty cycle in this approach. At the cost of slight accuracy loss, the time-based approach significantly reduces the latency. Prince _et al._[52] combined the bit-stream capabilities of stochastic computing (SC) with binary weighting, reducing the latency of bit-stream-based sorting. The approach offers good scalability and cost-efficiency compared to SC and traditional binary methods, making it an efficient solution for sorting tasks. They use a weighted bit-stream converter to generate weighted bit-streams for an adaptable sorting network. Unlike conventional SC bit-streams, each bit in the weighted bit-streams retains its weight as a standard binary value. This conversion reduces the number of bits in SC from \(2^{N}\) to \(N\) for \(N\)-bit precision, resulting in a substantial reduction in latency and energy consumption by shifting from exponential to linear representation. They propose a new lock-and-swap (LAS) unit to sort weighted bit-streams. Their LAS-based sorting network can determine the result of comparing different input values early and then map the inputs to the corresponding outputs based on shorter weighted bit-streams. Norollah _et al._[47] presented a novel multidimensional sorting algorithm (MDSA) and its corresponding architecture, a real-time hardware sorter (RTHS), to efficiently sort large sequences of records. MDSA reduces the required resources, enhances memory efficiency, and has a minimal negative impact on execution time, even when the number of input records increases. To sort large sequences of records, MDSA divides a sequence into smaller segments, which are then sorted separately. As shown in Fig. 5, the MDSA algorithm consists of six consecutive phases and two modes: normal and reverse sorting. The sorting network organizes the records in descending and ascending order for normal and reverse modes, respectively. In each phase, the sorting networks are fed by a group of input records to sort independently. The authors in [47] claim that their sorting method is more beneficial for resource conservation (memory efficiency) while providing high performance. Fig. 6 shows the complete architecture of RTHS.
Fig. 3: Lin _et al._[46] iterative sorting system. An iterative architecture is designed by repeatedly employing a smaller sorting unit to process streaming input data. Within this iterative max-set-selection or partial sorting system, the input remains constant, contingent on the type of sorting unit in use. Users have the flexibility to select different sorting units as the Core Module, allowing them to strike a balance between throughput and resource constraints. Importantly, this iterative architecture imposes no limitations on the volume of input samples it can handle. As the input size scales, resource consumption remains constant, effectively mitigating resource overhead challenges. In addition to the application of the low-power sorting module, the design also incorporates an adaptive clipping mechanism and a reordering module. These elements are instrumental in further reducing the switching activities of registers. The adaptive clipping coefficient increases in tandem with the temporal results during the sorting process, serving to block a substantial number of samples.
Fig. 4: The hardware implementation of a Unary _CAS_ block [41].
Fig. 5: The MDSA with 64 input records, forming an 8x8 matrix [47].
In this design, pipelining is used to reduce the critical path in dual-mode pipeline bitonic networks (DPBNs). Fig. 7 shows a DPBN unit for 8 inputs. The number of pipeline stages in a DPBN is directly proportional to its number of steps, which can be computed as \(\frac{1}{2}\log_{2}(N)(\log_{2}(N)+1)\), where \(N\) is the number of inputs. The implicit switch is done by fixed wiring and so is completely static. This hardwired switching does not require additional routing resources and has minimal overhead. Jelodari _et al._[48] proposed a low-complexity sorting network design, which maps the unsorted input data to a graph. In this graph, the vertices represent inputs and are fully connected through directed edges as shown in Fig. 8. This structure allows comparing all inputs with each other through the directed edges connecting their corresponding vertices. At each end of any graph edge, the corresponding vertex is tagged by 0 or 1. The tags of the vertices connected by an edge are always complementary. The outgoing tag "1" means the source vertex is greater than or equal to the sink vertex. The sum of the tags assigned to each vertex indicates the position of the corresponding input data in the sorted output. Papaphilippou _et al._[49, 50] introduced a merge sorter tailored for small _lists_, with the capability to merge sublists recursively. This feature sets their solution apart from most large-scale sorters, often reliant on pre-sorted sublists or established hardware sorter modules. Their design bridges the gap between high-throughput and many-leaf sorters by merge sorters, allowing customization of bandwidth, data, and payload width. They assess the applicability of their solution in their specialized in-house context, specifically for database analytics. This involved calculating the count of distinct values per key (group) from a dataset comprising key-value pairs. They integrated a fully-pipelined high-throughput stream processor seamlessly with the sorter's output, enabling real-time result generation. Their streamlined process eliminates the need for temporary data storage, exemplifying task-pipelining for efficient data processing. Fig. 9 shows their setup. They incorporate a fast lightweight merge sorter (FLiMS) as a key component within their parallel merge/sort tree. The FLiMS unit combines two separate -already sorted- lists. The design is characterized by \(w\) linear sorters, where \(w\) signifies the degree of parallelism being employed. Each individual linear sorter has a length of \(k\)/\(w\), with \(k\) representing the total merge capacity or the size of the sorted chunk. This architectural arrangement sorts an input dataset comprising "\(k\)" elements while adeptly merging already sorted lists of varying lengths. Preethi _et al._[51] investigated the use of the clock gating technique to design low-power sorters. The bubble sort, bitonic sort, and odd-even sorting algorithms are redesigned to make them low-power using the clock gating technique. The implementation results showed that clock gating can reduce the dynamic power consumption of sorters by 47.5% with no significant impact on the performance.
Fig. 6: Real-time hardware sorter (RTHS) architecture for 8x8 matrix records [47].
Fig. 7: Dual-mode pipeline bitonic network (DPBN) unit for 8 inputs [47]. The direction signal indicates the mode for sorting: normal or reverse.
Fig. 8: The graph representation of [48]. Each input, represented by a vertex, is linked to all other vertices through directed edges, indicating a directed, fully connected graph.
### _Comparison-free_ Comparison-free sorting designs do not involve direct element comparisons. Instead, they employ alternative approaches to accomplish efficient sorting. In recent years, there has been a notable surge in research and development of this type of sorting. In this section, we will provide an overview of these advancements. Abdel-Hafeez and Gordon [44] proposed a comparison-free sorting algorithm for large datasets. The method operates on the elements' one-hot weight representation, a unique count weight associated with each of the \(N\) elements. The input elements are inserted into a binary matrix of size \(N\times 1\), where each element is \(k\) bits. Concurrently, the input elements are converted to a one-hot weight representation and stored in a one-hot matrix of size \(N\times H\). In this matrix, each stored element is of size \(H\)-bit and \(H=N\) gives a one-hot matrix of size \(N\)-bit\(\times N\)-bit. The one-hot matrix is transposed to a transpose matrix of size \(N\times N\), which is multiplied by the binary matrix to produce a sorted matrix. An example of this method is illustrated in Fig. 10. The total number of sorting cycles is linearly proportional to the number of input data elements \(N\). The architecture of [44] is a high-performance and low-area design for hardware implementation. Bhargav and Prabhu [53] later proposed an algorithm for comparison-free sorting using finite-state machines (FSMs). Their FSM consists of six states that describe the functionality of a comparison-free sorting algorithm dealing with \(N\) inputs. Their proposed design shows 53% and 68% savings in area and power consumption compared to the design of [44]. Chen _et al._[54] improve the number of sorting cycles, which range from \([2N\) to \(2N+2K-1]\) to \([1.5N\) to \(2N+(\frac{\pi}{2})-2]\). Their proposed architecture improves the performance of the unidirectional architecture in [44] by reducing the total number of sorting cycles via bidirectional sorting along with two auxiliary modules. One of the auxiliary modules is _boundary finding_, which is designed to record the maximum and minimum values of the input data for the high-index part (max H and min H) and the low-index part (max L and min L). As shown in Fig. 11, the boundary values are stored in four \(K\)-bit registers where \(K\) is the bit-width of input data. In the initial state of the circuit, the values of max H, min H, max L, and min L are set to \(2^{K}/2\), \(2^{K}-1\), \(0\), and \((2^{K}/2)-1\), respectively. A _binary finding_ module shortens the range for index searching by finding the boundaries of the range. Bidirectional sorting allows the sorting tasks to be conducted concurrently in the high- and low-index parts of the architecture. Sri _et al._[55] reduce the area, delay, and power consumption of the design of [54] by improving the boundary finding module. The improvements are achieved by removing the AND gates and MUX components. Ray and Ghosh [45] developed an architecture for parallel comparison-free sorting based on a model presented earlier in [78]. This work sorts \(N\) data elements completely by utilizing \(N\) iterations with a speed-up of \(\frac{n}{\lceil\frac{n}{k}\rceil+k}\) compared to non-parallel architectures. Jalilvand _et al._[56] proposed a fast and low-cost comparison-free sorting architecture based on UC. Similar to [79, 80], their method iteratively finds the index of the maximum value by converting data to left-aligned unary bit-streams and finding the first "1" in the generated bit-streams.
Fig. 9: The high-throughput sorting system of [49] sorts data quickly while merging them efficiently at a rate of 4. The system uses a specific design where wire widths are chosen based on multiples of the width of the data values. This structure possesses the capability to perform both the sorting of an input containing "\(k\)" elements and the merging of "\(k\)" sorted lists of variable lengths. In sorting mode, a 2-bit counter value is appended to the most significant bits of all outputs from the linear sorters. This 2-bit counter is incremented whenever a new sorted chunk is flushed to the "Parallel Merge Tree." This plays a vital role in the FLiMS (fast lightweight merge sorter) system [50], ensuring correct sorting prioritization for independently sorted chunks.
Fig. 10: Example of sorting four input data with the method of [44].
Fig. 11: Architecture of the boundary finding module used in [54].
Fig. 12 shows the high-level architecture. The architecture includes a sorting engine, a controller, and a multiplexer. The design reads unsorted data from the input registers and performs sorting by finding the address of the maximum number at each step. Fig. 13 shows the architecture of the sorting engine. The sorting engine contains simple logic and converts data to right-aligned unary bit-streams. It returns the index of the bit-stream corresponding to the maximum value. This is done by finding the bit-stream that produces the first "1". The design also employs a controller that gets a duplication sign signal from the sorting engine and puts the next value to the output sorted register. Finally, Yoon [81] proposed a sorting engine based on the radix-2 sorting algorithm. Their sorting engine avoids comparison by creating and distributing data into buckets according to the radix-2 sorting. ## III Partial Sorting Partial sorting is primarily used to sort the top-\(k\) largest or smallest values out of \(N\) elements, where \(k<N\). Partial sorting has been used for determining the minimum and maximum values, finding more than one relative maximum and minimum (max-set min-set selection), merging of partially sorted data, and approximate partial sorting [82, 40, 75, 83]. Finding the minimum and maximum values among a set of data has been a particularly important target of partial sorting. FPGA has been a popular platform for implementing this type of sorting in hardware. Yan _et al._[63] proposed an architecture for determining the \(k\) largest or smallest numbers on FPGA. Their work allows selecting two min/max subsets with a real-time hardware partial sorter (RTHPS) structure consisting of even-odd swap blocks, a bitonic sorting network, and parallel swap blocks. Korat _et al._[60] proposed a sorting algorithm that partially sorts the odd and even parts in a vector structure. Their method guarantees a linear time complexity with \(O(n)\). The hardware unit includes two multiplexers and a comparator, which is responsible for ordering input pairs. Their FPGA-based hardware design implemented on a Xilinx VIRTEX-7 VC707 FPGA consumes 136 LUTs and 181 registers with a working frequency of 370 MHz when sorting eight inputs. Median sorting is another practice of partial sorting with wide application in image processing, particularly for image enhancement. Various hardware designs for median filtering have been proposed in the literature.
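Before turning to those specific designs, the following Python sketch recalls, in software terms, what a partial sorter computes and why median filtering is a partial-sorting workload: the median of a 3x3 window is simply the fifth-smallest of nine values, so a full sort is unnecessary. The window values are hypothetical, and the sketch is only an illustration of the operation, not of any surveyed hardware.

```python
import heapq

def top_k_smallest(values, k):
    """Partial sorting: return only the k smallest elements, in order."""
    return heapq.nsmallest(k, values)

def median9(window):
    """Median of a 3x3 window via partial selection of the five smallest values."""
    return top_k_smallest(window, 5)[-1]

window = [12, 200, 45, 7, 90, 33, 150, 8, 60]
print(top_k_smallest(window, 3))   # [7, 8, 12]
print(median9(window))             # 45
```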
Subramaniam _et al._[59] proposed a hardware design for finding the median value of a set of data. They employ selective comparators as a means to locate the median, allowing for partial sorting with fewer elements compared to the conventional designs that necessitate a fully sorted list. CAS operations are obtained using a comparator and two 2-to-1 MUXs. They implement the design on an FPGA (Xilinx FPGA Virtex 4 XC4VSX25) and evaluate it using an image processing case study [59]. Using a pipelined architecture, Cadenas _et al._[86, 87] proposed a median filtering architecture using accumulative parallel counters. Najafi _et al._[41] further implemented a low-cost median filtering design based on UC by converting data to unary bit-streams and processing them in the unary domain using simple standard AND and OR gates. Finally, Riahi Alam _et al._[69] proposed a binary and a unary architecture for energy-efficient median filtering completely in memory. Finding the maximum and minimum values is one of the current topics in in-memory computing applications. Zhang _et al._[62] proposed an in-memory min-max sorting architecture in DRAM technology for fast and big data applications (see Fig. 14). The architecture targets sorting and graph processing applications and produces results 50 times faster than a GPU. This architecture includes two-row decoders, a one-column decoder, a modified logic sense amplifier with a typical sense amplifier (TSA), one latch per bit-line, a pseudo-OR gate, and one priority encoder (for the resultant index of minimum and maximum locations). Campobello _et al._[58] discuss sorting networks' complexity and propose a multi-input maximum finder circuit. Their design finds the maximum value by using an XNOR comparator, a zero catcher (via a \(Q\)-port feedbacked D flip-flop), a buffer with enable for each input, an OR gate, and a D flip-flop. Partial sorting can also be used as an intermediary tool to help understand data, _e.g.,_ to find outliers [88]. This includes the complex task of _spike sorting_ in brain-inspired computing. Spike sorting encompasses algorithms designed to identify individual spikes from extracellular neural recordings and classify them based on their shapes, attributing these detected spikes to their respective originating neurons. This sorting process differs from conventional sorting as it involves machine learning-related steps such as detection, feature extraction, and classification.
Fig. 12: High-level architecture of the comparison-free unary sorter in [56].
Fig. 13: The _Unary Sorting Engine_ proposed in [56].
Instead of straightforward scalar sorting, spike sorting resembles the segmentation of patterns within brain signal pulses [89]. Spike sorting involves partial sorting for tasks such as early learning termination, outlier analysis, and spike activity thresholding. In the literature, spike sorting for a unit activity may encompass partial sorting to separate multi-unit activity into distinct groups of single-unit activity [90]. The segmentation of spike data plays a significant role in distinguishing specific activities within the overall spike data. Within the realm of spike processing, some studies underscore partial sorting for outlier analysis of the spikes [88], while others commend it for thresholding operations [91]. Valencia and Alimohammad [61, 92] implement a hardware module for spike sorting.
Their design incorporates a template matching unit to compute the minimum distance between spikes during the spike sorting process. Fig. 15 depicts the spike sorting design, which relies on template checks and minimum distance calculations. Such advancement in hardware-powered sorting is expected to open new research avenues in emerging machine-learning models, particularly brain-inspired computing. ## IV In-Memory Sorting In traditional processors, data are retrieved from disk storage and loaded into memory for processing. In this conventional approach, a significant portion of the total processing time and energy consumption is wasted for transferring data between the memory and processing unit. Most prior sorting designs are implemented based on this Von-Neumann architecture with separate memory and processing units [93]. In-memory computation (IMC) -aka processing-in-memory (PIM)- is a promising solution to address this data movement bottleneck. In this processing approach, the chip memory is used for both storage and computation [94]. To address the data movement issue and improve sorting speed, _in-memory sorting_[67, 69, 70] has been proposed. In particular, the special properties of non-volatile memories (NVMs) make them a promising candidate for efficient sorting in memory. Chu _et al._[68] proposed an NVM-friendly sorting algorithm called "NVMSorting". NVMSorting is a modification of the MONTRES algorithm [96], a sorting algorithm resembling merge sort, designed for flash memory. MONTRES aims to enhance performance by minimizing I/O operations and reducing the generation of temporary data during sorting. It includes a run generation phase and a run merge phase, employing optimized block selection, continuous run expansion, and on-the-fly merging for efficient data organization. NVMSorting has the ability to detect partially ordered runs by using a new concept, called _natural run_, to reduce the sorting cost. A natural run consists of multiple blocks. The items within each block are not required to be sorted, but the items between any two consecutive blocks are ordered. In the first step, the algorithm searches for the partially ordered runs (_i.e.,_ natural runs) in the input data. The next step is the _run generation_, which is based on a merge-on-the-fly mechanism and a run expansion mechanism. DRAM is divided into two sections: I) workspace for the natural runs, and II) workspace for the other input data. Chu _et al._ take advantage of the NVM's byte-addressable capability to merge the runs. Their evaluations show that NVMSorting is more efficient than the traditional merge sorting algorithms in terms of execution time (t) and number of NVM writes (w). However, if the dataset is entirely random, NVMSorting can achieve similar performance to MONTRES, hybrid sort, and external sort [97]. Li _et al._[66] proposed a PIM architecture called IMC-Sort to perform parallel sort operations using a hybrid memory cube (HMC). As shown in Fig. 16, IMC-Sort is comprised of sorting units that are specifically designed to operate within each HMC vault's logic layer. The control unit of the HMC vault is enhanced with some logic to carry out the sorting process. Fig. 14: The MIN-MAX PIM architecture proposed by Zhang _et al._[62] preserves the original memory hierarchy with each DRAM chip divided into multiple banks. These partitions share the Input-Output and buffers. Each bank comprises multiple memory matrices (MATs), which are essentially DRAM subarrays. 
The design supports Ambit [84, 85] logic and instructions with enhanced support for the Dual Row Activation (DRA) mechanism, thus providing compatibility with XNOR operations. The Computational Array includes (i) two-row decoders, (ii) one column decoder, (iii) modified logic sense-amplifier, (iv) one latch per bit-line, (v) pseudo-OR gate, and (vi) one priority encoder. The circuit for zero detection uses a pseudo OR gate. This gate is employed to govern the update of the matching vector latch. The priority encoder returns the index of the minimum or maximum value for partial sorting purposes. The sorting units in IMC-Sort are capable of parallel access and utilize the HMC crossbar network to communicate with one another. In an "_Intra-vault merging_" step, they utilize a _chunking_ technique to accommodate a range of input sequence lengths using a fixed number of CAS units and a fixed input permutation unit. They divide the sequence into chunks of a specific size determined by the number of CAS units. Then, they sort the chunks. Finally, the sorted values are merged into a single sorted sequence. On the other hand, an "_Inter-vault merging_" step combines the sorted values or sequences from all vaults to produce a globally sorted sequence. IMC-Sort delivers 16.8\(\times\) and 1.1\(\times\) speedup and 375.5\(\times\) and 13.6\(\times\) reduction in energy consumption compared to the widely used CPU implementation and a state-of-the-art near memory custom sort accelerator, respectively [98, 99, 65, 100, 101]. Riahi Alam _et al._[69] proposed the first in-array (in-memory) architectures for high-performance and energy-efficient data sorting completely in memory using memristive devices. They introduce two different architectures. The first architecture, "_Binary Sorting_," is based on the conventional weighted binary representation, while the second architecture, "_Unary Sorting_," is based on the non-weighted unary representation. Both of these sorting designs achieve a significant reduction in the processing time compared to prior off-memory binary and unary sorting designs. The memristor technology they used is based on the _stateful logic_ in which the input and output are both presented as the state of input and output memristors. In stateful logic, values are stored and maintained within memristive switches through their resistance states. These switches not only store logic values but also perform logical operations, exhibiting both memory and computational capabilities [102, 103, 104]. They implement the boolean operations with memristor-aided logic (MAGIC) [103] in a crossbar implementation. Each MAGIC logic gate utilizes memristors as inputs, which contain previously stored data, and additional memristor functions as the output. Parallel architectures such as CAS-based sorting networks can be executed efficiently within the memory using these IMC logic operations [69]. In the first design, the memory is split into multiple partitions to enable parallel execution of different CAS operations of each bitonic CAS stage. The number of partitions indicates the number of CAS units that can run in parallel. The first two inputs of each partition are sorted using a basic sorting operation. Then the maximum value of each basic sort operation is copied to another partition determined by the sorting network. 
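To make this stage-level parallelism concrete, the Python sketch below enumerates the CAS pairs of a textbook iterative bitonic network grouped by stage; the pairs within one stage touch disjoint elements, which is what allows them to be assigned to separate memory partitions as described above. This is a generic software model of bitonic scheduling under that assumption, not the memristive crossbar implementation of [69].

```python
def bitonic_stages(n):
    """CAS pairs of an n-input bitonic network (n a power of two), grouped by stage.
    Pairs within a stage are disjoint, so they can execute in parallel."""
    stages, k = [], 2
    while k <= n:
        j = k // 2
        while j >= 1:
            stage = []
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    stage.append((i, partner, (i & k) == 0))  # True = ascending
            stages.append(stage)
            j //= 2
        k *= 2
    return stages

def bitonic_sort(values):
    a = list(values)
    for stage in bitonic_stages(len(a)):
        for i, j, ascending in stage:            # independent CAS operations
            if (a[i] > a[j]) == ascending:
                a[i], a[j] = a[j], a[i]
    return a

print(len(bitonic_stages(8)))                    # 6 stages for 8 inputs
print(bitonic_sort([5, 1, 7, 3, 8, 2, 6, 4]))    # [1, 2, 3, 4, 5, 6, 7, 8]
```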
The second design is a complete unary sort system that follows the same approach as the binary implementation but represents and processes the data in the _unary_ domain with uniform unary bit-streams [105]. The comparison operations are implemented in this design based on a basic unary sorting unit. Their performance evaluation results show a significant latency and energy consumption reduction compared to the conventional off-memory designs. On average, their in-memory binary sorting resulted in a 14\(\times\) reduction in latency Fig. 16: Overall architecture of the IMC-Sort. A single stack HMC vault is composed of several DRAM banks that are linked to the logic layer via through-silicon vias (TSV) [66]. Fig. 15: Template matching-based architecture for spike sorting. The aligned spike is directed into an ASR (Aligned Shift Register) module, which has been set up for parallel input and serial output. The values stored in the templates and the ASR module are then transferred into some SDA (Squared Difference Accumulator) units. These SDA units are used to calculate and accumulate the squared differences between the spike waveform preserved in the ASR and templates. The MIN unit identifies and conveys the minimum value to the comparator, along with the index of the minimum value, which is then passed on to the Control Unit. The substantial reduction of raw data to sorted spikes is achieved by transmitting only those sorted spikes (in a partial sorting manner) that match a small set of frequently encountered waveforms [61, 92]. and a 37\(\times\) reduction in energy consumption. On the other hand, the average latency and energy reductions for the in-memory unary sorting design were much greater, at 1200\(\times\) and 138\(\times\), respectively. Further, they implemented two in-memory binary and unary designs for Median filtering based on their developed in-memory basic sorting units. Their results showed an energy reduction of 14\(\times\) (binary) and 5.6\(\times\) (unary) for a 3 \(\times\) 3-based image processing system, and 3.1\(\times\) and 12\(\times\) energy reduction for binary and unary median filtering, respectively, for a 5 \(\times\) 5-based image processing system compared to their corresponding off-memory designs. Today's systems often face memory bandwidth constraints that can limit their performance. The efficiency of the sorting algorithms can be significantly impacted by the available memory bandwidth. To overcome the bandwidth problem in large-scale sorting applications, Prasad _et al._[67] proposed an iterative in-memory min/max computation technique. They applied a novel mechanism called "RIME", which enhances bandwidth efficiency by enabling extensive in-situ bit-wise comparisons. RIME eliminates unnecessary data movement on the memory interface, resulting in improved performance. They provide an API library with significant control over essential in-situ operations like ranking, sorting, and merging. With RIME, users can efficiently manage and manipulate data. To perform bit-serial min/max operation, they execute an iterative search for bit value (1 or 0) within individual columns of a data array using a 1T1R memristive memory. In each iteration of the search, a match vector is generated to identify which rows in the array should be eliminated from the dataset. The memory array must be capable of performing two additional operations, namely bitwise column search and selective row exclusion. 
The algorithm starts by examining the binary values of all bit positions, beginning from the most significant bit position in a set of numbers. This process is carried out using a \(k\)-step algorithm, during which some of the non-minimum or non-maximum values may be removed from the set at each step. At each step, a selection of matching numbers is formed by searching for "1" at the current bit position. The selected numbers are removed from the set only if the set and the selection are unequal. This results in all the final remaining numbers in the set having the minimum value. By eliminating the unnecessary data movement for finding the min/max of given data, their sorting operation obtains a bandwidth complexity of \(O(N)\). With the suggested in-memory min/max locator, the bandwidth cost of searching for the \(k^{th}\) value in a range of data decreases to \(k\) operations, which corresponds to a bandwidth complexity of \(O(k)\). Their simulation results on a group of advanced parallel sorting algorithms demonstrate a significant increase in throughput ranging from 12.4\(\times\) to 50.7\(\times\) when using RIME. Yu _et al._[70] improve the speed and performance of Prasad _et al._'s design by proposing a column-skipping algorithm that keeps track of the column read conditions and skips those that are leading 0's or have been processed previously (see Fig. 17). A _bank manager_ enables column-skipping for datasets stored in different banks of the memristive memory. For detecting and skipping redundant column reads, the algorithm records the \(k\) most recent row exclusion states and their corresponding column indexes, which can be reloaded to avoid repeating these states. To tackle the sorting challenges of large-scale datasets, Zokaee _et al._[71] proposed Sky-Sorter, a cutting-edge sorting accelerator powered by Skyrmion Racetrack Memory (SRM). Sky-Sorter leverages the unique capabilities of SRM, enabling the storage of 128 bits of data within a single racetrack. Sky-Sorter adopts the sample sort algorithm, which encompasses a sequence of four essential steps: sampling, splitting marker sorting, partitioning, and bucket sorting. First, it employs a random sampling technique to estimate the distribution of the dataset. This sampled subset is then sorted, and specific records are selected as splitting markers. The markers are crucial for defining the boundaries of non-overlapping buckets. The next step involves partitioning, where all records, excluding the splitting markers, are allocated to appropriate buckets based on their relationship to the markers. Lastly, each bucket is sorted individually, and the results are concatenated to produce the final sorted sequence. Bucket sorting, known for its high parallelizability, is the key to this algorithm's efficiency, with the distribution of bucket sizes playing a crucial role in maintaining balance. To achieve a balanced distribution and prevent load imbalances during bucket sorting, it is essential to distribute records evenly across all buckets. Larger random sampling sizes contribute to more accurate estimates of the data distribution and less variability in bucket sizes. The algorithm ensures that the probability of any bucket exceeding an upper size limit is nearly zero. In rare cases where a bucket size surpasses this threshold, the algorithm triggers the resampling of splitting markers to maintain uniformity in bucket sizes.
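The sample-sort flow adopted by Sky-Sorter can be summarised in plain Python as below. This is only an algorithmic sketch under assumed parameters (the bucket count, oversampling factor, and names are ours for illustration), not the SRM pipeline; its value is in showing how the splitting markers turn bucket sorting into independent, highly parallel work.

```python
import random
from bisect import bisect_right

def sample_sort(records, num_buckets=4, oversample=8):
    """Sketch of the sample-sort steps: sample -> sort samples and pick splitting
    markers -> partition into non-overlapping buckets -> sort buckets and concatenate."""
    sample = sorted(random.sample(records, min(len(records), num_buckets * oversample)))
    step = max(1, len(sample) // num_buckets)
    splitters = sample[step::step][:num_buckets - 1]      # bucket boundaries
    buckets = [[] for _ in range(num_buckets)]
    for r in records:                                     # partitioning step
        buckets[bisect_right(splitters, r)].append(r)
    # each bucket can be sorted independently (in parallel in hardware)
    return [r for bucket in buckets for r in sorted(bucket)]

print(sample_sort([9, 1, 5, 3, 8, 2, 7, 4, 6, 0]))        # [0, 1, 2, ..., 9]
```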
The fundamental cell structure of SRM is composed of four integral parts. These components encompass two injectors devoted to the creation of skyrmions, a detector designed for the precise detection of skyrmions, a nanotrack to facilitate the controlled motion of these skyrmions, and peripheral circuits that support and coordinate the functionality of the entire cell.
Fig. 17: Iterative min search with proposed column-skipping algorithm [70].
The authors claim that Sky-Sorter improves the throughput per Watt \(\sim\)4\(\times\) over prior FPGA-, Processing Near Memory (PNM)-, and PIM-based accelerators when sorting with a high bandwidth memory DRAM, a DDR4 DRAM, and an SSD [65, 66, 67]. To address the challenges of sorting vast datasets with limited memory resources, significant efforts have been dedicated to enhancing external sorting algorithms; however, few of these efforts consider the I/O requests and the byte-addressable characteristics of NVM. Liu _et al._[74] proposed LazySort, an external sorting algorithm tailored to the NVM-DRAM hybrid storage architecture. LazySort leverages NVM's byte-addressable feature and locally ordered data to minimize write operations to NVM. It comprises two stages: run generation and merge. To enhance efficiency, they introduce an optimization strategy known as RunMerge for the merge stage. RunMerge intelligently merges non-intersecting data blocks based on the ranges stored in an index table, reducing the total number of runs and memory usage. To validate the performance, they established a real NVM-DRAM experimental platform and conducted comprehensive experiments. The results showed LazySort's superior time performance and significantly reduced NVM write operations. Compared to traditional external sorting algorithms, LazySort reduced sorting time by 93.08% and minimized NVM write operations by 49.50%. This design thus addresses an important need for efficient external sorting methods for NVM-DRAM hybrid storage. Lenjani _et al._[72] proposed _Pulley_, an algorithm/hardware co-optimization technique for in-memory sorting. Pulley uses 3D-stacked memories. They employ Fulcrum [73] for the baseline PIM architecture. Fulcrum inputs data into a single-word arithmetic logic unit (ALU) in a sequential manner and enables operations that involve data dependencies as well as operations based on a predicate. In Fulcrum, every pair of subarrays has three row-wide buffers called _Walkers_. In the radix sorting proposed in Fulcrum, all buckets have the same length, and a bucket in each pass can always fit in one subarray. For efficient sorting of large data using Fulcrum, Lenjani _et al._ modified the design by calculating the exact length of each bucket and the position of each key within that bucket. In the first step, the keys of each processing unit are sorted locally. In this step, the keys are dichotomized into two buckets (_Bucket0_ and _Bucket1_). The subarray-level processing unit (SPU) starts _Bucket0_ from the bottom of the space and fills it upward, and starts _Bucket1_ from the end of the space and fills it downward. In the next step, each SPU generates the histogram values of the first 256 buckets iteratively, and all SPUs reduce the histogram values of each of the 256 buckets in the lowest subarray. In Pulley, each vault's core in the logic layer performs a prefix-sum on all the shared sub-arrays in the vaults. Then, the cores in the vaults aggregate their prefix-sum arrays.
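The bucket-placement arithmetic behind such a radix pass (per-bucket histogram, exclusive prefix sum, then scatter) has a conventional software analogue, sketched below. It is not the Fulcrum/Pulley dataflow itself (the names, the 8-bit digit width, and the LSD ordering are our illustrative assumptions), but it shows how histograms plus prefix sums determine the exact position of every key.

```python
from itertools import accumulate

def radix_pass(keys, shift, radix_bits=8):
    """One counting/radix pass: histogram the current digit, turn the histogram
    into an exclusive prefix sum, then scatter each key to its exact slot."""
    num_buckets = 1 << radix_bits
    mask = num_buckets - 1
    hist = [0] * num_buckets
    for k in keys:                                   # histogram step
        hist[(k >> shift) & mask] += 1
    starts = [0] + list(accumulate(hist))[:-1]       # exclusive prefix sum
    out = [0] * len(keys)
    for k in keys:                                   # stable scatter
        b = (k >> shift) & mask
        out[starts[b]] = k
        starts[b] += 1
    return out

def lsd_radix_sort(keys, key_bits=32, radix_bits=8):
    for shift in range(0, key_bits, radix_bits):
        keys = radix_pass(keys, shift, radix_bits)
    return keys

print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66], key_bits=16))
```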
They evaluate Pulley in 1-device and 6-device settings, where each device has four stacks of 8-GB memories. Compared to IMC-Sort, Pulley has a lower working frequency. Wu and Huang [64] introduced a novel sorting technique specifically tailored to NAND flash-based storage systems, aiming to optimize performance and efficiency. They propose a record rearrangement and replacement method for unclustered sorting, which involves scanning sorted tags to efficiently rearrange records and minimize unnecessary page reads during the process. They introduce a strategic decision rule to harness the advantages of both clustered and unclustered sorting approaches. This rule categorizes records based on their length and then selects the most appropriate sorting method (clustered or unclustered) for each category, followed by merging the sorted results. They reuse data to reduce page writes by detecting content similarities in the output buffer and marking logical addresses in the address translation table for potential reuse. They provide a comprehensive I/O analysis, comparing the performance of clustered sorting, unclustered sorting, MinSort, and FAST in terms of page reads and writes. Finally, they implement and test the proposed methods on real hardware, including an Intel SSD and a Hitachi HDD, demonstrating significant performance improvements compared to traditional external sorting methods. Samardzic _et al._[65] introduced "Bonsai," an adaptive sorting solution that leverages merge tree architecture to optimize sorting performance across a wide range of data sizes, from megabytes to terabytes, on a single CPU-FPGA server node. Bonsai's adaptability is achieved by considering various factors, including computational resources, memory sizes, memory bandwidths, and record width. It employs analytical performance and resource models to configure the merge tree architecture to match the available hardware and problem sizes. Their approach can enhance sorting efficiency on a single FPGA while also being used as a foundation for potential use in larger distributed sorting systems. Bonsai's primary objective is to minimize sorting time by selecting the optimal adaptive merge tree configuration based on the hardware, merger architecture, and input size. They demonstrate the feasibility of implementing merge trees on FPGAs, highlighting their superior performance across various problem sizes, particularly for DRAM-scale sorting. Bonsai achieves significant speedup over CPU, FPGA, and GPU-based sorting implementations, along with impressive bandwidth efficiency improvements, making it an appealing solution for adaptive sorting. ## V Open challenges Although significant strides have been made in the field of hardware sorting, numerous challenges persist, warranting further research and innovation. In this section, we explore the ongoing challenges within the research on hardware-assisted sorting. Addressing these challenges can result in sorting solutions that are more efficient in different aspects, from performance to footprint area, power, and energy consumption. These challenges are elaborated on in the following sections. ### _Algorithmic Considerations_ With recent research opportunities and emerging sorting solutions such as in-memory and partial sorting, future research needs to explore potential avenues for radically novel sorting architectures, from algorithmic considerations to hardware-level enhancements. 
For instance, when developing new sorting algorithms, it is crucial to start from an initial target of \(O(n)\) time complexity. Table III enumerates various sorting network architectures and highlights key features emphasized by Zuluaga _et al._[106]. Assessing the evolution of sorting architectures, an emerging trend involves using RAM devices for a new sorting approach known as _stream sorting_[106, 116]. Stream sorting takes \(n\) data words as input and produces \(w\) words per clock cycle across \(n/w\) clock cycles. The sorter achieves a throughput of \(w\) if it operates in a fully streaming manner, implying no waiting time between consecutive input sets. Without a fully streaming network, the throughput will be less than \(w\) words per cycle. We anticipate that one of the pivotal challenges lies in devising algorithms tailored specifically for hardware design, addressing pipeline and parallel processing concerns. Solutions such as stream sorting represent cutting-edge approaches for achieving a more efficient design right from the initial stages, optimizing both memory utilization and time complexity. ### _Power and Energy Efficiency_ The issue of power usage holds significant importance in current and future hardware designs. Given that sorting designs are being incorporated into a range of embedded and power-limited systems, the reduction of power consumption takes on a vital role. Future works must delve into innovative strategies for ultra-low-power hardware. These could encompass advanced clock gating, dynamic voltage scaling, and enhanced management of data transfer to curtail the energy consumption tied to the implementation of sorting designs. Additionally, by loosening accuracy demands and taking advantage of approximate computing techniques, hardware has the capacity to execute computations with fewer resources. ### _Resource Limitations_ Hardware designs must operate within the boundaries defined by accessible resources such as registers, memory, and processing units. Striving to optimize the utilization of these resources while upholding performance is challenging, especially when dealing with intricate sorting algorithms that exhibit diverse computational demands. Lin _et al._[46] provide a trade-off between throughput and resources. UC-based solutions (_e.g.,_[69, 41, 56]) have successfully achieved hardware sorting designs with extremely simple digital logic. However, they achieved this at the cost of an exponential increase in latency. Developing future sorting systems based on such emerging computing systems that operate on simple data representations [117, 41, 118] is a promising path forward. ### _Latency vs. Throughput Trade-off_ Designing hardware sorting systems necessitates finding the right compromise between latency (the duration of a single sorting operation) and throughput (the number of sorting operations completed within a specific period). Designers must achieve an optimum point based on the application expectations and hardware constraints. ### _Parallelism_ Sorting algorithms encompass repetitive and regular processes that hold the potential for improvement with parallelization and pipelining. Nonetheless, implementing efficient parallel/pipelined hardware architectures (_e.g.,_[50]) and managing data inter-dependencies can complicate these endeavors. Striking a harmonious equilibrium amidst diverse processing units while upholding synchronization and communication can pose a considerable challenge.
PIM solutions hold significant promise for the highly parallel execution of future sorting architectures. ### _Adaptation_ Numerous practical applications demand data sorting in dynamic and ever-evolving streams. Crafting hardware-based sorting designs capable of adeptly managing these dynamic inputs in real time presents a multifaceted difficulty. It is imperative for researchers to delve into adaptive algorithms that flexibly adjust to shifting input patterns. This adaptability should ensure sustained, efficient sorting performance while minimizing any notable additional workload. ### _Customization_ Hardware sorting designs may need to be customized for specific applications or environments. This requires flexibility in the design process (_e.g.,_[46, 50]) to accommodate different requirements. From different data types to various data precisions (_i.e.,_ bit-widths), the size of the dataset, and hardware constraints (_e.g.,_ area and power budget), achieving the best performance may require customized hardware. However, the higher design time and cost of implementing customized hardware must also be considered. ### _Data Movement and Memory Access_ Optimal memory access is pivotal for sorting algorithms, and hardware architectures must strive to curtail data transfer and cache-related inefficiencies. Sorting entails frequent data comparisons and exchanges, introducing the potential for irregular memory access patterns. Effectively handling these access patterns is imperative to avert potential performance bottlenecks. The problem is aggravated in big data applications where the sorting engine is expected to sort a large set of data. ### _Technology Scaling_ Hardware designs might necessitate adjustments to accommodate technological shifts, such as advancements in the semiconductor manufacturing process. Designers must meticulously evaluate the potential benefits and consequences of technology scaling on factors such as performance, area, power and energy usage, and various design parameters. ## VI Conclusion Sorting is one of the crucial operations in computer science, widely used in many application domains, from data merging to big data processing, database operations, robotics, wireless sensor networks, signal processing, and wireless networks. A substantial body of work is dedicated to designing hardware-based sorting. In this survey, we reviewed the latest developments in hardware-based sorting, encompassing both comparison-based and comparison-free solutions. Comparison-based solutions tend to incur high hardware costs, particularly as the volume and precision of data increase. Comparison-free solutions have recently been proposed to overcome the challenges associated with compare-and-swap-based sorting designs. We reviewed recent hardware solutions for partial sorting and stream sorting, which are used to sort the top-\(k\) largest or smallest values of the dataset. We also studied the latest emerging in-memory solutions for sorting operations. Finally, we outlined the challenges in developing future hardware sorting designs, aiming to provide readers with insights into the next generation of sorting systems.
2305.15169
The Cooperative Maximal Covering Location Problem with ordered partial attractions
The Maximal Covering Location Problem (MCLP) is a classical location problem where a company maximizes the demand covered by placing a given number of facilities, and each demand node is covered if the closest facility is within a predetermined radius. In the cooperative version of the problem (CMCLP), it is assumed that the facilities of the decision maker act cooperatively to increase the customers' attraction towards the company. In this sense, a demand node is covered if the aggregated partial attractions (or partial coverings) of open facilities exceed a threshold. In this work, we generalize the CMCLP introducing an Ordered Median function (OMf), a function that assigns importance weights to the sorted partial attractions of each customer and then aggregates the weighted attractions to provide the total level of attraction. We name this problem the Ordered Cooperative Maximum Covering Location Problem (OCMCLP). The OMf serves as a means to compute the total attraction of each customer to the company as an aggregation of ordered partial attractions and constitutes a unifying framework for CMCLP models. We introduce a multiperiod stochastic non-linear formulation for the CMCLP with an embedded assignment problem characterizing the ordered cooperative covering. For this model, two exact solution approaches are presented: a MILP reformulation with valid inequalities and an effective approach based on Generalized Benders' cuts. Extensive computational experiments are provided to test our results with randomly generated data and the problem is illustrated with a case study of locating charging stations for electric vehicles in the city of Trois-Rivi\`eres, Qu\'ebec (Canada).
Concepción Domínguez, Ricardo Gázquez, Juan Miguel Morales, Salvador Pineda
2023-05-24T13:59:28Z
http://arxiv.org/abs/2305.15169v3
# The Cooperative Maximum Capture Facility Location Problem ###### Abstract. In the Maximum Capture Facility Location (MCFL) problem with a binary choice rule, a company intends to locate a series of facilities to maximize the captured demand, and customers patronize the facility that maximizes their utility. In this work, we generalize the MCFL problem assuming that the facilities of the decision maker act cooperatively to increase the customers' utility over the company. We propose a utility maximization rule between the captured utility of the decision maker and the opt-out utility of a competitor already installed in the market. Furthermore, we model the captured utility by means of an Ordered Median function (OMf) of the partial utilities of newly open facilities. We name this problem "the Cooperative Maximum Capture Facility Location problem" (CMCFL). The OMf serves as a means to compute the utility of each customer towards the company as an aggregation of ordered partial utilities, and constitutes a unifying framework for CMCFL models. We introduce a multiperiod non-linear bilevel formulation for the CMCFL with an embedded assignment problem characterizing the captured utilities. For this model, two exact resolution approaches are presented: a MILP reformulation with valid inequalities and an effective approach based on Benders' decomposition. Extensive computational experiments are provided to test our results with randomly generated data and an application to the location of charging stations for electric vehicles in the city of Trois-Rivieres, Quebec, is addressed. ## 1. Introduction Location problems aim to find the optimal placement of one or more facilities and have attracted the attention of researchers during the last decades (Laporte et al., 2019). Typically, these problems consider the distances from the customer to the facilities or the customer demand, assuming that customers will patronize the company's facilities even if it is not explicitly incorporated into the problem. However, when customers' preferences towards new and existing facilities are taken into account, the problem falls into a well-known branch of location theory: competitive location problems. For a review of competitive location models, see the recent chapter by Eiselt et al. (2019), and the reviews by Berman et al. (2009) and Drezner (2014). This paper is devoted to a new problem in the family of Maximum Capture Facility Location (MCFL) problems. These problems decide on the location of a series of facilities to open in a competitive market to maximize the adoption of a service or product. Customers' preferences are integrated in the model assuming that customers behave rationally following a utility maximization rule. They assign a utility to each alternative that is associated to the attractiveness perceived by the customer, and the decision is based on the alternative that maximizes the utility. Typically, each alternative is given by a facility location, and the attractiveness perceived by the customer depends on features of the facility such as distance, size or type of facility, distance to other services, availability of parking space, cost, etc. In this paper, we fill the gap in the MCFL literature with a new approach over the facilities. We propose a model where customers choose between the new company and existing ones, and the set of facilities placed act cooperatively to provide a final utility for the customer associated to the services of the company. 
Here, _cooperative_ means that the facilities are not competing with each other, as is usual in the literature, but rather working together to provide a better utility for the service. For this reason, we name the problem the Cooperative Maximum Capture Facility Location (CMCFL). In the following, we explain the distinctive features of the MCFL problem and our generalization. Regarding the choice rule, there is extensive literature assuming that customers choose the facilities that serve them following a proportional rule: the demand is split among all open facilities in proportion to their utility (Benati, 1999; Benati and Hansen, 2002; Haase and Muller, 2014). The problem is known as the MCFL with Random Utilities (MCFLRU), and most of the literature on this problem features a multinomial logit model (or the mixed multinomial logit model) to account for uncertainty in the customer behavior. These assumptions allow for an analytic formula for the probabilities, so the MCFLRU can be formulated as a mixed-integer non-linear problem (MINLP) with a concave (maximization) objective function (when the linear relaxation is considered). The state-of-the-art exact methodologies exploit particular properties of the objective function to design cutting plane approaches or branch-and-cut algorithms with submodular cuts and outer approximation cuts (Ljubic and Moreno, 2018; Mai and Lodi, 2020). In the recent paper by Lin and Tian (2021), the MCFLRU is generalized assuming that customers split the demand among the competitors and a _consideration set_ containing the \(n\) most attractive open facilities, again proportionally to their utility. The problem is named the Facility Location with Limited Choice rule (FLLC). On the other side of the spectrum, when the demand is split proportionally between the competitor's option and the (single) most attractive facility of the decision maker, we have a _partially binary rule_(Hakimi, 1990; Biesinger et al., 2016; Fernandez et al., 2017; Mendez-Vogel et al., 2023). When the previous choice rules are applied, it is assumed that the facilities of the decision maker and the existing facilities of competitors can be used indistinctly by the customers. In contrast, in this work we assume that customers follow a binary choice rule: the demand of the customer is entirely fulfilled by the company that provides the highest utility (Fernandez et al., 2017; Gentile et al., 2018; Lancinskas et al., 2020). In this way, customers maximize their utility by choosing between the _opt-out utility_ associated to the competitor and the _captured utility_ associated to the new set of facilities to open. The opt-out utility is a parameter reflecting the base utility of the customer given by the existing facilities of a competitor that already operates in the market, and the captured utility is a variable reflecting the customer's utility towards the company. In the existing literature on the MCFL with a binary choice rule, it is assumed that customers always patronize the closest or most attractive facility. This is a standard utility maximization in a non-cooperative setting, in the sense that the captured utility is given by the utility of a single facility, regardless of the rest of them. However, there are many applications where the customers' captured utility increases when more facilities are installed, since customers may patronize a subset of the facilities of the company.
We propose a generalization to a cooperative setting, where the captured utility depends on a combination of the open facilities. In this new setting, the captured utility is a function of the _partial utilities_ of open facilities. The combination of the discrete choice model and the cooperative setting suits applications where the use of the facilities of a company requires a previous affiliation or a specific product, making it impractical or impossible to patronize facilities from different companies. For instance, an application from the energy sector is the maximization of electric vehicle adoption through the location of charging stations. Clearly, the customers' captured utility increases when more electric vehicle charging stations are placed, as it is known to be highly dependent on the placement of charging infrastructure (Coffman et al., 2017; Lamontagne et al., 2022). Moreover, customers with an electric vehicle no longer patronize petrol stations, hence the binary choice rule of the customers. This problem is of great relevance nowadays, since the European Commission has agreed on an ambitious new law to deploy sufficient alternative fuels infrastructure (European Green Deal) and to ban the sale of new combustion-engine cars in the bloc by 2035 (Zero emission vehicles), as part of the European Green Deal towards zero emissions. Many other applications arise from companies that offer a subscription/membership for a fee in exchange for the use of any available facility over a period of time. This is the business model of fitness chains where a gym membership allows the use of any fitness center of the company, and the memberships make impractical patronizing facilities from different companies. In this paper, we propose a unified modeling framework for the non-cooperative and a wide range of cooperative settings. To do so, we define the captured utility using an ordered median function (OMf), a function that assigns importance weights to the sorted partial utilities, and then aggregates the weighted utilities. The OMf is a very general function that has as a particular case the (non-cooperative) MCFL with binary choice rule. It also allows to form a consideration set in the manner of Lin and Tian (2021) for the FLLC, or to model the captured utility as the aggregation of partial utilities of open facilities. This function is also known as an Ordered Weighted Average (OWA) operator, and it was introduced by Yager (1988) in the context of artificial intelligence. In the context of Location Theory, there is a vast literature under the umbrella of the Ordered Median Problem (OMP) (Puerto and Fernandez, 1994; Puerto and Rodriguez-Chia, 2019), since the most used objective functions (e.g., median, center, \(k\)-centrum or centian) can be covered by OMs. Thus, it has been successfully applied to a wide range of areas such as covering problems (Blanco and Puerto, 2021; Blanco and Gazquez, 2022), hub location problems (Puerto et al., 2011, 2016) or \(p\)-median problems both discrete (Deleplanque et al., 2020; Marin et al., 2020) and continuous (Blanco et al., 2016, 2023). However, to the best of our knowledge, this is the first time that it is used to model the utility in competitive facility location. For a comprehensive theory on the OMf and the OMP, we refer the reader to Nickel and Puerto (2006). As for the random choice model that allows to model uncertainty in customer behavior, we do not assume the multinomial logit models (on top of the added complexity of the OMf). 
Instead, we propose to replace the probability distribution with its empirical estimate based on a set of random samples. This simulation-based approach is known as sample average approximation (Shapiro, 2003) and has been applied in discrete choice models to specify the demand directly in terms of the utility functions (Pacheco Paneque et al., 2021). As an advantage, since we do not make any assumptions on the random utility model, this approach allows to work with observations that are available to the decision maker even when the distribution is unknown. Thus, for a given number of scenarios, the error terms are generated in advance and introduced in the formulation as input for each partial utility. Customers then are captured in the corresponding scenario if the captured utility surpasses the opt-out utility. All in all, our main contributions are as follows: * We give a unified modeling framework for a family of MCFL problems with utility maximization and binary choice rule following a cooperative behavior by means of the introduction of an OMf to model the captured utility associated to the decision maker's company. * We formulate the model as a multiperiod stochastic bilevel maximization problem where the captured utility is described by an additional embedded maximization problem. In each period, a budget is considered for the construction of new facilities that accumulates over the plan horizon if it has not been spent in previous periods. In the particular case where the weights of the OMf are sorted in non-increasing order, the OMf problem can be formulated as a linear assignment problem. In this case, the bilevel model is reformulated as a MINLP. * We propose a solution method that makes use of McCormick linearizations and perspective transformations to produce an equivalent mixed-integer linear problem (MILP), which we subsequently strengthen by way of tailored valid inequalities and preprocessing techniques. * Alternatively, we propose a decomposition method based on the addition of Generalized Benders Cuts to a relaxed version of the problem (the master problem) with a very reduced number of variables and constraints. * We run extensive computational experiments designed to test the performance of the proposed formulations and solution techniques. Furthermore, we solve medium-size and large-scale instances of practical relevance: those from a case study on placing charging stations for electric vehicles in the city of Trois-Rivieres, Quebec (Canada) proposed by Lamontagne et al. (2022). Using this case study, we illustrate how the choice of the vector of weights in the OMf is a key decision and has a significant impact on the location of the charging stations. The paper is organized as follows. Section 2 introduces the notation and the bilevel formulation presented. In Section 3, we derive a single-level MINLP model with the same set of optimal solutions (in terms of the location variables and the customer's decisions) than the bilevel model. Section 4 is devoted to the first solution approach, which includes: 1) a linearization of the MINLP model that results in a MILP model that can be solved using readily available off-the-shelf solvers (Gurobi, CPLEX, XPRESS-MP, etc.); and 2) valid inequalities and preprocessing techniques to obtain a tight and compact model. Section 5 is devoted to the second solution approach presented, a Benders' like decomposition scheme. 
Section 6 comprises two computational studies, one with randomly generated small and medium-size instances designed to test and compare the solution approaches presented, and a case study on the placement of electric vehicle charging stations in the city of Trois-Rivieres, Quebec (Canada), proposed by Lamontagne et al. (2022). Finally, some conclusions are stated in Section 7. ## 2. Bilevel formulation In this section, we formally define the mathematical programming model studied in this paper and introduce the notation. Recall that the objective is the maximization of captured demand, whereas the customers follow a discrete choice model that is a random utility maximization between the captured utility and the opt-out utility, where the captured utility follows an OMf. The problem is naturally modeled by means of a bilevel formulation. **First-level formulation.** Consider \(\mathcal{J}=\{1,\ldots,|\mathcal{J}|\}\) as the set of candidate locations of facilities. Throughout the paper, and abusing notation, \(j\) is used to represent both the location \(j\) and the (potential) facility located in \(j\). Let \(\mathcal{K}_{j}=\{1,\ldots,|\mathcal{K}_{j}|\}\) represent the facility types that can be installed in \(j\in\mathcal{J}\). W.l.o.g., the types are ordered from the least expensive to the most expensive one. For each time period \(t\in\mathcal{T}=\{1,\ldots,|\mathcal{T}|\}\), assume there is a budget \(b^{t}\) to spend on locating and/or extending facilities. Installing a facility \(j\) of type \(k\) at time period \(t\) has an associated cost \(c^{t}_{jk}\), with \(c^{t}_{jk}\) non-decreasing in \(k\). We assume that the facilities can only be upgraded (with a cost equal to the difference of the costs of the types in the corresponding time periods), but they cannot be eliminated or downsized. We define binary variable \(x^{t}_{jk}\), \(\forall j\in\mathcal{J}\), \(k\in\mathcal{K}_{j}\), \(t\in\mathcal{T}\), equal to \(1\) if and only if a facility of type \(k\) is installed in \(j\) at time period \(t\). Using these variables, the constraints associated to the first-level are stated as: \[\sum_{\begin{subarray}{c}t^{\prime}\in\mathcal{T}\\ t^{\prime}\leq t\end{subarray}}\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j} }c^{t^{\prime}}_{jk}(x^{t^{\prime}}_{jk}-x^{t^{\prime}-1}_{jk})\leq\sum_{ \begin{subarray}{c}t^{\prime}\in\mathcal{T}\\ t^{\prime}\leq t\end{subarray}}b^{t^{\prime}},\quad\forall t\in\mathcal{T}, \tag{1a}\] \[\sum_{k\in\mathcal{K}_{j}}x^{t}_{jk}\leq 1,\quad\forall j\in \mathcal{J},t\in\mathcal{T},\] (1b) \[\sum_{k\in\mathcal{K}_{j}}kx^{t-1}_{jk}\leq\sum_{k\in\mathcal{K} _{j}}kx^{t}_{jk},\quad\forall j\in\mathcal{J},t\in\mathcal{T}\setminus\{1\},\] (1c) \[x^{t}_{jk}\in\{0,1\},\quad\forall j\in\mathcal{J},t\in\mathcal{ T},k\in\mathcal{K}_{j}. \tag{1d}\] The set of constraints (1a) guarantees that the cost of the facilities installed up to time period \(t\) does not surpass the total budget \(\sum_{t^{\prime}\leq t}b^{t^{\prime}}\) (with \(x_{jk}^{t}=0\) for \(t=0\)). By summing up on the time periods in (1a), we allow for the surplus budget from time period \(t\) to be used in subsequent time periods. Constraints (1b) ensure that only one facility can be placed in each location. Constraints (1c) ensure that the facility of type \(k\) can only be upgraded (or remain untouched) for subsequent time periods. Note that we have included only the simplest constraints on the location of facilities. Nevertheless, additional constraints may be required by the firm. 
For instance, constraint (1a) can be replaced by a constraint limiting the number of facilities of each type to open, or the company may choose to add preference constraints of the type \(\sum_{k\in\mathcal{K}_{j}}x_{jk}^{t}\leq\sum_{k\in\mathcal{K}_{j^{\prime}}}x_{ j^{\prime}k}^{t}\) for \(j,j^{\prime}\in\mathcal{J}\), \(t\in\mathcal{T}\), if for some reason location \(j^{\prime}\) is to be chosen before location \(j\). To define the objective function of the first-level formulation, consider a set of classes of customers \(\mathcal{I}=\{1,\ldots,|\mathcal{I}|\}\) with a homogeneous behavior, where \(n_{i}^{t}\) represents the weight of class \(i\in\mathcal{I}\) (associated, for instance, to the population of such a class) in period \(t\in\mathcal{T}\). As stated, the classes follow a Random Utility Maximization (RUM) model with an Ordered Median function (OMf) embedded in the discrete choice rule. Since the utilities for each alternative are unknown, we follow a sample average approximation method, widely used in Stochastic Programming, to estimate them. Thus, we consider a set \(\mathcal{S}\) of scenarios with equal probabilities. Defining a binary variable \(z_{i}^{ts}\)\(\forall i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), equal to \(1\) if and only if user class \(i\) is captured in time period \(t\) and scenario \(s\), the objective of the first-level problem can be stated as: \[\max_{\mathbf{x}}\quad\sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{I}}n_{i}^{t} \frac{1}{|\mathcal{S}|}\sum_{s\in\mathcal{S}}z_{i}^{ts} \tag{2}\] ### Second-level formulation The customer decision problem is the maximization of the customer's utility. We consider independent customer classes \(i\), so we introduce a second-level problem for each \(i\), \(t\) and \(s\). Hence, for any user class \(i\in\mathcal{I}\), time period \(t\in\mathcal{T}\) and scenario \(s\in\mathcal{S}\), we consider two different alternatives: \(u_{i0}^{ts}\) is a parameter representing the utility of the opt-out alternative, and \(U_{i}^{ts}\) is a continuous variable that represents the captured utility associated to \(i\). Then, customer problem is defined as follows: \[\max_{z_{i}^{ts}\in\{0,1\}}\quad u_{i0}^{ts}(1-z_{i}^{ts})+U_{i}^{ts}z_{i}^{ts} \tag{3}\] ### Partial utilities As previously stated, the captured utility of a customer depends on the location of the facilities, and therefore varies with the number and type of facilities placed. In order to define it, we consider the captured utility \(U_{i}^{ts}\) as a function of the partial utilities \(u_{ij}^{ts}\) associated to each potential location \(j\in\mathcal{J}\) of a facility. As stated, there are \(k\) types of facilities that can be placed in \(j\), and w.l.o.g. the types are ordered from the least attractive to the most attractive one. Then, each partial utility \(u_{ij}^{ts}\) is a continuous variable with a strictly positive value if and only if there exists a facility \(j\in\mathcal{J}\) that is open for some \(k\in\mathcal{K}_{j}\), i.e.,: \[u_{ij}^{ts}:=\begin{cases}a_{ijk}^{ts},&\text{if a facility $j\in\mathcal{J}$ of type $k\in\mathcal{K}_{j}$ is open},\\ 0,&\text{otherwise},\end{cases}\quad\forall i\in\mathcal{I},j\in\mathcal{J},t \in\mathcal{T},s\in\mathcal{S}. \tag{4}\] Here, \(a_{ijk}^{ts}\) is a parameter that estimates the utility of placing a facility \(j\) of type \(k\) for the user class \(i\) in time period \(t\) and scenario \(s\). 
This utility is usually divided into two parts in the literature: a measurable and deterministic part and a random, non-observable one, i.e., \(a=\hat{a}+\epsilon\). The deterministic one, \(\hat{a}\), is based on factors like the distance to the facility, its size, whether it is close to other establishments, etc. However, the utility function cannot be fully specified in deterministic terms, so an error term, \(\epsilon\), is added (see e.g., Benati and Hansen, 2002; Mai and Lodi, 2020; Lamontagne et al., 2022). Finally, as stated, we assume that \(a^{ts}_{ijk}\) is non-decreasing in \(k\). Making use of the fact that only one facility can be placed in \(j\), we can define the value of variable \(u^{ts}_{ij}\) in terms of the location variables \(\mathbf{x}\) by means of the following equality constraint: \[u^{ts}_{ij}=\sum_{k\in\mathcal{K}_{j}}a^{ts}_{ijk}x^{t}_{jk},\quad\forall i\in \mathcal{I},j\in\mathcal{J},t\in\mathcal{T},s\in\mathcal{S}. \tag{5}\] _Captured utility: the OMf._ As stated in the introduction, we make use of the OMf to model the captured utility \(U^{ts}_{i}\) of customers for each \(i\), \(t\) and \(s\). This function is a weighted sum of ordered elements, i.e., a mapping \(\Phi_{\lambda}:\mathbb{R}^{|\mathcal{J}|}\to\mathbb{R}\) with associated weighting vector \(\boldsymbol{\lambda_{i}}=(\lambda_{i1},\ldots,\lambda_{i|\mathcal{J}|})\). Hence, the captured utility is defined as: \[U^{ts}_{i}:=\Phi_{\boldsymbol{\lambda_{i}}}(u^{ts}_{i1},\ldots,u^{ts}_{i| \mathcal{J}|})=\sum_{j\in\mathcal{J}}\lambda_{ij}u^{ts}_{i(j)}, \tag{6}\] where \(u^{ts}_{i(r)}\) is the \(r\)-th largest input vector component of \(u^{ts}_{i}\), i.e., \(u^{ts}_{i(1)}\geq\ldots\geq u^{ts}_{i(|\mathcal{J}|)}\). The value of vector \(\boldsymbol{\lambda_{i}}\) is directly related to the assumptions made on the customer's choice rule, and can be set according to the characteristics assumed for each customer class. For instance, if we consider the vector \(\boldsymbol{\lambda_{i}}=(1,0,\ldots,0)\), then the captured utility \(U^{ts}_{i}\) takes the value of the highest partial utility. Problem (3) is in this case a RUM problem where each location \(j\) is considered as an alternative. For more general vectors such as \(\boldsymbol{\lambda_{i}}=(\underbrace{1,\ldots,1}_{\ell},0,\ldots,0)\), the captured utility is given by the sum of the partial utilities of the \(\ell\) _best_ facilities for a client. This is a realistic assumption that accounts for the case when the customers' captured utility is given by the partial utilities of their \(\ell\) favourite facilities installed, instead of just one. This setting is addressed in Lin and Tian (2021), where customers rank the facilities by non-decreasing utility and then form a _consideration set_ with the \(\ell\) facilities of higher rank. Finally, vectors such as \(\boldsymbol{\lambda_{i}}=(1,\frac{1}{2},\frac{1}{4},0,\ldots,0)\) correspond to customers whose captured utility is _mainly_ given by their favourite facility, but additional interesting facilities can increase the utility. We remark that the CMCFL is NP-hard because it includes the MCLP as a particular case when \(\lambda_{i}=(1,0,\ldots,0)\) \(\forall i\in\mathcal{I}\), i.e., when the customers follow a standard Utility Maximization rule in a non-cooperative setting. The NP-hardness of MCLP is proved in Hochbaum (1997).
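As a small numerical illustration of (6) (in Python, with made-up partial utilities and weights), the same vector of partial utilities yields very different captured utilities depending on the choice of \(\boldsymbol{\lambda_{i}}\):

```python
def ordered_median(partial_utilities, lam):
    """Ordered Median function (6): weight the r-th largest partial utility by lam[r]."""
    u_sorted = sorted(partial_utilities, reverse=True)
    return sum(l * u for l, u in zip(lam, u_sorted))

u = [1.0, 2.5, 0.0, 1.5]                          # illustrative partial utilities
print(ordered_median(u, [1, 0, 0, 0]))            # 2.5 -> classical single-facility RUM rule
print(ordered_median(u, [1, 1, 0, 0]))            # 4.0 -> consideration set of the 2 best
print(ordered_median(u, [1, 0.5, 0.25, 0]))       # 3.5 -> mainly the favourite, damped extras
```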
In our setting, and given that the partial utilities of a customer are ordered in non-increasing order, we can obtain a reformulation of the OMf by considering any vector \(\boldsymbol{\lambda_{i}}\) (see Fernandez et al., 2013). For this, define the binary variables \(\sigma^{ts}_{ijr}\)\(\forall i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), \(j,r\in\mathcal{J}\). Then \(\sigma^{ts}_{ijr}=1\) if and only if \(u^{ts}_{ij}\) is the \(r\)-th largest utility for customer class \(i\in\mathcal{I}\). With these variables, the integer model is: \[U^{ts}_{i}= \max_{\boldsymbol{\sigma_{i}^{ts}}} \sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u^{ts}_{ij}\sigma^{ts}_{ijr}\] (7a) s.t. \[\sum_{j\in\mathcal{J}}\sigma^{ts}_{ijr}=1,\quad\forall r\in \mathcal{J}, \tag{7b}\] \[\sum_{r\in\mathcal{J}}\sigma^{ts}_{ijr}=1,\quad\forall j\in \mathcal{J},\] (7c) \[\sum_{j\in\mathcal{J}}u^{ts}_{ij}\sigma^{ts}_{ijr-1}\geq\sum_{j \in\mathcal{J}}u^{ts}_{ij}\sigma^{ts}_{ijr},\quad\forall r\in\mathcal{J} \setminus\{1\},\] (7d) \[\sigma^{ts}_{ijr}\in\{0,1\},\quad\forall j,r\in\mathcal{J}. \tag{7e}\] However, we consider only non-increasing vectors \(\boldsymbol{\lambda_{i}}\), i.e., any \(\boldsymbol{\lambda_{i}}\geq\mathbf{0}\) such that \(\lambda_{i1}\geq\cdots\geq\lambda_{i|\mathcal{J}|}\). The reason is that, realistically, the partial utility of a customer given by a specific facility (which can be seen e.g. as the percentage of times they make use of said facility) decreases when bigger/closer facilities are installed. Hence, if customers obtain their utility by summing up the weighted partial utilities, they will likely penalize facilities with a lower utility (such as the smallest/farthest facilities). Besides, defining the OMf in the second level with a fixed monotone \(\mathbf{\lambda}_{i}\) and parameters \(a^{ts}_{ijk}\) non-decreasing in \(k\) guarantees that the captured utility is non-decreasing when more facilities are located throughout time. This is also a realistic assumption that guarantees some consistency in the model. Finally, when the entries of the vector \(\mathbf{\lambda}_{i}\) are non-increasing and the partial utilities are sorted in non-increasing order with respect to \(j\) as well, \(\Phi_{\mathbf{\lambda}_{i}}\) can be stated as an assignment problem, i.e., problem (7) without constraints (7d). The complete bilevel model (BL) proposed is then: \[\text{(BL)}\quad\max_{\mathbf{x},\mathbf{u}} \sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{I}}n_{i}^{t}\frac{1}{| \mathcal{S}|}\sum_{s\in\mathcal{S}}z_{i}^{ts} \tag{8a}\] \[\text{s.t.}\quad\text{(1a)-(1d), (5)},\] (8b) \[z_{i}^{ts}\in\quad\arg\max_{z_{i}^{ts}}\quad u_{i0}^{ts}(1-z_{i}^{ts})+U_{i}^{ts}z_{i}^{ts}\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in \mathcal{S},\] (8c) \[\text{s.t.}\quad z_{i}^{ts}\in\{0,1\},\] (8d) \[U_{i}^{ts}=\quad\max_{\mathbf{\sigma}_{i}^{ts}}\quad\sum_{j\in \mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u_{ij}^{ts}\sigma_{ijr}^{ts}\] (8e) \[\text{s.t.}\quad\sum_{j\in\mathcal{J}}\sigma_{ijr}^{ts}=1,\quad \forall r\in\mathcal{J},\] (8f) \[\quad\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}=1,\quad\forall j\in \mathcal{J},\] (8g) \[\quad\sigma_{ijr}^{ts}\in\{0,1\},\quad\forall j,r\in\mathcal{J}. \tag{8h}\] Problem (8) is ill-posed, since multiple solutions to the lower level (8c)-(8d) exist when \(u_{i0}^{ts}=U_{i}^{ts}\). In this case, we follow the optimistic assumption and consider that the customer makes the most favourable choice for the leader, i.e., \(z_{i}^{ts}=1\).
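As a quick numerical sanity check of this assignment view (again with made-up numbers, and assuming SciPy is available): for a non-increasing \(\boldsymbol{\lambda_{i}}\), a generic linear-assignment solver applied to the payoff matrix with entries \(\lambda_{ir}u^{ts}_{ij}\) recovers exactly the sorted weighted sum in (6), which is why constraints (7d) can be dropped.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def omf_via_assignment(partial_utilities, lam):
    """Maximize sum_{j,r} lam[r]*u[j]*sigma[j,r] over assignments sigma (cf. (8e)-(8h))."""
    u = np.asarray(partial_utilities, dtype=float)
    lam = np.asarray(lam, dtype=float)
    payoff = np.outer(u, lam)                      # payoff[j, r] = u_j * lam_r
    rows, cols = linear_sum_assignment(payoff, maximize=True)
    return payoff[rows, cols].sum()

u, lam = [1.0, 2.5, 0.0, 1.5], [1.0, 0.5, 0.25, 0.0]      # lam is non-increasing
direct = sum(l * v for l, v in zip(lam, sorted(u, reverse=True)))
assert abs(omf_via_assignment(u, lam) - direct) < 1e-9    # both equal 3.5 here
```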
Moreover, the values of the partial utilities \(u_{ij}^{ts}\) are uniquely determined by the first-level variables \(x_{jk}^{t}\) and act as _parameters_ for the choice rule in the objective function (8e) of the assignment problem. The assignment problem (8e)-(8h) is used to define \(U_{i}^{ts}\) through the OMf \(\Phi_{\mathbf{\lambda}_{i}}\). Model (BL) is bilevel and has a nested optimization problem (8e)-(8h) designed to obtain the value of \(U_{i}^{ts}\). We reformulate it in the next section, obtaining a single-level mixed-integer linear formulation that can be solved using modern general-purpose MILP solvers. To emphasize the relevance of adequately choosing the vector \(\mathbf{\lambda}_{i}\), we have included Example 1, which shows an optimal solution of the same instance with different vectors \(\mathbf{\lambda}_{i}\). **Example 1**.: _In the toy instance considered, \(|\mathcal{T}|=|\mathcal{S}|=1\), so we remove the indices \(t\), \(s\) from the variables and parameters. Furthermore, \(|\mathcal{J}|=2\) and \(|\mathcal{K}_{j}|=3\) for all \(j\), and the costs \(c_{jk}\) of opening any facility \(j\) of type \(k=1,2,3\) are, respectively, 2,3,5, \(\forall j\). The budget is \(b=5\), so in any feasible solution we can place up to one facility of type 3, or up to two facilities of types \(k=1,2\)._ _As for the customer classes, \(|\mathcal{I}|=3\), \(n_{i}=1\)\(\forall i\in\mathcal{I}\) (so we identify customer classes with customers) and the opt-out utility \(u_{i0}\) is the same for all the customers, \(u_{i0}=3\)\(\forall i\). The partial utilities of all the customers for each facility and type can be seen in Table 1. For instance, the partial utility \(u_{ijk}=u_{323}=3.5\)._ _We have solved this instance for two different \(\mathbf{\lambda_{i}}\) vectors and show the optimal placement of facilities in Figure 1. Note that, for ease of illustration, customer 1 is represented as \(i_{1}\) and facility \(1\) is represented as \(j_{1}\) (and so on). In Figure 1(a), the captured utility for each customer only depends on the partial utility of their most relevant station, i.e., \(\mathbf{\lambda_{i}}=(1,0)\)\(\forall i\in\mathcal{I}\). This is the classical RUM model. In this case, the optimal solution consists of placing one facility of type \(k=3\) in \(j=1\), and customers 1 and 2 are captured (i.e., \(z_{1}=z_{2}=1\), \(z_{3}=0\)). The optimal value (the number of customers captured in this example) is equal to 2._ _In Figure 1(b), we assume that the captured utility is given by an aggregation of the partial utilities of the two most relevant facilities for each customer, weighted using \(\mathbf{\lambda_{i}}=(0.9,0.5)\)\(\forall i\in\mathcal{I}\). In this setting, the optimal placement of facilities is given by opening a facility of type \(k=2\) in \(j=1\) and a facility of type \(k=1\) in \(j=2\). In this case, customer 1 prefers facility 1 to 2, so the captured utility is \(U_{1}=\lambda_{11}u_{1(1)}+\lambda_{12}u_{1(2)}=0.9\cdot 2.5+0.5\cdot 1=2.75\), so customer 1 is not captured in this solution. However, customer 3 prefers facility 2 over facility 1, thus \(U_{3}=0.9\cdot 2.5+0.5\cdot 2=3.25\), so customer 3 is captured (and so is customer 2). The optimal value is also 2._ _As illustrated, \(\mathbf{\lambda_{i}}=(1,0)\) favors the location of fewer but bigger/more attractive facilities, whereas other vectors tend to favor solutions with more facilities of smaller size.
The decision maker can choose the adequate \(\mathbf{\lambda_{i}}\) depending on the setting, the type of customer classes (if they base their utility on a single facility or on a combination of several of them) and the desired solutions._
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline _Customers_ & \(u_{i0}\) & \multicolumn{3}{c}{\(u_{i1}\)} & \multicolumn{3}{c}{\(u_{i2}\)} \\ \cline{3-8} & & \(k=1\) & \(k=2\) & \(k=3\) & \(k=1\) & \(k=2\) & \(k=3\) \\ \hline \(i=1\) & \(3\) & \(2\) & _2.5_ & \(3\) & \(1\) & _1.5_ & \(2\) \\ \(i=2\) & \(3\) & \(2\) & \(3\) & \(4\) & \(1\) & _1.5_ & \(2\) \\ \(i=3\) & \(3\) & _1.5_ & \(2\) & _2.5_ & _2.5_ & \(3\) & _3.5_ \\ \hline \hline \end{tabular} \end{table} Table 1. Utility matrix for Example 1.
Figure 1. Illustrative example of solutions obtained using different \(\mathbf{\lambda_{i}}\).
## 3. Reformulation of the bilevel problem into a MINLP In the bilevel formulation (8), the first-level objective value depends on the decision of the customers, i.e., on the values of variables \(z_{i}^{ts}\). Thus, for each customer class \(i\), time period \(t\) and scenario \(s\), the value of \(z_{i}^{ts}\) is obtained by solving an optimization problem where the value of the captured utility variable \(U_{i}^{ts}\) is obtained as the solution of an assignment problem. To obtain a single-level formulation, we consider fixed \(i\), \(t\) and \(s\) and we focus on obtaining the value of \(z_{i}^{ts}\) for fixed partial utility values \(u_{ij}^{ts}\). **Proposition 1**.: _Consider the following single-level MINLP:_ \[\max_{\mathbf{x},\mathbf{z},\mathbf{u},\mathbf{\sigma}} \sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{I}}n_{i}^{t}\frac{1}{| \mathcal{S}|}\sum_{s\in\mathcal{S}}z_{i}^{ts} \tag{9a}\] \[\mathrm{s.t.}\ \text{(1a)-(1d), (5)},\] (9b) \[u_{i0}^{ts}z_{i}^{ts}\leq U_{i}^{ts}z_{i}^{ts},\quad\forall i\in \mathcal{I},t\in\mathcal{T},s\in\mathcal{S},\] (9c) \[U_{i}^{ts}=\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u_{ij}^{ts}\sigma_{ijr}^{ts},\quad\forall i\in\mathcal{I},t\in\mathcal{T},s \in\mathcal{S},\] (9d) \[\sum_{j\in\mathcal{J}}\sigma_{ijr}^{ts}\leq 1,\quad\forall i\in \mathcal{I},t\in\mathcal{T},s\in\mathcal{S},r\in\mathcal{J},\] (9e) \[\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}\leq 1,\quad\forall i\in \mathcal{I},t\in\mathcal{T},s\in\mathcal{S},j\in\mathcal{J},\] (9f) \[\sigma_{ijr}^{ts}\in[0,1],\quad\forall i\in\mathcal{I},t\in \mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J},\] (9g) \[z_{i}^{ts}\in[0,1],\quad\forall i\in\mathcal{I},t\in\mathcal{T},s \in\mathcal{S}. \tag{9h}\] _Problem (9) is a relaxation of problem (8) with the same set of optimal solutions in terms of \((x,z)\)._ Proof.: Consider a location vector \(\bar{x}\) feasible for the first-level problem of (8), i.e., a vector \(\bar{x}\) satisfying constraints (1a)-(1d), and the values of the partial utilities \(\bar{u}_{ij}^{ts}\) given by (5). Then for fixed \(i\), \(t\), \(s\), the values of \(\bar{U}_{i}^{ts}\), \(\bar{z}_{i}^{ts}\) are univocally determined by the assignment problem and the second-level problem, respectively. To prove the statement, it suffices to see that (i) if \(\bar{z}_{i}^{ts}=0\) in (8), then constraints (9c)-(9h) guarantee \(z_{i}^{ts}=0\) in (9), and (ii) if \(\bar{z}_{i}^{ts}=1\) in (8), then there exists a feasible solution of (9) with \(z_{i}^{ts}=1\). Let us first reformulate the assignment problem (8e)-(8h).
To begin with, given that the constraint matrix is totally unimodular and the right-hand side vector is integer, we can relax the integrality constraints on the assignment variables \(\sigma\) for fixed values of \(u_{ij}^{ts}\), obtaining a linear assignment problem. Notice also that the nonnegativity assumption on the utilities \(u_{ij}^{ts}\) and the vector \(\mathbf{\lambda}_{i}\) implies that the linear equality constraints of the assignment problem can be relaxed to be less than or equal to \(1\). Second, the aim of this auxiliary problem is to provide \(\bar{U}_{i}^{ts}:=\max_{\sigma}\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}} \lambda_{ir}\bar{u}_{ij}^{ts}\sigma_{ijr}^{ts}\), which in turn takes part in the objective function of the second-level problem (8c) and is used to derive the value of \(z_{i}^{ts}\). But the value of \(z_{i}^{ts}\) only depends on the difference \(\bar{U}_{i}^{ts}-u_{i0}^{ts}\), and not on the actual value of \(\bar{U}_{i}^{ts}\): if \(\bar{U}_{i}^{ts}-u_{i0}^{ts}\geq 0\), then \(z_{i}^{ts}=1\). Therefore, for fixed partial utilities \(\bar{u}_{ij}^{ts}\) two possibilities can occur: (i) There exists an assignment \(\bar{\sigma}\) such that \(\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}\bar{u}_{ij}^{ts}\bar{ \sigma}_{ijr}^{ts}\geq u_{i0}^{ts}\). In this case, \(\bar{z}_{i}^{ts}=1\) in (8c) because \(\bar{U}_{i}^{ts}\geq\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir} \bar{u}_{ij}^{ts}\bar{\sigma}_{ijr}^{ts}\geq u_{i0}^{ts}\) (recall that we consider the optimistic approach). (ii) For any assignment \(\sigma\), it holds \(\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}\bar{u}_{ij}^{ts}\sigma _{ijr}^{ts}<u_{i0}^{ts}\). In this case, \(\bar{U}_{i}^{ts}<u_{i0}^{ts}\) and the maximization of the second-level problem implies \(z_{i}^{ts}=0\) regardless of the assignment chosen. Thus, we can consider the relaxation of the assignment problem given by (9d)-(9g). If (i) holds, then the maximization of \(z\) guarantees that an assignment \(\bar{\sigma}\) such that \(U_{i}^{ts}=\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u_{ij}^{ts} \bar{\sigma}_{ijr}^{ts}\geq u_{i0}^{ts}\) is chosen in any optimal solution. If (ii) holds, any assignment satisfying (9e)-(9g) gives rise to a feasible solution of (9) with \(z_{i}^{ts}=0\), and \(U_{i}^{ts}\) can take values in the interval \([0,\bar{U}_{i}^{ts}]\). Note that formulation (9) has feasible solutions with assignments which are infeasible in (8). Finally, we enforce the optimality conditions of the second level (8c)-(8d) through constraints. The integrality constraint on \(z_{i}^{ts}\) can be relaxed in (8) because the objective function (8c) is linear in \(z\), and a linear function over the compact interval \([0,1]\) attains its maximum at 0 or 1. In fact, the second-level problem (8c)-(8d) can be restated as: \[\max_{z_{i}^{ts}\in[0,1]}\quad u_{i0}^{ts}(1-z_{i}^{ts})+U_{i}^{ts}z_{i}^{ts}=u_ {i0}^{ts}+\max_{z_{i}^{ts}\in[0,1]}\quad(U_{i}^{ts}-u_{i0}^{ts})z_{i}^{ts}. \tag{10}\] The value of \(z_{i}^{ts}\) given by (10) can be equally obtained by simultaneously imposing the following constraints: \[u_{i0}^{ts}z_{i}^{ts}\leq U_{i}^{ts}z_{i}^{ts}, \tag{11a}\] \[u_{i0}^{ts}(1-z_{i}^{ts})\geq U_{i}^{ts}(1-z_{i}^{ts}),\] (11b) \[z_{i}^{ts}\in[0,1]. \tag{11c}\] When \(u_{i0}^{ts}>U_{i}^{ts}\), constraint (11a) forces \(z_{i}^{ts}=0\) and constraint (11b) is satisfied for any \(z_{i}^{ts}\in[0,1]\).
When \(u_{i0}^{ts}=U_{i}^{ts}\), both constraints are satisfied for any \(z_{i}^{ts}\in[0,1]\), and the first-level objective function maximizes \(z_{i}^{ts}\). And when \(u_{i0}^{ts}<U_{i}^{ts}\), constraint (11a) is always satisfied and constraint (11b) ensures that \(z_{i}^{ts}=1\). Finally, note that constraint (11b) need not be imposed, since the maximization of \(z_{i}^{ts}\) in the objective function implies \(z_{i}^{ts}=1\) in any optimal solution with \(u_{i0}^{ts}\leq U_{i}^{ts}\). ## 4. First resolution approach: a mixed-integer linear formulation In this section, we propose a linearization of formulation (9) that results in a MILP which can be solved using off-the-shelf solvers. We also introduce several families of valid inequalities and some preprocessing techniques to obtain a tight and compact formulation. A comparison between the MILP model and the model with the valid inequalities is included in Section 6. **Linearization of the single-level MINLP** (9).: Problem (9) is non-linear due to constraints (9c) and (9d). In a first step towards obtaining a linear model, we start with constraint (9c). For fixed \(i\), \(t\), \(s\) and partial utilities \(u_{ij}^{ts}\), consider the feasible region for \((z_{i}^{ts},\sigma_{ijr}^{ts})\) given by constraints (9c)-(9h): \[W(i,t,s):= \left\{(\mathbf{\sigma}_{i}^{ts},z_{i}^{ts})\in\mathbb{R}_{+}^{| \mathcal{J}|\times|\mathcal{J}|}\times[0,1]:\left(u_{i0}^{ts}-\sum_{j\in \mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u_{ij}^{ts}\sigma_{ijr}^{ts} \right)z_{i}^{ts}\leq 0,\right.\] \[\left.0\leq\sum_{j\in\mathcal{J}}\sigma_{ijr}^{ts}\leq 1\ \forall r\in \mathcal{J},\,0\leq\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}\leq 1\ \forall j\in\mathcal{J}\right\},\] where the value of \(U_{i}^{ts}\) has been replaced in (9c) using (9d) and is expressed as a function of \(\mathbf{\sigma}\). Defining sets \(W^{0}\) and \(W^{1}\) in the following manner: \[W^{0}(i,t,s):= \left\{(\mathbf{\sigma}_{i}^{ts},z_{i}^{ts})\in\mathbb{R}_{+}^{| \mathcal{J}|\times|\mathcal{J}|+1}:\mathbf{\sigma}_{i}^{ts}=\mathbf{0},z_{i}^{ts}=0 \right\},\] \[W^{1}(i,t,s):= \left\{(\mathbf{\sigma}_{i}^{ts},z_{i}^{ts})\in\mathbb{R}_{+}^{| \mathcal{J}|\times|\mathcal{J}|+1}:g(\mathbf{\sigma}_{i}^{ts}):=u_{i0}^{ts}-\sum_{j \in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u_{ij}^{ts}\sigma_{ijr}^{ts} \leq 0,\right.\] \[\left.0\leq\sum_{j\in\mathcal{J}}\sigma_{ijr}^{ts}\leq 1\ \forall r\in \mathcal{J},\,0\leq\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}\leq 1\ \forall j\in\mathcal{J},z_{i}^{ts}=1\right\},\] we have that \(W\supset W^{0}\cup W^{1}\), where \(z_{i}^{ts}\in\{0,1\}\) can be viewed as an indicator variable. The inclusion is strict because when \(z_{i}^{ts}=0\) the assignment is irrelevant, so we only keep the feasible assignment \(\mathbf{\sigma}=\mathbf{0}\) and reject the rest of spurious solutions. On the other hand, for \(z_{i}^{ts}=1\) the assignment \(\mathbf{\sigma}\) needs to satisfy \(g(\mathbf{\sigma})\) in order to be feasible for (9). In fact, we define \(g(\mathbf{\sigma})\) to emphasize that this constraint only needs to be satisfied for \(z=1\), and to keep a notation consistent with Gunluk and Linderoth (2012). Set \(W^{0}\) is a point and \(W^{1}\) is convex and bounded, so the convex hull of \(W^{0}\cup W^{1}\) can be characterized applying a perspective transformation (see Gunluk and Linderoth, 2012). 
Using Lemma 3.1 and Corollary 3.1 from this paper, we obtain that \(conv\left(W^{0}\cup W^{1}\right)=\text{closure}(W^{-})\), with \[W^{-}(i,t,s)=\left\{(\mathbf{\sigma}_{i}^{ts},z_{i}^{ts})\in\mathbb{R}_{+}^{|\mathcal{J}|\times|\mathcal{J}|+1}:z_{i}^{ts}g(\frac{\mathbf{\sigma}_{i}^{ts}}{z_{i}^{ts}})=u_{i0}^{ts}z_{i}^{ts}-\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u_{ij}^{ts}\sigma_{ijr}^{ts}\leq 0,\right.\] \[\left.0\leq\sum_{j\in\mathcal{J}}\sigma_{ijr}^{ts}\leq z_{i}^{ts}\ \forall r\in\mathcal{J},0\leq\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}\leq z_{i}^{ts}\ \forall j\in\mathcal{J},0<z_{i}^{ts}\leq 1\right\}.\] Next, to linearize the remaining bilinear terms \(u_{ij}^{ts}\sigma_{ijr}^{ts}\) we exploit the fact that the \(\sigma\) variables are binary. Although they are stated as continuous in \(W^{-}(i,t,s)\), w.l.o.g. we can restrict them back to take values in \(\{0,1\}\) in order to linearize the product of a binary variable and a continuous bounded variable by a continuous variable in the manner of McCormick (1976). However, after this linearization we can no longer relax the integrality constraints on \(\sigma\). Thus, defining a new set of variables \(w_{ijr}^{ts}:=u_{ij}^{ts}\sigma_{ijr}^{ts}\), \(\forall j,r\in\mathcal{J}\), the linearization of constraint \(u_{i0}^{ts}z_{i}^{ts}\leq\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u_{ij}^{ts}\sigma_{ijr}^{ts}\) reads: \[u_{i0}^{ts}z_{i}^{ts}\leq\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}w_{ijr}^{ts},\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},\] and the following sets of constraints need to be added to the model: \[w_{ijr}^{ts}\leq M_{ij}^{ts}\sigma_{ijr}^{ts},\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J}, \tag{12a}\] \[w_{ijr}^{ts}\leq u_{ij}^{ts},\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J}, \tag{12b}\] \[w_{ijr}^{ts}\geq u_{ij}^{ts}-M_{ij}^{ts}(1-\sigma_{ijr}^{ts}),\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J}, \tag{12c}\] \[w_{ijr}^{ts}\geq 0,\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J}, \tag{12d}\] where for each \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), each Big-M constant is an upper bound on the value of \(u_{ij}^{ts}\), \(M_{ij}^{ts}:=a_{ij|\mathcal{K}_{j}|}^{ts}\). Since we maximize on \(z\) (and hence on \(w\)), we can omit the third set of constraints (12c) and still obtain a valid model. The resulting MILP is: \[\max \sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{I}}n_{i}^{t}\frac{1}{|\mathcal{S}|}\sum_{s\in\mathcal{S}}z_{i}^{ts} \tag{13a}\] s.t.
\[\sum_{\begin{subarray}{c}t^{\prime}\in\mathcal{T}:\\ t^{\prime}\leq t\end{subarray}}\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j }}c_{jk}^{t^{\prime}}(x_{jk}^{t^{\prime}}-x_{jk}^{t^{\prime}-1})\leq\sum_{ \begin{subarray}{c}t^{\prime}\in\mathcal{T}:\\ t^{\prime}\leq t\end{subarray}}b^{t^{\prime}},\quad\forall t\in\mathcal{T}, \tag{13b}\] \[\sum_{k\in\mathcal{K}_{j}}x_{jk}^{t}\leq 1,\quad\forall j\in \mathcal{J},t\in\mathcal{T},\] (13c) \[\sum_{k\in\mathcal{K}_{j}}kx_{jk}^{t-1}\leq\sum_{k\in\mathcal{K}_ {j}}kx_{jk}^{t},\quad\forall j\in\mathcal{J},t\in\mathcal{T}\setminus\{1\},\] (13d) \[u_{ij}^{ts}=\sum_{k\in\mathcal{K}_{j}}a_{ijk}^{ts}x_{jk}^{t},\quad \forall i\in\mathcal{I},j\in\mathcal{J},t\in\mathcal{T},s\in\mathcal{S},\] (13e) \[u_{i0}^{ts}z_{i}^{ts}\leq\sum_{j\in\mathcal{J}}\sum_{r\in \mathcal{J}}\lambda_{ir}w_{ijr}^{ts},\quad\forall i\in\mathcal{I},t\in\mathcal{T },s\in\mathcal{S},\] (13f) \[\sum_{j\in\mathcal{J}}\sigma_{ijr}^{ts}\leq z_{i}^{ts},\quad \forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},r\in\mathcal{J},\] (13g) \[\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}\leq z_{i}^{ts},\quad \forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},j\in\mathcal{J},\] (13h) \[w_{ijr}^{ts}\leq a_{ij|\mathcal{K}_{j}|}^{ts}\sigma_{ijr}^{ts}, \quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J}, \tag{13i}\] \[w^{ts}_{ijr}\leq u^{ts}_{ij},\quad\forall i\in\mathcal{I},t\in \mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J}, \tag{13j}\] \[x^{t}_{jk}\in\{0,1\},\quad\forall j\in\mathcal{J},t\in\mathcal{T},k\in\mathcal{K}_{j},\] (13k) \[z^{ts}_{i}\in[0,1],\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},\] (13l) \[w^{ts}_{ijr}\geq 0,\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J},\] (13m) \[\sigma^{ts}_{ijr}\in\{0,1\},\quad\forall i\in\mathcal{I},t\in \mathcal{T},s\in\mathcal{S},j,r\in\mathcal{J}. \tag{13n}\] ### Valid inequalities and preprocessing techniques for model (13) In the following, we introduce several sets of valid inequalities for model (13). **Proposition 2**.: _For each \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), the set of inequalities_ \[\sum_{r\in\mathcal{J}}w^{ts}_{ijr}\leq u^{ts}_{ij},\quad\forall j\in\mathcal{ J}, \tag{14}\] _is valid for formulation (13). Furthermore, they dominate constraints (13j)._ Proof.: The proof of the validity is straightforward considering the definition of \(w\) and constraints (13h): \[\sum_{r\in\mathcal{J}}w^{ts}_{ijr}:=\sum_{r\in\mathcal{J}}u^{ts}_{ij}\sigma^{ ts}_{ijr}=u^{ts}_{ij}\sum_{r\in\mathcal{J}}\sigma^{ts}_{ijr}\leq u^{ts}_{ij}.\] Likewise, they dominate (13j) because the right-hand side of the constraints is the same, but the left-hand side of (14) has a sum of the non-negative variables \(w^{ts}_{ijr}\). **Proposition 3**.: _For each \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), the set of inequalities_ \[w^{ts}_{ijr}\leq a^{ts}_{ijk}\sigma^{ts}_{ijr}+\sum_{\begin{subarray}{c}k^{ \prime}\in\mathcal{K}_{j}:\\ k^{\prime}>k\end{subarray}}(a^{ts}_{ijk^{\prime}}-a^{ts}_{ijk})x^{t}_{jk^{ \prime}},\quad\forall j,r\in\mathcal{J},k\in\mathcal{K}_{j}, \tag{15}\] _is valid for formulation (13)._ Proof.: For given \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), \(j,r\in\mathcal{J}\), let us prove that the right-hand side of (15) is an upper bound on the value of \(w^{ts}_{ijr}\)\(\forall k\in\mathcal{K}_{j}\). 
Two situations can arise, depending on the values of \(\sigma^{ts}_{ijr}\) and \(x^{t}_{jk}\):

* If \(\sigma^{ts}_{ijr}=0\) or \(\sum_{k\in\mathcal{K}_{j}}x^{t}_{jk}=0\), then \(w^{ts}_{ijr}=0\) and the right-hand side of the constraint is non-negative (because \(a\) is non-decreasing in \(k\) by assumption), so the constraint holds.
* Otherwise, \(w^{ts}_{ijr}\leq a^{ts}_{ij\bar{k}}\) for a given \(\bar{k}\in\mathcal{K}_{j}\) such that \(x^{t}_{j\bar{k}}=1\). Then for \(1\leq k<\bar{k}\), the right-hand side of (15) is \[a^{ts}_{ijk}\sigma^{ts}_{ijr}+\sum_{k^{\prime}\in\mathcal{K}_{j}:k^{\prime}>k}(a^{ts}_{ijk^{\prime}}-a^{ts}_{ijk})x^{t}_{jk^{\prime}}=a^{ts}_{ijk}+(a^{ts}_{ij\bar{k}}-a^{ts}_{ijk})x^{t}_{j\bar{k}}=a^{ts}_{ij\bar{k}}.\] And for \(k\geq\bar{k}\), the right-hand side of (15) becomes \(a^{ts}_{ijk}\) with \(a^{ts}_{ijk}\geq a^{ts}_{ij\bar{k}}\), so (15) is valid \(\forall k\).

The previous valid inequalities are of special relevance because, apart from strengthening the bound of the linear relaxation of the problem, they allow us to relax the integrality constraints on the \(\sigma\) variables:

**Proposition 4**.: _The integrality constraints (13n) can be relaxed in formulation (13) if we include valid inequalities (15)._

Proof.: Let us prove, for fixed \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), that the maximum value that the sum \(\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}w^{ts}_{ijr}\) attains in formulation (13) with inequalities (15) is bounded by the value that the captured utility \(U^{ts}_{i}\) takes in the MINLP (9). If \(z_{i}^{ts}=0\), then (13g)-(13h) imply \(\mathbf{\sigma}=0\), so the assignment is integer and \(U_{i}^{ts}=0\). As for \(z_{i}^{ts}=1\), let \(\bar{x}\) be a feasible (integer) solution of the first-level problem, i.e., an integer vector satisfying (13b)-(13d), and fixed values \(\bar{u}_{ij}^{ts}\). For a given \(j\in\mathcal{J}\), if \(\sum_{k\in\mathcal{K}_{j}}x_{jk}^{t}=0\), then by constraints (13j) and (13m) it holds \(w_{ijr}^{ts}=0\). Otherwise, let \(k_{j}\in\mathcal{K}_{j}\) be the unique \(k\) such that \(\bar{x}_{jk_{j}}^{t}=1\). Then, constraint \(k_{j}\) from set (15) is \[w_{ijr}^{ts}\leq a_{ijk_{j}}^{ts}\sigma_{ijr}^{ts}=\bar{u}_{ij}^{ts}\sigma_{ijr}^{ts},\] and therefore \(\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}w_{ijr}^{ts}\leq\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}u_{ij}^{ts}\sigma_{ijr}^{ts}=U_{i}^{ts}\) in (9).

**Proposition 5**.: _For each \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), the set of inequalities_ \[\sum_{r\in\mathcal{J}}w_{ijr}^{ts}\leq\sum_{r\in\mathcal{J}}a_{ijk}^{ts}\sigma_{ijr}^{ts}+\sum_{\begin{subarray}{c}k^{\prime}\in\mathcal{K}_{j}:\\ k^{\prime}>k\end{subarray}}(a_{ijk^{\prime}}^{ts}-a_{ijk}^{ts})x_{jk^{\prime}}^{t},\quad\forall j\in\mathcal{J},k\in\mathcal{K}_{j}, \tag{16}\] _is valid for formulation (13)._

Proof.: For a fixed \(j\in\mathcal{J}\), we distinguish two cases. If \(\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}=0\) or \(\sum_{k^{\prime}\in\mathcal{K}_{j}}x_{jk^{\prime}}^{t}=0\), then \(\sum_{r\in\mathcal{J}}w_{ijr}^{ts}=0\) and the right-hand side of (16) is non-negative, so the constraints are valid. Otherwise, \(\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}=\sum_{k\in\mathcal{K}_{j}}x_{jk}^{t}=1\), so there exist \(\bar{r}\) and \(\bar{k}\) such that \(\sigma_{ij\bar{r}}^{ts}=x_{j\bar{k}}^{t}=1\).
In this case, the right-hand side of (16) needs to be an upper bound of \(\sum_{r\in\mathcal{J}}w_{ijr}^{ts}=w_{ij\bar{r}}^{ts}\) for all \(k\in\mathcal{K}_{j}\). Again we distinguish two cases. For \(k<\bar{k}\), \(\sum_{r\in\mathcal{J}}a_{ijk}^{ts}\sigma_{ijr}^{ts}+\sum_{k^{\prime}\in\mathcal{K}_{j}:k^{\prime}>k}(a_{ijk^{\prime}}^{ts}-a_{ijk}^{ts})x_{jk^{\prime}}^{t}=a_{ijk}^{ts}+(a_{ij\bar{k}}^{ts}-a_{ijk}^{ts})=a_{ij\bar{k}}^{ts}\), which upper bounds \(w_{ij\bar{r}}^{ts}\). And for \(k\geq\bar{k}\), the right-hand side of (16) is equal to \(a_{ijk}^{ts}\), an upper bound on \(a_{ij\bar{k}}^{ts}\).

**Proposition 6**.: _The following family of inequalities_ \[\sum_{\begin{subarray}{c}k^{\prime}\in\mathcal{K}_{j}:\\ k^{\prime}\geq k\end{subarray}}x_{jk^{\prime}}^{t-1}\leq\sum_{\begin{subarray}{c}k^{\prime}\in\mathcal{K}_{j}:\\ k^{\prime}\geq k\end{subarray}}x_{jk^{\prime}}^{t},\quad\forall j\in\mathcal{J},t\in\mathcal{T}\setminus\{1\},k\in\mathcal{K}_{j}, \tag{17}\] _is valid for formulation (13) and dominates constraints (13d)._

Proof.: The validity of (17) is straightforward using (13c) and (13d). To prove their dominance over (13d), it suffices to note that, for a fixed \(j\) and \(t\), the corresponding constraint from (13d) is obtained by summing up the subset of constraints from (17) for all \(k\in\mathcal{K}_{j}\).

As previously stated, the \(\sigma\) variables are just auxiliary variables used to compute the value of \(U_{i}^{ts}\). Therefore, we can develop inequalities that bound the values of the assignment variables as long as the maximum value that variable \(U_{i}^{ts}\) attains for a given solution of the first-level problem remains unaltered. The following two propositions are developed with this purpose:

**Proposition 7**.: _For each \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), the set of inequalities_ \[\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}\leq\sum_{k\in\mathcal{K}_{j}}x_{jk}^{t},\quad\forall j\in\mathcal{J}, \tag{18}\] _is valid for formulation (13), in the sense that it does not eliminate feasible solutions in terms of the variables \(x,z\)._

Proof.: If \(\sum_{k\in\mathcal{K}_{j}}x_{jk}^{t}=1\), then (18) is redundant. And if \(\sum_{k\in\mathcal{K}_{j}}x_{jk}^{t}=0\), then \(\sum_{r\in\mathcal{J}}w_{ijr}^{ts}=0\) regardless of the value of \(\sum_{r\in\mathcal{J}}\sigma_{ijr}^{ts}\), so the latter sum can be set to zero.

Finally, we derive some preprocessing of the problem that allows us to eliminate variables and constraints for particular cases of \(\mathbf{\lambda}_{i}\).

**Proposition 8**.: _If \(\lambda_{ir}=0\), then \(\sigma_{ijr}^{ts}\) need not be defined in formulation (13) \(\forall i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), \(j\in\mathcal{J}\)._

Proof.: The result follows using a similar reasoning to that of Proposition 7.

Proposition 8 allows us to obtain a simplified model when there is no need to order all the partial utilities of the customers because only a subset of them takes part in the computation of the captured utility.

## 5. Second resolution approach: Benders Decomposition

In this section, we propose a Benders-like decomposition approach to solve model (9). The standard Benders' recipe consists of projecting out all the continuous variables (i.e., all the variables except for vector \(x\)) and the associated constraints in (9), following the structure of the bilevel model (8). However, we follow a different approach. Recall that for fixed values of the location variables \(x\), our problem is decomposable by customer \(i\), time period \(t\) and scenario \(s\).
Since there is only one customer decision variable \(z_{i}^{ts}\) per subproblem and it takes part in the objective function, we leave variables \(x\) and \(z\) in the master problem and project out the variables and constraints associated to the assignment problem used to characterize the OMf. This translates to solving the master problem \[\text{(MP)}\quad\max \sum_{t\in\mathcal{T}}\sum_{i\in\mathcal{I}}n_{i}^{t}\frac{1}{|\mathcal{S}|}\sum_{s\in\mathcal{S}}z_{i}^{ts} \tag{19a}\] \[\text{s.t.}\quad\text{(1a)--(1d), (5)}, \tag{19b}\] \[\mathcal{B}_{i}^{ts}(x,z,u)\geq 0,\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S}, \tag{19c}\] \[z_{i}^{ts}\in[0,1],\quad\forall i\in\mathcal{I},t\in\mathcal{T},s\in\mathcal{S}, \tag{19d}\] where \(\mathcal{B}_{i}^{ts}(x,z,u)\) represents the Benders concave function bounding variable \(z_{i}^{ts}\) by the captured utility \(U_{i}^{ts}\) given by the OMf assignment problem. For each \(i\), \(t\), \(s\) and at each iteration, \(\mathcal{B}_{i}^{ts}(\bar{x},\bar{z},\bar{u})\) is a feasibility cut associated to the dual of each assignment subproblem for a fixed solution \((\bar{x},\bar{z},\bar{u})\) of the relaxation of the master problem (19): \[\text{(SUB)}_{i}^{ts}\quad\max\left\{0:(\text{9c})-(\text{9g})\right\}. \tag{20}\] Instead of solving the dual of problems (20), we seek a different normalization of the Benders cuts that appears naturally in our problem. For that, we follow the reasoning of the proof of Proposition 1 and exploit the fact that a solution of the master (19) is feasible if and only if it satisfies constraint \(u_{i0}^{ts}z_{i}^{ts}\leq U_{i}^{ts}z_{i}^{ts}\). The latter constraint is non-linear, but it is linear (and hence concave) for fixed values of \(U\). Furthermore, since \(U_{i}^{ts}\) is the captured utility associated to the location of the facilities, it can be seen as a concave function of \(x\): \(U_{i}^{ts}(x)\). Therefore, we can approximate the non-linear constraints by linear (outer approximation) cuts that are generated and added on the fly to feasible (possibly non-integer) solutions of the master problem. Thus, for fixed values of the location variables \(\bar{x}\), \(U_{i}^{ts}\) can be overestimated by a supporting hyperplane at \(\bar{x}\) and the following linear cut can be obtained: \[u_{i0}^{ts}z_{i}^{ts}\leq U_{i}^{ts}(x)z_{i}^{ts}\leq\left(U_{i}^{ts}(\bar{x})+\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\bar{s}_{jk}(x_{jk}^{t}-\bar{x}_{jk}^{t})\right)z_{i}^{ts}\leq\\ \leq U_{i}^{ts}(\bar{x})z_{i}^{ts}+\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\left[\bar{s}_{jk}(x_{jk}^{t}-\bar{x}_{jk}^{t})\right]^{+}. \tag{21}\] The latter cut is known as a generalized Benders' cut (Geoffrion, 1972), and \(\bar{s}_{jk}\in\partial U_{i}^{ts}(\bar{x})\) is any supergradient of \(U_{i}^{ts}(x)\) at \(\bar{x}\). In a similar spirit to Fischetti et al. (2017), we explain in the following how we can compute the values of \(\bar{s}_{jk}\) using the Lagrangian function. To this end, \(U_{i}^{ts}(x)\) can be bounded by the objective value of problem (8e)-(8h).
Using (5) to replace the values of \(u_{ij}^{ts}\) in terms of \(x\), the objective function (8e) becomes: \[\max_{\sigma_{i}^{ts}} \sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\lambda_{ir}\sum_{k\in\mathcal{K}_{j}}a_{ijk}^{ts}x_{jk}^{t}\sigma_{ijr}^{ts} \tag{22}\] We linearize the objective function defining a new set of assignment variables \(\sigma_{ijkr}^{ts}\coloneqq x_{jk}^{t}\sigma_{ijr}^{ts}\) \(\forall i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), \(j,r\in\mathcal{J}\), \(k\in\mathcal{K}_{j}\). Intuitively, \(\sigma_{ijkr}^{ts}=1\) if and only if \(x_{jk}^{t}=1\) and \(u_{ij}^{ts}=a_{ijk}^{ts}\) is the \(r\)-th greatest partial utility for customer \(i\) at time period \(t\) in scenario \(s\). Using this new set of variables, model (8e)-(8h) is reformulated as the following LP (omitting subscripts and superscripts \(i\), \(t\), \(s\) to ease notation): \[\max_{\sigma} \sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\lambda_{r}a_{jk}\sigma_{jkr} \tag{23a}\] \[\mathrm{s.t.} \sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\sigma_{jkr}\leq 1,\quad\forall r\in\mathcal{J},\] (23b) \[\sum_{r\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\sigma_{jkr}\leq 1,\quad\forall j\in\mathcal{J},\] (23c) \[\sum_{r\in\mathcal{J}}\sigma_{jkr}\leq x_{jk},\quad\forall j\in\mathcal{J},k\in\mathcal{K}_{j}\] (23d) \[\sigma_{jkr}\geq 0,\quad\forall j,r\in\mathcal{J},k\in\mathcal{K}_{j}. \tag{23e}\] Note that this linearization has many more variables than the one proposed in Section 4. However, these variables are to be projected out, and this linearization has the advantage of having a much simpler structure, since it is a linear assignment problem. In the same spirit, observe also that constraints (23c) are dominated by (23d) because any feasible solution of (MP) satisfies \(\sum_{k\in\mathcal{K}_{j}}x_{jk}\leq 1\). However, we decided to keep them in the model to make the solution of its dual easier to interpret. Let \((\bar{x},\bar{z},\bar{u})\) be a solution of the master problem (19) and consider fixed customer \(i\), time period \(t\) and scenario \(s\). Consider an optimal solution \(\sigma_{jkr}^{*}\) of (23), and let \(\gamma_{r}^{*}\), \(\delta_{j}^{*}\), \(\eta_{jk}^{*}\) be nonnegative optimal dual variables for each set of constraints (23b)-(23d). Then the Lagrangian function of \(U(x)\) at \(\bar{x}\) evaluated at \(\sigma_{jkr}^{*}\), \(\gamma_{r}^{*}\), \(\delta_{j}^{*}\), \(\eta_{jk}^{*}\) is: \[\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\lambda_{r}a_{jk}\sigma_{jkr}^{*}+\sum_{r\in\mathcal{J}}\gamma_{r}^{*}(1-\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\sigma_{jkr}^{*})+\\ +\sum_{j\in\mathcal{J}}\delta_{j}^{*}(1-\sum_{r\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\sigma_{jkr}^{*})+\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\eta_{jk}^{*}(\bar{x}_{jk}-\sum_{r\in\mathcal{J}}\sigma_{jkr}^{*}), \tag{24}\] so \(\bar{s}_{jk}\) depends exclusively on the dual values \(\eta_{jk}\): \(\bar{s}_{jk}=\eta_{jk}^{*}\) and the generalized Benders' cut reads \[u_{0}z\leq\left(\sum_{j\in\mathcal{J}}\sum_{r\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\lambda_{r}a_{jk}\sigma_{jkr}^{*}\right)z+\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}\left[\eta_{jk}^{*}(x_{jk}-\bar{x}_{jk})\right]^{+}. \tag{25}\] Rather than solving the dual of problem (23), we have derived specialized primal and dual algorithms to obtain the optimal values of the dual variables for any integer vector \(\bar{x}\). In this way, we provide a fast inclusion of cuts that are numerically accurate.
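To give a concrete feel for why these cuts are cheap to generate, the following Python sketch (with purely illustrative data) evaluates the subproblem for a fixed integer \(\bar{x}\): with non-negative partial utilities and non-increasing \(\mathbf{\lambda}\)-weights, as is the case for the \(\mathbf{\lambda}\)-vectors considered here, the optimal value of the assignment LP (23) is obtained by sorting the utilities of the open facilities in non-increasing order and weighting the \(r\)-th largest by \(\lambda_{r}\); this is the observation that the primal algorithm below formalizes.

```python
# Minimal sketch: for a fixed integer location vector x, the assignment LP (23)
# reduces to sorting the partial utilities of the open facilities.
# The instance below (utilities, lambda weights, opt-out value) is illustrative only.

def captured_utility(u_open, lam):
    """Ordered-median captured utility for fixed locations: match the r-th largest
    open-facility utility with weight lambda_r (non-increasing lam, non-negative u)."""
    ranked = sorted(u_open, reverse=True)           # u_{tau(1)} >= u_{tau(2)} >= ...
    return sum(l * u for l, u in zip(lam, ranked))  # unmatched ranks contribute 0

# Example: three candidate sites, facilities open at sites 0 and 2.
u = [2.5, 1.5, 3.0]      # partial utilities u_ij implied by the open facility types
x_open = [0, 2]          # indices j with sum_k x_jk = 1
lam = [1.0, 0.5, 0.0]    # Type L weights: only the two best facilities count
u0 = 3.2                 # opt-out utility u_i0

U = captured_utility([u[j] for j in x_open], lam)
z = 1 if U >= u0 else 0  # optimistic second-level response, as in (10)-(11)
print(U, z)              # 3.0*1.0 + 2.5*0.5 = 4.25 >= 3.2, so the customer is captured
```

The closed-form dual values that yield the cut coefficients admit a similarly direct computation, as detailed next.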
First, we introduce the dual problem of (23): \[\min \sum_{r\in\mathcal{J}}\gamma_{r}+\sum_{j\in\mathcal{J}}\delta_{j}+\sum_{j\in\mathcal{J}}\sum_{k\in\mathcal{K}_{j}}x_{jk}\eta_{jk} \tag{26a}\] \[\mathrm{s.t.} \gamma_{r}+\delta_{j}+\eta_{jk}\geq\lambda_{r}a_{jk},\quad\forall j,r\in\mathcal{J},k\in\mathcal{K}_{j}. \tag{26b}\] Next, we introduce the algorithms to solve (23) and (26) for fixed \(i,t,s\).

```
Input: Integer solution \((\bar{x},\bar{z},\bar{u})\) of problem (19).
Output: Optimal solution \((\sigma^{*}_{jkr})\) of problem (23).
Ordering \(\tau:\mathcal{J}\rightarrow\mathcal{J}\) such that \(\bar{u}_{\tau(1)}\geq\cdots\geq\bar{u}_{\tau(|\mathcal{J}|)}\).
for \(j\in\mathcal{J}\) do
  for \(r\in\mathcal{J}\) do
    if \(j==\tau(r)\) then
      \(\bar{k}_{j}\gets k\) if \(\bar{x}_{jk}=1\), \(\bar{k}_{j}\gets 0\) if \(\sum_{k\in\mathcal{K}_{j}}\bar{x}_{jk}=0\)
      for \(k\in\mathcal{K}_{j}\) do
        if \(k==\bar{k}_{j}\) then
          \(\sigma^{*}_{jkr}\gets 1\)
        else
          \(\sigma^{*}_{jkr}\gets 0\)
        endif
      endfor
    endif
  endfor
endfor
```
**Algorithm 1** [Primal Algorithm]

```
Input: Integer solution \((\bar{x},\bar{z},\bar{u})\) of problem (19).
Output: Optimal solution \((\gamma^{*}_{r},\delta^{*}_{j},\eta^{*}_{jk})\) of problem (26).
Ordering \(\tau:\mathcal{J}\rightarrow\mathcal{J}\) such that \(\bar{u}_{\tau(1)}\geq\cdots\geq\bar{u}_{\tau(|\mathcal{J}|)}\).
for \(r\in\mathcal{J}\) do
  \(\gamma^{*}_{r}\leftarrow\sum_{r^{\prime}=r}^{|\mathcal{J}|-1}\left(\lambda_{r^{\prime}}-\lambda_{r^{\prime}+1}\right)\bar{u}_{\tau(r^{\prime})}+\lambda_{|\mathcal{J}|}\bar{u}_{\tau(|\mathcal{J}|)}\)
endfor
for \(j\in\mathcal{J}\) do
  \(\delta^{*}_{j}\leftarrow\sum_{r^{\prime}=\tau^{-1}(j)}^{|\mathcal{J}|-1}\lambda_{r^{\prime}+1}\left(\bar{u}_{\tau(r^{\prime})}-\bar{u}_{\tau(r^{\prime}+1)}\right)\)
endfor
for \(j\in\mathcal{J}\) do
  \(\bar{k}_{j}\gets k\) if \(\bar{x}_{jk}=1\), \(\bar{k}_{j}\gets 0\) if \(\sum_{k\in\mathcal{K}_{j}}\bar{x}_{jk}=0\)
  for \(k\in\mathcal{K}_{j}\) do
    if \(k\leq\bar{k}_{j}\) then
      \(\eta^{*}_{jk}\gets 0\)
    else
      \(r^{*}_{jk}\leftarrow\min\left\{|\mathcal{J}|,\left\{r\in\mathcal{J}:\bar{u}_{\tau(r)}<a_{jk}\right\}\right\}\)
      \(\eta^{*}_{jk}\leftarrow\lambda_{r^{*}_{jk}}a_{jk}-\gamma^{*}_{r^{*}_{jk}}-\delta^{*}_{j}\)
    endif
  endfor
endfor
```
**Algorithm 2** [Dual Algorithm]

**Theorem 9**.: _Algorithm (1) (resp. (2)) provides an optimal solution of formulation (23) (resp. (26)) for a given integer vector \(\bar{x}\)._

Proof.: The proof can be found in Appendix A.

Using the notation from Algorithms 1 and 2 and simplifying the value of \(\eta^{*}\), the Benders' cut introduced for \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\), to cut out infeasible solutions of (MP) is: \[u_{i0}^{ts}z_{i}^{ts}\leq\left(\sum_{j\in\mathcal{J}}\lambda_{i\tau^{-1}(j)}\bar{u}_{ij}^{ts}\right)z_{i}^{ts}+\sum_{j\in\mathcal{J}}\sum_{\begin{subarray}{c}k\in\mathcal{K}_{j}:\\ k>\bar{k}_{j}^{t}\end{subarray}}\left(\lambda_{ir_{jk}^{*}}a_{ijk}^{ts}-\lambda_{i\tau^{-1}(j)}\bar{u}_{ij}^{ts}\right)x_{jk}^{t}, \tag{27}\] where \(\bar{k}_{j}^{t}=0\) if \(\sum_{k\in\mathcal{K}_{j}}\bar{x}_{jk}^{t}=0\), and \(\bar{k}_{j}^{t}\) is the unique \(k\) such that \(\bar{x}_{jk}^{t}=1\) otherwise (it coincides with the definition of \(\bar{k}_{j}\) in Algorithms 1 and 2). Note that expression (27) coincides with (25). Indeed, \(\eta_{jk}^{*}\bar{x}_{jk}=0\), since \(\bar{x}_{jk}^{t}=1\Rightarrow\eta_{jk}^{*}=0\).
Furthermore, \(\eta_{jk}^{*}=0\ \forall j\in\mathcal{J}\), \(k\in\mathcal{K}_{j}\) such that \(k\leq\bar{k}_{j}\), and \(\eta_{jk}^{*}=\lambda_{r_{jk}^{*}}a_{jk}-\gamma_{r_{jk}^{*}}^{*}-\delta_{j}^{*}\) is simplified in the second sum of the above expression for the remaining \(k\in\mathcal{K}_{j}\). ## 6. Computational study In this section we report results from computational experiments that empirically show our contribution to the CMCFL. We use two different data sets, one of a synthetic character, where the parameters are randomly generated, and another one based on the real data set provided by Lamontagne et al. (2022). The first one is designed to show and analyze the computational improvement implied by the methodology developed in the paper. The real data set is used to test the performance of our approach in a real application of the problem studied in the literature, specifically, the location of charging stations for electric vehicles. Throughout the section, we refer to these sets as Syn-Data and Real-Data, respectively. In these experiments, we use SL to denote the MILP formulation (13), VI for the MILP formulation incorporating valid inequalities (14), (15) (and hence, with \(\sigma\in[0,1]\) due to Proposition 4), (17) and (18) and the preprocessing due to Proposition 8 from Section 4.1, and B for the Benders' Decomposition approach presented in Section 5. All experiments are run on a Linux-based server with CPUs clocking at 2.6 GHz, 8 threads and 32 gigabytes of RAM. The models are coded in Python 3.7 and we used Gurobi 9.5 as optimization solver. The rest of the section is organized as follows: Section 6.1 defines the parameters and the size of the two data sets used for the experiments; Section 6.2 shows the computational results of the proposed models and approaches; finally, Section 6.3 presents the case study on the location of charging stations for electric vehicles. ### Data The parameters and sizes of the instances used for the simulations are given in the following subsections. However, here we include the definition of some parameters that apply to both Syn-Data and Real-Data. Specifically, the model for the parameter \(a_{ijk}^{ts}\) representing the partial utility is: \[a_{ijk}^{ts}=\hat{a}_{ijk}^{t}+\bar{a}_{ij}^{t}+\epsilon_{ijk}^{ts},\ \forall t\in\mathcal{T},s\in\mathcal{S},i\in\mathcal{I},j\in \mathcal{J},k\in\mathcal{K}_{j},\] where \(\hat{a}_{ijk}^{t}\) is associated to the type of station \(k\) placed in \(j\); \(\bar{a}_{ij}^{t}\) is related to the features of location \(j\); and \(\epsilon_{ijk}^{ts}\) is the error associated to scenario \(s\). This definition is consistent with that given by Lamontagne et al. (2022), and is useful for the Real-Data case study. Furthermore, we have tested instances with different \(\mathbf{\lambda}\)-vectors to compare their impact on the computational performance of the different models considered and the solutions obtained. Although they can differ, in these experiments we have taken the same values of the \(\mathbf{\lambda}\)-vectors for all \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\) of each instance. Hence, \(\mathbf{\lambda}\) is a \(|J|\)-dimensional vector, and the different values considered are: **Type C:**: \(\mathbf{\lambda}=(1,0,\ldots,0)\). This type is the standard one in the MCLP literature, where the captured utility corresponds to the maximum utility of any open facility. **Type G:****:**: \(\mathbf{\lambda}=(1,\frac{1}{9},\frac{1}{27},0,\ldots,0)\). 
This is based on the assumption that the weights associated to the partial utilities decrease following a geometric rule, and it takes into account a maximum of three facilities to calculate the captured utility. **Type K:**: \(\mathbf{\lambda}=(1,1,0,\ldots,0)\). It models the captured utility as an aggregation of the partial utilities of the two facilities with the highest utility. **Type L:**: \(\mathbf{\lambda}=(1,\frac{1}{2},0,\ldots,0)\). For this type, the weights are assumed to decrease in a linear fashion, and only the two best facilities are considered by the customer. #### 6.1.1. Synthetic data As for the sizes of the instances of the set Syn-Data, we consider three time periods, \(|\mathcal{T}|=3\), and we test two sets of scenarios, \(|\mathcal{S}|\in\{5,10\}\). The coordinates of the customer classes are generated uniformly and randomly in the square \([0,1]\times[0,1]\) (these kinds of sets are frequently used in the location literature, see, e.g., ReVelle et al., 2008; Cordeau et al., 2019; Lin and Tian, 2021; Baldomero-Naranjo et al., 2022, among others). The sizes chosen for the set of customer classes are \(|\mathcal{I}|\in\{20,30,40,50\}\), and we generate five instances of each size, so 20 instances in total. In the case of the set of candidate locations of the facilities, we uniformly and randomly generate two sets with sizes \(|\mathcal{J}|\in\{10,30\}\), although we only run instances with \(|\mathcal{J}|\leq|\mathcal{I}|\). Finally, four types of facilities can be installed at each location, that is, \(|K_{j}|=4\ \forall j\in\mathcal{J}\). The weight of each class \(n_{i}^{t}\), \(\forall i\in\mathcal{I},t\in\mathcal{T}\), is given uniformly at random in the interval \([0,1]\). The opt-out utility is set to the same value for each customer class, time period and scenario, and three values are considered: \(u_{i0}^{ts}\in\{10,12,15\}\). The maximum budget is set at the same value for all periods, \(b^{t}\in\{5,10\}\). The costs of opening different facilities are equal for each \(t\in\mathcal{T}\) and \(j\in\mathcal{J}\), but vary with \(k\in\mathcal{K}_{j}\) following the function \(c_{jk}^{t}=k+3\). Finally, we define the three values that form the partial utilities, namely, \(\hat{a}_{ijk}^{t}\)**:**: This value is defined for each \(k\in\mathcal{K}_{j}\) as \(\frac{k}{2}\). \(\bar{a}_{ij}^{t}\)**:**: We calculate all the distances among customer classes \(i\in\mathcal{I}\) and candidate locations \(j\in\mathcal{J}\), and distribute them into four groups (determined by the quantiles Q1, Q2, and Q3 of the computed distances). If the distance between a customer class and a candidate location is strictly below Q1, the value of the parameter is set to 8; if it is in (Q1,Q2], then it is set to 4; if it is in (Q2,Q3], then it is fixed to 2; and for the distances strictly above Q3, the assigned value is 0. \(\epsilon_{ijk}^{ts}\)**:**: It is a random value that follows a normal distribution of mean zero and standard deviation one. Thus, a total of 1680 instances are tested in our computational experiments with the data set Syn-Data. Any interested reader can replicate these experiments by finding the value of all parameters in our online GitHub repository (Dominguez et al., 2023). #### 6.1.2. Real data In our case study, we utilize a data set provided by Lamontagne et al. (2022) based on the city of Trois-Rivieres, Quebec, which is divided into 317 zones. 
The authors consider the centroids of each zone as customer classes \(i\) with weights \(n_{i}^{t}\) equal to the 10% of the population of each zone for every time period. A network is generated with a node for each customer class and edges linking adjacent zones. In this case, the Euclidean distance between each centroid is set as the length of the edge. Additionally, a subset of 30 locations among the centroids is chosen as the set of candidate locations for the installation of the facilities. To create the instances, we consider subsets of their set of customer classes of three different sizes, namely \(|\mathcal{I}|\in\{100,200,317\}\), and we do likewise for the set of potential locations of facilities, \(|\mathcal{J}|\in\{10,20,30\}\). For the subsets of \(\mathcal{I}\), we take the classes with larger weights. As for the subsets of \(\mathcal{J}\), we select the first 10 and 20 of the total set. Finally, we consider their short and long span, \(|\mathcal{T}|\in\{4,10\}\), and we set the number of scenarios to \(|\mathcal{S}|=5\). Moreover, Lamontagne et al. (2022) define different types of charging stations according to the number of outlets they contain, i.e., the capacity of the station to charge vehicles simultaneously. This definition is consistent with the assumptions made in the paper, in the sense that the customer's utility increases as the number of outlets in a facility does. We maintain the maximum number of outlets per station proposed by the authors, \(|\mathcal{K}_{j}|=6,\forall j\in\mathcal{J}\). The authors define separate costs for opening a facility and for increasing its number of outlets. In our case, these costs are incorporated in the value of the parameter \(c\) as \(c_{jk}^{t}=100+50k\). The total budget for all \(t\in\mathcal{T}\) is fixed to \(b^{t}=400\). In addition to the value of the opt-out utility considered by the authors, \(u_{0i}^{ts}=4.5\), we also include instances with \(u_{0i}^{ts}=9\ \forall i\in\mathcal{I}\), \(t\in\mathcal{T}\), \(s\in\mathcal{S}\). Finally, to define the partial utilities we also follow the authors' design: \[\tilde{a}_{ijk}^{t} =0.281k,\] \[\bar{a}_{ij}^{t} =1.638-0.63d_{ij},\] \[\epsilon_{ijk}^{ts} =FT\xi^{s}+\zeta^{s},\] for all \(t\in\mathcal{T},s\in\mathcal{S},i\in\mathcal{I},j\in\mathcal{J},k\in\mathcal{ K}_{j}\), and \(d_{ij}\) representing the distance of the shortest path between \(i\) and \(j\). In the above, \(F\) is a factor loading matrix, \(T\) is a diagonal matrix, \(\xi^{s}\) is a vector of IID random terms from a normal distribution with location zero and a scale of one, and \(\zeta^{s}\) is a vector of IID random terms from a Gumbel distribution with location of zero and a scale of three. However, for our real-world case study, we adopt the approach proposed by the authors, where customers consider only the facilities located within a radius of ten kilometers, and they have no utility for those outside the radius, (i.e., \(u_{ij}^{ts}=0\) for any pair \((i,j)\) whose distance is superior to ten kilometers). For more information about the definition of these parameters, see Lamontagne et al. (2022). Following the same scheme as in this work, we solve 20 independent and different instances on this data set by modifying only the random vectors \(\xi\) and \(\zeta\). As a result, the set Real-Data has a total of 2880 instances. These instances are also available in our GitHub repository (Dominguez et al., 2023). 
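For concreteness, the following Python sketch generates one Syn-Data instance along the lines of Section 6.1.1; the dimensions, the random seed and the particular opt-out and budget values picked here are our own illustrative choices among the settings listed above.

```python
import numpy as np

# Sketch of the Syn-Data instance generator of Section 6.1.1 (shapes/seed are our choice).
rng = np.random.default_rng(0)
I, J, K, T, S = 20, 10, 4, 3, 5          # customer classes, locations, types, periods, scenarios

cust = rng.uniform(0, 1, size=(I, 2))    # customer-class coordinates in [0,1]^2
fac  = rng.uniform(0, 1, size=(J, 2))    # candidate-location coordinates in [0,1]^2
dist = np.linalg.norm(cust[:, None, :] - fac[None, :, :], axis=2)

# a_bar: 8 / 4 / 2 / 0 according to the quartiles Q1, Q2, Q3 of all distances
q1, q2, q3 = np.quantile(dist, [0.25, 0.5, 0.75])
a_bar = np.select([dist < q1, dist <= q2, dist <= q3], [8.0, 4.0, 2.0], default=0.0)

# a_hat: k/2 for facility type k = 1..K; eps: N(0,1) noise per scenario
a_hat = np.arange(1, K + 1) / 2.0
eps = rng.normal(0.0, 1.0, size=(T, S, I, J, K))

# Partial utilities a[t,s,i,j,k] = a_hat_k + a_bar_ij + eps, as in Section 6.1
a = a_hat[None, None, None, None, :] + a_bar[None, None, :, :, None] + eps

n = rng.uniform(0, 1, size=(T, I))       # class weights n_i^t
c = np.arange(1, K + 1) + 3              # opening costs c_jk^t = k + 3
u0, budget = 12.0, 5.0                   # one of the opt-out / budget settings tested
print(a.shape, n.shape, c)               # (3, 5, 20, 10, 4) (3, 20) [4 5 6 7]
```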
### Study on computational performance

This section is devoted to showing the computational performance of the solution approaches described throughout the paper on a synthetic data set (Syn-Data), and a time limit of 1 hour (3600 seconds) is established for each approach. First, Figure 2 provides two plots that effectively compare the performance of the proposed approaches in this paper. In both plots, the solid line represents the MILP model (SL); the dashed line, the MILP model with valid inequalities (VI); and the dotted line, the Benders' decomposition-based approach (B). Figure 2(a) depicts a performance profile of the percentage of instances solved to optimality within a computational time in seconds. A point in the figure with coordinates \((x,y)\) indicates that for \(y\%\) of the instances, the instance was solved in less than \(x\) seconds. It is noticeable that B outperforms the others, that is, the number of instances solved by B before the time limit (around 70%) is the highest out of the three models. On the other hand, SL solves around 25% of the instances (the smallest ones) in less computational time compared to the other two models, although it is only able to solve to optimality about 35% of the instances proposed. Similarly, Figure 2(b) shows a performance profile of the percentage of instances with respect to the MIP gap after one hour of computation time. Thus, a point in the graph with coordinates \((x,y)\) indicates that for \(y\%\) of instances, the MIP gap is less than \(x\%\). Clearly, the instances solved to optimality by a certain model have a MIP gap of 0%. Here we see the clear improvement of B with respect to the two models presented: it solves more instances to optimality and when the time limit is reached, the gap is much lower. This follows from the fact that, with a 20% gap, we have less than 50% of the instances for SL, while for VI it is 60% and more than 75% for B. Note that there are a few instances that end up with a gap greater than 100%, but we did not include these outliers in the performance profile. With these plots, we illustrate that the resolution approaches VI and B presented in this paper improve the initial formulation SL. Indeed, approach B consistently outperforms the others, solving more instances in less time and returning smaller MIP gaps when it is not capable of solving the instance. The difference in performance associated to the size of the instances can be seen when we break down the solutions depending on the parameter values. Along these lines, the median computational time in seconds of the instances that finished within the time limit (Time[s]), the number of solved instances, and the median of the captured customers by period (\(t\)) for the instances solved with B are collated in Table 2 for various values of \(\boldsymbol{\lambda}\) (types C, G, K and L) and different opt-out values (\(u_{0}\)). There are 140 instances summarized per row. In this table and the following ones, we have used the median of the data instead of the average to avoid the influence that the outliers may cause. For the case when \(\boldsymbol{\lambda}\) is set to C, which considers only the most useful facility, the SL model solves 100% of the instances with very low computational times.
However, for the rest of the \(\boldsymbol{\lambda}\)-vectors (where more than one facility is taken into account for the calculation of the captured utility), the SL model performs worse, giving lower percentages of solved instances and higher computational times compared to VI and B. In contrast, approach B shows a better computational performance in most cases, solving more instances and taking less time compared to VI. This suggests that approach B is more effective in handling the calculation of the captured utility when multiple facilities are involved. Thus, it can potentially offer improved computational performance when solving problems with general or varied \(\boldsymbol{\lambda}\)-vectors. Note that the adjustment of the values given to the opt-out utility and the \(\boldsymbol{\lambda}\)-vector has been made so that the percentage of customers captured ranges from 0% to almost 100%. This illustrates how a slight modification in these values may have a significant impact on the overall solutions reported in terms of captured customers. In turn, the selection of parameters affects the computational performance: extreme cases where either everyone or no one is captured are easier to solve than intermediate settings. The reason is that, in the extreme cases, the opt-out value is so low or so high that the customers' decision is very little influenced by the location of the stations. Figure 2. Performance profiles. Table 3 provides insight into the computational complexity of the problem for different number of scenarios (\(|\mathcal{S}|\)), instance sizes (\(|\mathcal{I}|\)), \(\boldsymbol{\lambda}\)-values, and the sizes of the set of candidate locations for the facilities (\(|\mathcal{J}|\)). For the sake of clarity, we include here only two data sets, \(|\mathcal{I}|\in\{30,50\}\). To see the results regarding the rest of the sizes, we refer the reader to Appendix B. This table shows the median computational time in seconds only for the instances that have been solved to optimality within one hour (Time[s]), the number of solved instances between parentheses (# Solved), and two GAP values for the unsolved instances: \(\mathtt{MIPGap}=\frac{|z_{bb}-z_{P}|}{z_{P}}\cdot 100\), and \(\mathtt{FGap}=\frac{|z_{bb}-z_{bP}|}{z_{bP}}\cdot 100\), where \(z_{bb}\) is the best upper bound, \(z_{P}\) is the incumbent value (i.e., the current best primal objective bound), and \(z_{bP}\) is the best incumbent offered by any of the three approaches. When none of the instances are solved within the time limit, we have written TL in the time column. There are 30 instances summarized per row. It is observed that approach SL struggles to solve instances with five scenarios, even with small-sized instances like those with 30 customer classes with a success rate of only 27.2% for the \(\boldsymbol{\lambda}\)-values G, K and L. On the contrary, approach B performs well even with an increased number of customer classes. For instance, with 50 customer classes, a 63.3% of the instances are solved for the same \(\boldsymbol{\lambda}\)-values G, K and L. For the simplest case \(\boldsymbol{\lambda}\)=C, it is observed that the inclusion of valid inequalities (approach VI) to the MILP model is not beneficial, since for \(|\mathcal{J}|=30\) and for both values of \(|\mathcal{I}|\), SL solves the 30 instances proposed while VI only solves 25. This suggests that the valid inequalities are of no use when a single station is taken into account in the computation of the captured utility. 
However, for the rest of the \(\boldsymbol{\lambda}\)-vectors VI clearly outperforms SL, (although B remains unbeaten). If we examine the results for ten scenarios, we observe that the slight increase in the number of scenarios already entails a deterioration in the computational performance of all the methods. This fact is particularly evident for specific \(\boldsymbol{\lambda}\)-values. For instance, approach B solves 22 (resp. 18) instances with five scenarios, \(\boldsymbol{\lambda}\) set to K, 30 candidate facilities and 30 (resp. 50) customer classes. When the number of scenarios is increased to ten, the number of solved instances goes down to 14 (resp. 12). The case of \(\boldsymbol{\lambda}\) set to G is more extreme, since none of the approaches is able to solve any instance for 30 facilities, even with 30 customer classes. \begin{table} \begin{tabular}{r r r r r r r r r r} \hline \hline \(u_{0}\) & \(\boldsymbol{\lambda}\) & \multicolumn{3}{c}{Time[s]} & \multicolumn{3}{c}{\# Solved} & \multicolumn{3}{c}{Cap.} & \multicolumn{1}{c}{c}{customer[\%]} \\ \cline{3-10} & & SL & VI & B & SL & VI & B & \(t=1\) & \(t=2\) & \(t=3\) \\ \hline 10 & C & 1.8 & 85.1 & 16.5 & 140 & 102 & 113 & 20.0 & 38.2 & 52.2 \\ & G & 1355.2 & 919.1 & 331.1 & 20 & 54 & 82 & 35.1 & 76.7 & 90.1 \\ & K & 1046.2 & 167.9 & 65.5 & 54 & 101 & 138 & 46.5 & 91.9 & 98.4 \\ & L & 1089.4 & 495.3 & 197.6 & 48 & 75 & 119 & 38.5 & 82.2 & 93.1 \\ 12 & C & 1.5 & 20.2 & 9.3 & 140 & 140 & 138 & 1.1 & 2.9 & 4.2 \\ & G & 2359.8 & 1723.2 & 623.4 & 3 & 20 & 45 & 14.3 & 46.4 & 67.0 \\ & K & 1272.6 & 260.9 & 177.6 & 32 & 88 & 118 & 33.7 & 81.8 & 93.8 \\ & L & 1036.2 & 1029.8 & 348.1 & 12 & 45 & 66 & 19.8 & 57.0 & 76.1 \\ 15 & C & 1.3 & 8.9 & 7.5 & 140 & 140 & 140 & 0.0 & 0.0 & 0.0 \\ & G & 1867.8 & 515.3 & 185.1 & 26 & 59 & 68 & 0.8 & 8.5 & 18.1 \\ & K & 1756.9 & 844.9 & 260.5 & 8 & 66 & 66 & 19.1 & 57.2 & 76.8 \\ & L & 671.3 & 281.3 & 225.3 & 48 & 59 & 74 & 5.7 & 19.0 & 31.8 \\ \hline \hline \end{tabular} \end{table} Table 2. Syn-Data. The total number of instances per row is 140. Time limit equal to 3600 seconds. Median computational time of solved instances in seconds (Time[s]), number of instances solved by the different models for each opt-out and \(\boldsymbol{\lambda}\)-value, and median number of captured customer classes (Cap. customer[\%]) per time period for each opt-out and \(\boldsymbol{\lambda}\)-value. In terms of computational times, in the majority of the combinations of parameters the median is under 1800s. Approach B provides the best solution times, with reductions of up to one order of magnitude with respect to SL or half the time compared to VI (for \(|\mathcal{I}|\)=50, \(|\mathcal{J}|\)=30 and \(\boldsymbol{\lambda}\)=G). For the instances that are not solved to optimality within the time limit, approach B achieves low MIPGap values, typically remaining below a 10.6% of gap for five scenarios and 17.0% for ten scenarios, based on the instances analyzed. This implies that approach B effectively solves larger instances and can achieve good optimality gaps indicative of near-optimal solutions. As for the instances with MIPGap higher than 100%, this is due to the incumbent solution provided at the time limit being very close to zero. 
This results in a high percentage gap, but it is interesting to observe that the upper bound is similar among the three models even \begin{table} \begin{tabular}{r r r r r r r r r r r r r} \hline \hline \(|\mathcal{S}|\) & \(|\mathcal{I}|\) & \(\boldsymbol{\lambda}\) & \(|\mathcal{J}|\) & \multicolumn{4}{c}{Time[s] (\#Solved)} & \multicolumn{4}{c}{MIPGap[\%]} & \multicolumn{4}{c}{F Gap[\%]} \\ \cline{4-13} & & & & SL & VI & B & SL & VI & B & SL & VI & B \\ \hline 5 & 30 & C & 10 & 0.4 (30) & 6.5 (30) & 2.5 (30) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ & & & 30 & 3.6 (30) & 33.9 (25) & 17.5 (30) & 0.0 & 22.0 & 0.0 & 0.0 & 22.0 & 0.0 \\ & & G & 10 & 1775.2 (11) & 652.2 (22) & 185.1 (29) & 35.6 & 13.3 & 1.2 & 35.5 & 13.2 & 1.2 \\ & & & 30 & TL (0) & TL (0) & 1151.0 (6) & 104.5 & 77.6 & 50.1 & 94.9 & 57.5 & 50.1 \\ & & K & 10 & 1125.2 (20) & 86.4 (30) & 22.9 (30) & 22.9 & 0.0 & 0.0 & 22.9 & 0.0 & 0.0 \\ & & & 30 & TL (0) & 1659.5 (18) & 160.6 (22) & 33.9 & 19.1 & 10.3 & 32.0 & 18.1 & 10.2 \\ & & L & 10 & 441.2 (18) & 351.0 (28) & 72.1 (30) & 12.0 & 8.2 & 0.0 & 12.0 & 8.2 & 0.0 \\ & & & 30 & TL (0) & TL (0) & 378.7 (12) & 74.4 & 44.0 & 25.3 & 69.2 & 43.1 & 25.2 \\ 50 & C & 10 & 0.9 (30) & 9.3 (30) & 4.0 (30) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ & & & 30 & 7.5 (30) & 99.0 (25) & 52.1 (27) & 0.0 & 36.2 & 2.9 & 0.0 & 36.2 & 2.9 \\ & & G & 10 & 3089.7 (4) & 1057.2 (16) & 577.7 (27) & 53.5 & 32.9 & 2.9 & 50.3 & 30.4 & 2.9 \\ & & & 30 & TL (0) & TL (0) & 1709.7 (3) & 123.0 & 345.4 & 50.4 & 94.1 & 115 & 50.4 \\ & & K & 10 & 1507.9 (7) & 267.8 (30) & 145.3 (30) & 8.4 & 0.0 & 0.0 & 8.4 & 0.0 & 0.0 \\ & & & 30 & TL (0) & 2634.0 (3) & 591.1 (18) & 39.0 & 17.9 & 11.1 & 37.0 & 17.9 & 11.1 \\ & & L & 10 & 1147.3 (11) & 638.9 (21) & 197.6 (29) & 23.9 & 19.7 & 4.0 & 23.9 & 19.7 & 4.0 \\ & & & 30 & TL (0) & TL (0) & 1713.3 (7) & 76.9 & 52.3 & 31.1 & 63.3 & 48.6 & 31.1 \\ 10 & 30 & C & 10 & 0.9 (30) & 11.7 (30) & 6.3 (30) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ & & & 30 & 6.7 (30) & 112.0 (25) & 49.2 (25) & 0.0 & 27.2 & 2.0 & 0.0 & 27.2 & 2.0 \\ & & G & 10 & 1890.4 (1) & 1210.5 (13) & 671.5 (20) & 56.3 & 27.3 & 14.8 & 51.1 & 27.3 & 14.8 \\ & & & 30 & TL (0) & TL (0) & TL (0) & 126.7 & 219.0 & 47.0 & 103.8 & 39.8 & 47.0 \\ & & K & 10 & 2539.9 (9) & 355.5 (30) & 174.6 (28) & 14.9 & 0.0 & 14.5 & 13.8 & 0.0 & 14.3 \\ & & & 30 & TL (0) & 3365.0 (2) & 475.2 (14) & 38.9 & 14.4 & 10.3 & 33.4 & 14.4 & 10.2 \\ & & L & 10 & 1548.3 (11) & 791.1 (18) & 665.3 (28) & 23.4 & 21.5 & 6.7 & 23.4 & 21.5 & 6.7 \\ & & & 30 & TL (0) & TL (0) & 1151.3 (5) & 86.1 & 50.7 & 31.8 & 74.9 & 34.5 & 31.8 \\ 50 & C & 10 & 1.6 (30) & 30.1 (30) & 14.7 (30) & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ & & & 30 & 13.6 (30) & 215.2 (20) & 73.0 (18) & 0.0 & 44.3 & 8.6 & 0.0 & 44.2 & 8.6 \\ & & G & 10 & TL (0) & 1597.1 (5) & 338.6 (10) & 56.6 & 33.2 & 20.3 & 54.4 & 31.8 & 20.3 \\ & & & 30 & TL (0) & TL (0) & TL (0) & 166.8 & 404.1 & 67.1 & 114.5 & 46.0 & 67.1 \\ & K & 10 & 3128.0 (1) & 656.3 (22) & 299.3 (21) & 18.5 & 12.7 & 6.8 & 18.5 & 11.7 & 6.8 \\ & & & 30 & TL (0) & TL (0) & 1184.4 (12) & 59.7 & 23.8 & 17.9 & 39.0 & 16.9 & 17.5 \\ & L & 10 & 2723.4 (3) & 1421.0 (13) & 1191.6 (17) & 31.3 & 22.6 & 7.7 & 29.4 & 21.7 & 7.7 \\ & & & 30 & TL (0) & TL (0) & 2659.9 (4) & 101.5 & 61.8 & 33.6 & 78.8 & 31.3 & 33.6 \\ \hline \hline \end{tabular} \end{table} Table 3. Syn-Data. The total number of instances per row is 30. Time limit equal to 3600 seconds. 
Median computational time of solved instances in seconds (Time[s]), number of solved instances (#Solved), median Gap between the incumbent solution and the best bound (MIPGap), and median Gap between the best solution found among the models and the model’s best bound (F Gap). in cases where the incumbent is close to zero. This is shown in the column Fgap, where we compare the bound of each model with respect to the best incumbent found. Thus, the models provide meaningful bounds even when the solution is not near-optimal within the time limit. These findings indicate that approach B is more suitable for solving larger instances with five or ten scenarios, even with an increased number of customer classes and candidate locations, and can achieve good solution quality with low optimality gaps. Therefore, approach B may be preferred for practical applications with larger instances, while approach SL may be suitable for smaller instances with fewer scenarios, or when the calculation of the captured utility only involves a single facility (i.e., when \(\mathbf{\lambda}=\mathsf{C}\)). ### Case study on the installation of charging stations for electric vehicles We conclude this section by presenting a case study for increasing electric vehicle adoption through the placement of new charging stations in the city of Trois-Rivieres, Quebec, using the instances employed by Lamontagne et al. (2022). In this section, in light of the results discussed in the previous section, we use the approach with the best performance for each \(\mathbf{\lambda}\)-vector, namely, approach SL for \(\mathsf{C}\), and approach B for the rest of them. In this case study, the time limit is set to 8 hours (28800 seconds). Table 4 summarizes the computational experience for larger instances, for different sizes of the set of customers (\(|\mathcal{I}|\)) and potential locations (\(|\mathcal{J}|\)), and for different values of the opt-out (\(u_{0}\)). Each row summarizes 80 instances, that is, 20 instances for each \(\mathbf{\lambda}\)-value. The table shows the median computational time of solved instances in seconds (Time[s]), the total number of instances solved (#Solved), and the median gap using the incumbent solution found and the best bound (MIPGap) when \(|\mathcal{T}|=4\). We refer the reader to Appendix C for a summary of our computational results for \(|\mathcal{T}|=10\). This table illustrates how the difficulty of the problem scales up when the size of the instance increases (compared to the sizes considered in Syn-Data). In fact, when only 20 instances out of 80 are solved, these 20 instances are the simplest ones, i.e., the ones with \(\mathbf{\lambda}=\mathsf{C}\). It is noteworthy that, for an opt-out of 4.5, we were able to solve many instances in a relatively shorter time period despite the larger problem sizes of this case study (compared to the previous one). This is due to the fact that customers consider only facilities within a 10 km radius. As a result, some customers consider only one facility, while others consider the entire set of facilities. 
This modeling assumption is aligned with the realistic setting where \begin{table} \begin{tabular}{r r r r r r} \hline \hline \(|\mathcal{J}|\) & \(|\mathcal{I}|\) & \multicolumn{2}{c}{\(u_{0}=4.5\)} & \multicolumn{2}{c}{\(u_{0}=9\)} \\ \cline{3-6} & & Time[s] (\# Solved) & MIPGap[\%] & Time[s] (\# Solved) & MIPGap[\%] \\ \hline 10 & 100 & 261.9 (80) & 0.0 & 3963.4 (65) & 1.5 \\ & 200 & 5076.2 (57) & 1.0 & 8.3 (20) & 4.9 \\ & 317 & 53.0 (35) & 1.5 & 14.0 (20) & 6.1 \\ 20 & 100 & 3752.6 (48) & 1.5 & 8.8 (20) & 9.5 \\ & 200 & 122.5 (20) & 6.4 & 35.3 (20) & 16.0 \\ & 317 & 455.3 (20) & 7.8 & 87.1 (20) & 17.8 \\ 30 & 100 & 30.5 (21) & 2.9 & 16.5 (20) & 13.9 \\ & 200 & 1260.6 (20) & 9.0 & 117.8 (20) & 20.8 \\ & 317 & 8716.9 (20) & 10.1 & 1205.2 (20) & 22.4 \\ \hline \hline \end{tabular} \end{table} Table 4. Real-Data Computational results with \(|\mathcal{T}|=4\). The total number of instances per row is 80. Time limit equal to 28800 seconds (8 hours). Median computational time of solved instances in seconds (Time[s]), number of solved instances (#Solved), and median gap using the incumbent solution and the best bound (MIPGap). customers prioritize facilities in the proximity to their location, and disregard those that are further away. Next we examine how different \(\mathbf{\lambda}\)-values influence the optimal location of charging stations. For this purpose, we compare the solutions obtained assuming the cooperative \(\mathbf{\lambda}\)-values, G, K and L, with those given by the standard C from the literature. In Table 5, we depict the regret of locating the stations assuming \(\mathbf{\lambda}\)=C when another \(\mathbf{\lambda}\)-value should have been assumed instead. This regret is calculated as a percentage deviation, \(\texttt{deviation}_{\mathbf{\lambda}}=\frac{f_{\lambda}(x)-f_{\lambda}(x_{\mathbf{ \lambda}}^{\mathsf{C}})}{f_{\lambda}(x)}\cdot 100\), where \(f_{\lambda}(x)\) represents the best objective value found when \(\mathbf{\lambda}\) is used, and \(f_{\lambda}(x_{\mathbf{\mathcal{C}}}^{\mathsf{C}})\) represents the objective value when \(\mathbf{\lambda}\) is used fixing the location decision to the one found by the C-model. The results of Table 5 are the average of these deviations and are given by instance size (\(|\mathcal{J}|\) and \(|\mathcal{I}|\)), by lambda value (\(\mathbf{\lambda}\in\{\texttt{G},\texttt{K},\texttt{L}\}\)) and by opt-out (\(u_{0}\)), only for \(|\mathcal{T}|=4\) (for the long span, see Appendix C). For these averages of regrets, we have included only the instances such that C is solved to optimality, and averaged all the non negative regrets (since the regret is in fact non negative if the solutions reported are optimal). We also want to point out that, despite the number of instances not solved for the cooperative \(\mathbf{\lambda}\)-vectors, only around a \(7.7\%\) of the regrets are negative, so the immense majority of the solutions reported represent an improvement over the standard case \(\mathbf{\lambda}=\texttt{C}\). This table confirms that assuming a cooperative approach in facility placement decisions has a significant impact on the resulting location decisions. If considering a cooperative setting for the captured utility did not affect the location of the stations, the deviations in the objective value would have been zero. However, as shown in Table 5, there are noticeable deviations when using cooperative models, such as a \(3\%\) deviation in one case, even in cases where optimality is not achieved. 
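For reference, the regrets behind Table 5 can be reproduced from the objective values as in the following short sketch; the numbers used here are made up, and, as described above, only the non-negative regrets are kept before averaging.

```python
# Sketch of the regret computation behind Table 5 (objective values are illustrative).
def deviation(f_lambda_opt, f_lambda_at_xC):
    """Percentage regret of using the C-optimal locations under another lambda-vector."""
    return (f_lambda_opt - f_lambda_at_xC) / f_lambda_opt * 100.0

# One (instance, lambda) pair per entry: best value found vs. value at the C-solution.
pairs = [(812.4, 806.1), (790.0, 791.2), (640.5, 633.9)]   # made-up objective values
regrets = [deviation(best, at_xC) for best, at_xC in pairs]

# Keep only the non-negative regrets before averaging, as in the reported tables.
non_negative = [r for r in regrets if r >= 0.0]
avg_regret = sum(non_negative) / len(non_negative)
print([round(r, 2) for r in regrets], round(avg_regret, 2))
```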
When the opt-out value is more _challenging_ and the customers' decision rule is cooperative, a planning that takes into account their cooperative behavior is more important. This is clear when comparing the regrets obtained for \(u_{0}=4.5\) and \(u_{0}=9\). Finally, we conclude by showing an example in Figure 3 of how the distribution of the stations changes for different \(\mathbf{\lambda}\)-values. In the figure, we present a specific instance that is solved to optimality, with the following parameters: \(|\mathcal{I}|=317,|\mathcal{J}|=10,|\mathcal{T}|=4,|\mathcal{S}|=5\), and \(u_{0}=4.5\). The figure illustrates the last period, where the black dots represent the \(317\) centroids of each subregion in Trois-Rivieres, Quebec. The red stars represent the open charging stations, while the yellow stars depict the stations that remained closed. The size variations among the stars indicate different types: larger stars represent charging stations with a greater number of outlets. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \(|\mathcal{J}|\) & \(|\mathcal{I}|\) & \multicolumn{3}{c}{\(u_{0}=4.5\)} & \multicolumn{3}{c}{\(u_{0}=9\)} \\ \cline{3-7} & & G & K & L & G & K & L \\ \hline 10 & 100 & 0.7 & 1.1 & 0.8 & 1.9 & 2.2 & 2.2 \\ & 200 & 0.5 & 0.6 & 0.6 & 1.2 & 1.3 & 1.3 \\ & 317 & 0.4 & 0.5 & 0.4 & 0.9 & 1.1 & 1.0 \\ 20 & 100 & 0.5 & 0.8 & 0.6 & 1.6 & 2.5 & 2.0 \\ & 200 & 0.4 & 0.7 & 0.5 & 1.0 & 1.6 & 1.4 \\ & 317 & 0.2 & 0.4 & 0.3 & 0.5 & 1.2 & 0.6 \\ 30 & 100 & 0.5 & 0.9 & 0.6 & 2.3 & 3.2 & 2.9 \\ & 200 & 0.5 & 0.8 & 0.6 & 0.8 & 1.6 & 1.2 \\ & 317 & 0.3 & 0.5 & 0.3 & 0.6 & 1.4 & 0.8 \\ \hline \hline \end{tabular} \end{table} Table 5. Real-Data with \(|\mathcal{T}|=4\). The total number of instances per row is 20. Instances solved with B. Time limit equal to 28800 seconds (8 hours). Average of the regret between C and other \(\mathbf{\lambda}\)-values. Upon closer inspection of the figure, when \(\boldsymbol{\lambda}=\mathtt{C}\), all the stations are open, with only one of them having a greater number of outlets. This proves that for an opt-out value of 4.5, distributing the facilities throughout the region leads to a better solution. For other \(\boldsymbol{\lambda}\)-values, multiple facilities are considered when calculating the captured utility. The case \(\mathtt{G}\) is similar to \(\mathtt{C}\) because the weights assigned by customers to their second and third options are very low, so the model tends to favor again solutions where the stations are distributed throughout the territory. The cases with \(\boldsymbol{\lambda}\) equal to \(\mathtt{K}\) and \(\mathtt{L}\), where two facilities are considered to compute the utility, result in similar solutions: they are the only two cases in which a station remains closed. However, the distinction lies in the fact that for \(\boldsymbol{\lambda}=\mathtt{K}\), one station with 4 outlets and two stations with 2 outlets are open. Conversely, when \(\boldsymbol{\lambda}=\mathtt{L}\), the solution consists in opening one station with 6 outlets while the rest only have 1 outlet. This difference occurs because for \(\boldsymbol{\lambda}\)=\(\mathtt{L}\) the customers attribute more importance to their first option than to their second one. As a result, the model favors a more concentrated distribution by opening more attractive stations, i.e., stations with more outlets. ## 7. 
Conclusions In this paper, we provide a general framework for Cooperative Maximum Capture problems by considering a generalized version of the Cooperative Maximum Captured Facility Location where the captured utility is given as an ordered median function of the partial utilities, i.e., a weighted sum of ordered partial utilities of open facilities. We formulate the problem as a multiperiod stochastic bilevel problem with an embedded linear assignment problem characterizing the ordered median function. As a first solution approach, we present a MILP reformulation of the bilevel problem that can be solved using general-purpose solvers like Gurobi or CPLEX. We obtain a tight and compact model by deriving several sets of valid inequalities and some preprocessing techniques for particular values of the vector of weights of Figure 3. Real-Data with \(|\mathcal{I}|=317,|\mathcal{J}|=4,|\mathcal{T}|=4,|\mathcal{S}|=5\), and \(u_{0}=4.5\). Example of the optimal placement of charging stations for electric vehicles in the city of Trois-Rivières, Québec, for different \(\boldsymbol{\lambda}\)-values. Red stars denote open stations, with the size of the star indicating the number of outlets, yellow stars represent closed stations, and black points indicate the client class for each region. the ordered median function. Our second solution method is based on the well-known Benders Decomposition for MILPs, where we project out the assignment problem for each customer, scenario and time period. For this setting, we are able to derive an ad-hoc algorithm to include the Benders cuts in an effective and numerically more accurate manner. We test and compare all the methods by means of an extensive battery of computational experiments, and we show the variability in the location solutions for different \(\mathbf{\lambda}\)-weights in the case study proposed by Lamontagne et al. (2022), which deals with the placement of charging stations for electric vehicles in the city of Trois-Rivieres, Quebec (Canada). In view of the results obtained for the largest and more challenging instances, our exact approaches can be complemented with the development of tailored algorithms and efficient methods such as heuristics that take into account the cooperative decision rule of the customers. Further research on the topic includes, among others, the consideration of capacity constraints in the facilities that depend, for instance, on the type of facility installed. From a modeling point of view, the problem would be seen as a cooperative location-allocation problem. From the bilevel setting, including capacity constraints is not a simple task, since it may lead to infeasible solutions if the capacity constraints are included in the first level. Another possible extension of the Cooperative Maximum Captured Facility Location consists in robustifying the ordered median function problem associated to the captured utility. Indeed, since not only the partial utilities of the customers are uncertain, but the actual \(\mathbf{\lambda}\)-weights as well, we can consider this problem with variable \(\mathbf{\lambda}\)-weights that meet certain conditions associated to the knowledge of the customer, and optimize the captured utility in the worst-case. Considering non-monotone or negative \(\mathbf{\lambda}\)-weights are other extensions of the problem that may have applications in settings where the captured utility meets different requirements. 
## Acknowledgements This work was supported in part by the European Research Council (ERC) under the EU Horizon 2020 research and innovation program (grant agreement No. 755705), in part by the Spanish Ministry of Science and Innovation (AEI/10.13039/501100011033) through project PID2020-115460GB-I00, and AEI grant number RED2022-134149-T (Thematic Network: Location Science and Related Problems). C. Dominguez and R. Gazquez are also financially supported through the Research Program for Young Talented Researchers of the University of Malaga under Project B1-2022_37. Finally, the authors gratefully acknowledge the computer resources, technical expertise, and assistance provided by the SCBI (Supercomputing and Bioinformatics) center of the University of Malaga.
2302.03974
Hypersimple Rings and Modules
In this paper a simple right R-module S over a ring R is called hypersimple if its injective hull E(S) is cyclic, and a ring R is called right hypersimple if every simple right R-module is hypersimple. We initiate a study of these new notions, and revisit Osofsky's work on hypercyclic rings, i.e. rings whose cyclic right modules have cyclic injective hulls.
Christian Lomp, Mohamed Yousif, Yiqiang Zhou
2023-02-08T10:11:10Z
http://arxiv.org/abs/2302.03974v1
# Hypersimple rings and modules ###### Abstract. In this paper a simple right \(R\)-module \(S\) over a ring \(R\) is called hypersimple if its injective hull \(E(S)\) is cyclic, and a ring \(R\) is called right hypersimple if every simple right \(R\)-module is hypersimple. We initiate a study of these new notions, and revisit Osofsky's work on hypercyclic rings, i.e. rings whose cyclic right modules have cyclic injective hulls. Key words and phrases:Hopfian, co-Hopfian, Dedekind-finite Rings and Modules, Self-injective Rings 2010 Mathematics Subject Classification: Primary 16D40, 16D50, 16D60; Secondary 16L30, 16L60, 16P20, 16P40, 16P60 ## 1. Introduction The Prufer groups are the injective hulls of the simple Abelian groups and they are Artinian, but not Noetherian. More generally any injective hull of a simple module over a commutative Noetherian ring is Artinian as it was shown by Matlis in his seminal work [24]. Other finiteness conditions on the injective hull of modules have been considered for example by Rosenberg and Zelinsky in [33] and Faith in [12]. Faith and Walker for example proved in [13, Theorem 5.5] that a ring \(R\) is quasi-Frobenius if and only if any injective right \(R\)-module is a direct sum of cyclic modules which are isomorphic to principal indecomposable right ideals of \(R\). Furthermore, by Zorn's lemma one can easily prove that every injective module is the injective hull of a direct sum of cyclic modules. In [31], Osofsky studied the rings whose cyclic modules are injective and showed that such rings are precisely the semisimple Artinian ones. Inspired by this result, Caldwell in [4] studied a class of rings, called (right) hypercyclic rings, whose cyclic right modules have cyclic injective hulls. He proved that a left perfect, right hypercyclic ring is Artinian and uniserial. Hypercyclic rings were thoroughly investigated by Caldwell for commutative rings in [4], and by Osofsky for noncommutative rings in [30]. However, the only example of a hypercyclic ring that is not semisimple Artinian was provided by Caldwell in [4], and such a ring is commutative and self-injective. Furthermore, Caldwell has asked in his thesis [5, Page 53] whether every hypercyclic ring is self-injective, and conjectured yes as an answer to his question. The conjecture still remains open, among several other questions on the subject. For example it is not known if the notion of hypercyclic rings is left-right symmetric. Moreover, Osofsky asked in [30] if the (Jacobson) radical of local hypercyclic rings is nil. Motivated by Caldwell's conjecture, it was shown in [18] that if \(R\) is a ring such that \(E(R_{R})\) is cyclic and Dedekind-finite, then \(R\) is right self-injective. In
2306.00752
Robust covariance estimation with missing values and cell-wise contamination
Large datasets are often affected by cell-wise outliers in the form of missing or erroneous data. However, discarding any samples containing outliers may result in a dataset that is too small to accurately estimate the covariance matrix. Moreover, the robust procedures designed to address this problem require the invertibility of the covariance operator and thus are not effective on high-dimensional data. In this paper, we propose an unbiased estimator for the covariance in the presence of missing values that does not require any imputation step and still achieves near minimax statistical accuracy with the operator norm. We also advocate for its use in combination with cell-wise outlier detection methods to tackle cell-wise contamination in a high-dimensional and low-rank setting, where state-of-the-art methods may suffer from numerical instability and long computation times. To complement our theoretical findings, we conducted an experimental study which demonstrates the superiority of our approach over the state of the art both in low and high dimension settings.
Karim Lounici, Grégoire Pacreau
2023-06-01T14:49:20Z
http://arxiv.org/abs/2306.00752v3
# Robust covariance estimation with missing values and cell-wise contamination ###### Abstract Large datasets are often affected by cell-wise outliers in the form of missing or erroneous data. However, discarding any samples containing outliers may result in a dataset that is too small to accurately estimate the covariance matrix. Moreover, most robust procedures designed to address this problem are not effective on high-dimensional data as they rely crucially on invertibility of the covariance operator. In this paper, we propose an unbiased estimator for the covariance in the presence of missing values that does not require any imputation step and still achieves minimax statistical accuracy with the operator norm. We also advocate for its use in combination with cell-wise outlier detection methods to tackle cell-wise contamination in a high-dimensional and low-rank setting, where state-of-the-art methods may suffer from numerical instability and long computation times. To complement our theoretical findings, we conducted an experimental study which demonstrates the superiority of our approach over the state of the art both in low and high dimension settings. ## 1 Introduction Outliers are a common occurrence in datasets, and they can significantly affect the accuracy of data analysis. While research on outlier detection and treatment has been ongoing since the 1960s, much of it has focused on cases where entire samples are outliers, as demonstrated by Huber's work [8; 30; 10]. While sample-wise contamination is a common issue in many datasets, modern data analysis often involves combining data from multiple sources. For example, data may be collected from an array of sensors, each with an independent probability of failure, or financial data may come from multiple companies, where reporting errors from one source do not necessarily impact the validity of the information from the other sources. Discarding an entire sample as an outlier when only a few features are contaminated can result in the loss of valuable information, especially in high-dimensional datasets where samples are already scarce. It is important to identify and address the specific contaminated features, rather than simply treating the entire sample as an outlier. In fact, if each dimension of a sample has a contamination probability of \(\epsilon\), then the probability of that sample containing at least one outlier is given by \(1-(1-\epsilon)^{p}\), where \(p\) is the dimensionality of the sample. In high dimension, this probability can quickly exceed \(50\%\), surpassing the breakdown point of many robust estimators designed for the Huber sample-wise contamination setting. Hence, it is crucial to develop robust methods that can handle cell-wise contaminations and still provide accurate results. The issue of cell-wise contamination, where individual cells in a dataset may be contaminated, was first introduced in [2]. However, the issue of missing data due to outliers was studied much earlier, dating back to the work of [24]. Although missing values in a dataset are much easier to detect than outliers, they can still have a significant impact on the accuracy of statistical analysis and supervised learning tasks. Specifically, missing data can lead to errors in estimating the location and scale of the underlying distribution [16] and can negatively affect the performance of supervised learning algorithms [12]. 
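As noted in the introduction, with a per-cell contamination probability \(\epsilon\) the chance that a \(p\)-dimensional sample contains at least one outlier is \(1-(1-\epsilon)^{p}\). The following minimal sketch makes this growth explicit; the values of \(\epsilon\) and \(p\) used here are purely illustrative.

```python
def prob_row_contaminated(eps: float, p: int) -> float:
    """Probability that a p-dimensional sample contains at least one contaminated cell,
    when each cell is independently contaminated with probability eps: 1 - (1 - eps)**p."""
    return 1.0 - (1.0 - eps) ** p


# Even a 1% cell-wise rate contaminates most rows once the dimension grows.
for p in (10, 100, 500):
    print(p, round(prob_row_contaminated(0.01, p), 3))
# p=100 gives ~0.634 and p=500 gives ~0.993, well beyond the breakdown point
# of many robust estimators designed for row-wise (Huber) contamination.
```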
Several robust estimation methods have been proposed to handle missing data, including Expectation Maximization (EM)-based algorithms [5], maximum likelihood estimation [11] and Multiple Imputation [16], among which we can find k-nearest neighbor imputation [28] and iterative imputation [31]. For the covariance matrix estimation in the Missing Completely At Random (MCAR) framework of [24], [17] provides suboptimal theoretical guarantees and a debiasing scheme for the estimation of the covariance. In comparison to data missingness or its sample-wise counterpart, the cell-wise contamination problem is less studied. The Detection Imputation (DI) algorithm of [21] is an EM type procedure combining a robust covariance estimation method with an outlier detection method to iteratively update the covariance estimation. Other methods include adapting methodology created for Huber contamination for the cell-wise problem, such as in [4] or [1]. In high dimensional statistics, however, most of these methods fail due to high computation time and numerical instability. They are simply not designed to work in this regime since they are based on the Mahalanobis distance, which requires an inversion of the estimated covariance matrix. This is a major issue since classical covariance matrix estimators have many eigenvalues close to zero or even exactly equal to zero in high-dimension. Furthermore, to the best of our knowledge, no theoretical result exists concerning the statistical accuracy of these methods in the cell-wise contamination setting contrarily to the extensive literature on Huber's contamination. **Contributions.** In this paper, we address the problem of high-dimensional covariance estimation in the presence of missing observations and cell-wise contamination. To formalize this problem, we adopt and generalize the setting introduced in [7]. We propose a computationally efficient and numerically stable procedure that avoids matrix inversion, making it well-suited for high-dimensional data. We derive non-asymptotic estimation bounds of the covariance with the operator norm and matching minimax lower bounds (up to log), which clarify the impact of the missing value rate and outlier contamination rate. Our theoretical results also provide a significant improvement over [17] in the MCAR and no contamination setting. Next, we conduct an experimental study on synthetic data, comparing our proposed method to the state-of-the-art (SOTA) methods. Our results demonstrate that SOTA methods fail in the high-dimensional regime due to matrix inversions, while our proposed method performs well in this regime, highlighting its effectiveness. Then we demonstrate the practical Figure 1: Left: Estimation error of the covariance matrix for \(n=100\), \(p=50\), \(\mathbf{r}(\Sigma)=2\) under a Dirac contamination (tailMV and DDCMV are our methods). Here \(\epsilon=1\) and \(\delta\) varies in \((0,1)\). Right: For each method, mean computation time (in seconds) over 20 repetitions and whether it uses matrix inversion. For \(p=100\), we had to raise \(r\left(\Sigma\right)\) to \(10\) otherwise both DI and TSGS would fail due to numerical instability. utility of our approach by applying it to real-life datasets, which highlights that the use of existing estimation methods significantly alters the spectral properties of the estimated covariance matrices. 
This implies that cell-wise contamination can significantly impact the results of dimension reduction techniques like principal component analysis (PCA) by completely altering the computed principal directions. Our experiments demonstrate that our method is more robust to cell-wise contamination than SOTA methods and produces reliable estimates of the covariance. ## 2 Missing values and cell-wise contamination setting Let \(X_{1},\ldots,X_{n}\) be \(n\) i.i.d. copies of a zero-mean random vector \(X\) admitting unknown covariance operator \(\Sigma=\mathbb{E}\left[X\otimes X\right]\), where \(\otimes\) is the outer product. Denote by \(X_{i}^{(j)}\) the \(j\)th component of vector \(X_{i}\) for any \(j\in[p]\). All our results are non-asymptotic and cover all configurations of \(n,p\) including the high-dimensional setting \(p\gg n\). In this paper, we consider the following two realistic scenarios where the measurements are potentially corrupted. Missing values. We assume that each component \(X_{i}^{(j)}\) is observed independently from the others with probability \(\delta\in(0,1]\). Formally, we observe the random vector \(Y\in\mathbb{R}^{p}\) defined as follows: \[Y_{i}^{(j)}=d_{i,j}X_{i}^{(j)},\quad 1\leq i\leq n,\ 1\leq j\leq p \tag{1}\] where the \(d_{i,j}\) are independent realisations of a Bernoulli random variable of parameter \(\delta\). This corresponds to the Missing Completely at Random (MCAR) setting of [24]. Cell-wise contamination. Here we assume that some missing components \(X_{i}^{(j)}\) can be replaced with probability \(\varepsilon\) by some independent noise variables, representing either a poisoning of the data or random mistakes in measurements. The observation vector \(Y\) then satisfies: \[Y_{i}^{(j)}=d_{i,j}X_{i}^{(j)}+(1-d_{i,j})e_{i,j}\xi_{i}^{(j)},\quad 1\leq i\leq n,\ 1\leq j\leq p \tag{2}\] where \(\xi_{1},\ldots,\xi_{n}\) are i.i.d. erroneous measurements and \(e_{i,j}\) are i.i.d. Bernoulli random variables with parameter \(\varepsilon\). We also assume that all the variables \(X_{i}\), \(\xi_{i}\), \(d_{i,j}\), \(e_{i,j}\) are mutually independent. In this scenario, a component \(X_{i}^{(j)}\) is either perfectly observed with probability \(\delta\), replaced by random noise with probability \(\varepsilon^{\prime}=\varepsilon(1-\delta)\), or missing with probability \((1-\delta)(1-\varepsilon)\). Cell-wise contamination as introduced in [2] corresponds to the case where \(\varepsilon=1\), and thus \(\varepsilon^{\prime}=1-\delta\). If we consider the mean estimation problem, then the cell-wise contamination problem is indistinguishable from the classical Huber contamination problem, since estimation of a mean vector is equivalent to the estimation of each marginal mean independently from the others. Since cell-wise contamination is equivalent to contamination of each marginal independently from the other marginals following Huber's paradigm, we can use for instance Tukey's median as a robust estimator of the mean. However, as argued in [2], the situation is quite different for covariance estimation. Our proposal is based on a correction of the classical covariance estimator on \(Y_{1},\ldots,Y_{n}\), first introduced in [17] for the missing values scenario. The procedure is based on the following observation, with \(\Sigma^{Y}\) the covariance of the data with missing values and \(\Sigma\) the true covariance: \[\Sigma=\left(\delta^{-1}-\delta^{-2}\right)\text{diag}(\Sigma^{Y})+\delta^{-2} \Sigma^{Y} \tag{3}\] Note that this formula assumes the knowledge of \(\delta\). 
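A minimal numpy sketch of the plug-in version of this relation is given below: the empirical covariance of the observations (with missing entries set to zero) is debiased using a known \(\delta\); how \(\delta\) is obtained in practice is discussed next. This is only an illustration of the formula above, not the authors' code, and the synthetic data at the end are purely for demonstration.

```python
import numpy as np


def debiased_covariance_mcar(Y: np.ndarray, delta: float) -> np.ndarray:
    """Plug-in version of relation (3): given observations Y (n x p) with missing
    entries set to 0 and observation probability delta, debias the naive covariance:
    Sigma_hat = (1/delta - 1/delta**2) * diag(Sigma_Y) + (1/delta**2) * Sigma_Y."""
    n = Y.shape[0]
    sigma_y = Y.T @ Y / n  # empirical covariance of the masked (zero-filled) data
    correction = (1.0 / delta - 1.0 / delta**2) * np.diag(np.diag(sigma_y))
    return correction + sigma_y / delta**2


# Small illustration with synthetic MCAR data (values are illustrative only).
rng = np.random.default_rng(0)
n, p, delta = 500, 20, 0.7
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
mask = rng.random((n, p)) < delta          # d_{i,j} ~ Bernoulli(delta)
Y = X * mask
print(np.linalg.norm(debiased_covariance_mcar(Y, delta) - np.eye(p), 2))  # operator-norm error
```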
In the missing values scenario, \(\delta\) can be efficiently estimated by a simple count of the values exactly set to \(0\) or equal to NaN (not a number). However, in presence of contamination as in (2), one does not know the exact location and number of outliers. In our experiments, we will estimate \(\delta\) by the proportion of data remaining after a filtering procedure. Notations.We denote by \(\odot\) the Hadamard (or term by term) product of two matrices and by \(\otimes\) the outer product of vectors, i.e. \(\forall x,y\in\mathbb{R}^{d},x\otimes y=xy^{\top}\). We denote by \(\left\|.\right\|\) and \(\left\|.\right\|_{F}\) the operator and Frobenius norms of a matrix respectively. The operator norm is defined as \(\left\|A\right\|=\sup\{\left\|Au\right\|_{2},\left\|u\right\|_{2}\leq 1\}\) with \(\left\|.\right\|_{2}\) being the vector \(L_{2}\) norm. Optimal estimation of covariance matrices with missing values We consider the scenario outlined in (1) where the matrix \(\Sigma\) is of approximately low rank. To quantify this, we use the concept of effective rank, which provides a useful measure of the inherent complexity of a matrix. Specifically, the effective rank of \(\Sigma\) is defined as follows \[\mathbf{r}(\Sigma):=\frac{\mathbb{E}\left\|X\right\|_{2}^{2}}{\left\|\Sigma\right\| }=\frac{\text{tr}\left(\Sigma\right)}{\left\|\Sigma\right\|} \tag{4}\] We note that \(0\leq\mathbf{r}(\Sigma)\leq\text{rank}(\Sigma).\) Furthermore, for approximately low rank matrices with rapidly decaying eigenvalues, we have \(\mathbf{r}(\Sigma)\ll\text{rank}(\Sigma)\). This section presents a novel analysis of the estimator defined in equation (3), which yields a non-asymptotic minimax optimal estimation bound in the operator norm. Our findings represent a substantial enhancement over the suboptimal guarantees reported in [17]. Non-asymptotic upper-bound in the operator norm.We provide an upper bound of the estimation error in operator norm. We write \(Y_{i}=d_{i}\odot X_{i}\). Let \(\widehat{\Sigma}^{Y}=\sum_{i=1}^{n}Y_{i}\otimes Y_{i}/n\) be the classical covariance estimator of the covariance of \(Y\). When the dataset contains missing values and corruptions, \(\widehat{\Sigma}^{Y}\) is a biased estimator of \(\Sigma\). Exploiting (3), [17] proposed the following unbiased estimator of the covariance matrix \(\Sigma\): \[\widehat{\Sigma}=\delta^{-2}\widehat{\Sigma}^{Y}+(\delta^{-1}-\delta^{-2}) \text{diag}\left(\widehat{\Sigma}^{Y}\right). \tag{5}\] **Theorem 1**.: _Let \(X_{1},\ldots,X_{n}\) be i.i.d. subgaussian random variables in \(\mathbb{R}^{p}\), with covariance matrix \(\Sigma\), and let \(d_{ij},i\in[1,n],j\in[1,p]\) be i.i.d bernoulli random variables with probability of success \(\delta>0\). 
Then there exists an absolute constant \(C\) such that, for \(t>0\), with probability at least \(1-e^{-t}\):_ \[\left\|\widehat{\Sigma}-\Sigma\right\|\leq C\frac{\left\|\Sigma\right\|}{ \delta}\left(\sqrt{\frac{\mathbf{r}(\Sigma)}{n}}\vee\frac{\mathbf{r}(\Sigma)}{n}\lor \sqrt{\frac{t}{n}}\vee\frac{t}{n}\right) \tag{6}\] This bound improves upon [17, Proposition 3] which proved with probability at least \(1-e^{-t}\): \[\left\|\widehat{\Sigma}-\Sigma\right\|\leq C\left\|\Sigma\right\|\left(\sqrt{ \frac{\mathbf{r}(\Sigma)(t+\log(2p))}{\delta^{2}n}}\vee\frac{\mathbf{r}(\Sigma)(t+ \log(2p))}{\delta^{2}n}(\delta+t+\log n)\right)\] Contrarily to the previous display, the bound in (6) admits an improved dependence on the parameter \(\delta\) as we replaced \(\delta^{2}\) by \(\delta\) in the denominator. Actually, this bound is sharp minimax optimal as we will prove it in Theorem 2. The complete proof argument for Theorem 1 is provided in appendix E.2. It relies on a recent generic chaining result for quadratic processes. Comparatively, the bound in [17, Proposition 3] was based on non-commutative Bernstein inequality and is never minimax-optimal in any settings of \(\delta,n,p,\mathbf{r}(\Sigma)\). Sketch of proof.: We note that the Schur-Horn theorem gives: \(\left\|\text{diag}\left(\widehat{\Sigma}^{Y}-\Sigma^{Y}\right)\right\|\leq \left\|\widehat{\Sigma}^{Y}-\Sigma^{Y}\right\|\) Which in turn leads to \(\left\|\widehat{\Sigma}-\Sigma\right\|\leq 2\delta^{-2}\left\|\widehat{ \Sigma}^{Y}-\mathbb{E}[\widehat{\Sigma}^{Y}]\right\|.\) We bound \(\left\|\widehat{\Sigma}^{Y}-\mathbb{E}[\widehat{\Sigma}^{Y}]\right\|\) using a generic chaining argument. Minimax lower-bound.We now provide a minimax lower bound for the covariance estimation with missing values problem. Let \(\mathcal{S}_{p}\) the set of \(p\times p\) symmetric semi-positive matrices. Then, define \(\mathcal{C}_{\overline{r}}=\{S\in\mathcal{S}_{p}:\mathbf{r}(S)\leq\overline{r}\}\) the set of matrices of \(\mathcal{S}_{p}\) with effective rank at most \(\overline{r}\). **Theorem 2**.: _Let \(p,n,\overline{r}\) be strictly positive integers such that \(p\geq\max\{n,2\overline{r}\}\). Let \(X_{1},\ldots,X_{n}\) be i.i.d. random vectors in \(\mathbb{R}^{p}\) with covariance matrix \(\Sigma\in\mathcal{C}_{\overline{r}}\). Let \((d_{i,j})_{1\leq i\leq n,1\leq j\leq p}\) be an i.i.d. sequence of Bernoulli random variables with probability of success \(\delta\in(0,1]\), independent from the \(X_{1},\ldots,X_{n}\). We observe \(n\) i.i.d. vectors \(Y_{1},\ldots,Y_{n}\in\mathbb{R}^{p}\) such that \(Y_{i}^{(j)}=d_{i,j}X_{i}^{(j)}\), \(i\in[n]\), \(j\in[p]\). Then there exists two absolute constants \(C>0\) and \(\beta\in(0,1)\) such that:_ \[\inf_{\Sigma}\max_{\Sigma\in\mathcal{C}_{\overline{r}}}\mathbb{P}_{\Sigma} \left(\left\|\hat{\Sigma}-\Sigma\right\|\geq C\frac{\left\|\Sigma\right\|}{ \delta}\sqrt{\frac{\mathbf{r}(\Sigma)}{n}}\right)\geq\beta \tag{7}\] _where \(\inf_{\Sigma}\) represents the infimum over all estimators \(\hat{\Sigma}\) of matrix \(\Sigma\) based on \(Y_{1},\ldots,Y_{n}\)._ This lower bound improves upon [17, Theorem 2] as it relaxes the hypotheses on \(n\) and \(\overline{r}\). More specifically, the lower bound in [17] requires \(n\geq 2\overline{r}^{2}/\delta^{2}\) while we only need the mild assumption \(p\geq\max\{n,2\overline{r}\}\). 
Furthermore, the above lower bound matches the upper bound of Theorem 1 in the high-dimensional regime \(p\geq\max\{n,2\overline{r}\}\) and \(n\geq\mathbf{r}(\Sigma)\), hence clarifying the impact of missing data on the estimation rate via the parameter \(\delta\). Our proof argument leverages the properties of the Grassmann manifold, which has been previously utilized in different settings such as sparse PCA without missing values or contamination [33] and low-rank covariance estimation without missing values or contamination [14]. However, tackling missing values in the Grassmann approach is the main technical challenge as it modifies the distribution of observations and requires several additional nontrivial arguments to control the distribution divergences, which is a crucial step in deriving the minimax lower bound for our problem. Sketch of proof.: We first build a sufficiently large test set of hard-to-learn covariance operators exploiting entropy properties of the Grassmann manifold such that the distance between any two distinct covariance operator is at least of the order \(\frac{\|\Sigma\|}{\delta}\sqrt{\frac{\mathbf{r}(\Sigma)}{n}}\). Next, in order to control the Kullback-Leibler divergence of the observations with missing values, we exploit in particular interlacing properties of eigenvalues of the perturbed covariance operators [26]. Heterogeneous missingness.In the MCAR scenario, we assume now that each feature has a different missing value rate. We denote by \(\delta_{j}\in(0,1]\) the probability to observe feature \(X^{(j)}\), \(1\leq j\leq p\). We define next \(\bar{\delta}=\max_{j}\{\delta_{j}\}\) and \(\delta=\min_{j}\{\delta_{j}\}\) the largest and smallest probabilities to observe a feature. By a straightforward modification of \(\widehat{\Sigma}\) and the proof of Theorem 1, under the same assumptions on \(X\), we get, for any \(t>0\), with probability at least \(1-e^{-t}\) \[\left\|\widehat{\Sigma}-\Sigma\right\|\leq C\frac{\bar{\delta}\left\|\Sigma \right\|}{\delta^{2}}\left(\sqrt{\frac{\mathbf{r}(\Sigma)}{n}}\vee\frac{ \mathbf{r}(\Sigma)}{n}\vee\sqrt{\frac{t}{n}}\vee\frac{t}{n}\right). \tag{8}\] Similarly we also obtain the following lower bound. For \(\delta\in[1/2,1]\), let \(p,n,\overline{r}\) be strictly positive integers such that \(n\geq 2\overline{r}/\delta^{2}\) and \(p\geq 2\overline{r}\). Then we have \[\inf_{\Sigma}\max_{\Sigma\in\mathcal{C}\boldsymbol{\tau}}\mathbb{P}_{\Sigma} \left(\left\|\hat{\Sigma}-\Sigma\right\|\geq C\left\|\Sigma\right\|\sqrt{ \frac{\mathbf{r}(\Sigma)}{\bar{\delta}^{2}n}}\right)\geq\beta. \tag{9}\] If \(\bar{\delta}\asymp\delta\), then the rates in (8) and (9) are matching and the minimax optimality result remains valid. ## 4 Optimal estimation of covariance matrices with cell-wise contamination We consider the contamination scenario described in (2). We further assume that the \(\xi_{1},\ldots\xi_{n}\) are subgaussian r.v. and that \(\Lambda:=\mathbb{E}[\xi_{1}\otimes\xi_{1}]\) is diagonal. In the presence of cell-wise contaminations, the operator \(\Sigma^{Y}=\mathbb{E}\left(Y\otimes Y\right)\) satisfies \[\Sigma^{Y}=\delta^{2}\Sigma+(\delta-\delta^{2})\mathrm{diag}(\Sigma)+ \varepsilon(1-\delta)\Lambda.\] Note that the additional term \(\varepsilon(1-\delta)\Lambda\) in the cell-wise contamination setting becomes negligible when \(\delta\approx 1\) or \(\varepsilon\approx 0\). 
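Inverting the displayed relation for \(\Sigma^{Y}\) yields a debiased estimator for the cell-wise setting. The sketch below is only an illustration of that inversion applied to the empirical covariance of the observations; it assumes \(\delta\), \(\varepsilon\) and the diagonal operator \(\Lambda\) are known, which the next paragraph addresses.

```python
import numpy as np


def debiased_covariance_cellwise(Y: np.ndarray, delta: float, eps: float,
                                 Lambda: np.ndarray) -> np.ndarray:
    """Invert Sigma_Y = delta^2*Sigma + (delta - delta^2)*diag(Sigma) + eps*(1-delta)*Lambda,
    where Lambda is the (diagonal) covariance of the noise cells. The parameters
    delta, eps and Lambda are assumed known here; in practice they are estimated
    after a filtering step."""
    n = Y.shape[0]
    sigma_y = Y.T @ Y / n  # empirical covariance of the contaminated observations
    return ((1.0 / delta - 1.0 / delta**2) * np.diag(np.diag(sigma_y))
            + sigma_y / delta**2
            - (eps * (1.0 - delta) / delta) * Lambda)


# Example call with isotropic noise cells (illustrative values only):
# Lambda = noise_var * np.eye(p)
# Sigma_hat = debiased_covariance_cellwise(Y, delta=0.8, eps=0.5, Lambda=Lambda)
```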
Using the DDC detection procedure of [21], we can detect the contaminations and accurately estimate both \(\delta\), \(\epsilon\) and the diagonal operator \(\Lambda\). We will not develop this aspect further and simply assume that these are known in the following result. Hence we propose the following unbiased estimator of \(\Sigma\). Let \(\widehat{\Sigma}^{Y}=n^{-1}\sum_{i=1}^{n}Y_{i}\otimes Y_{i}\) and \[\widehat{\Sigma}=(\delta^{-1}-\delta^{-2})\text{diag}\left(\widehat{\Sigma}^ {Y}\right)+\delta^{-2}\widehat{\Sigma}^{Y}-\frac{\varepsilon(1-\delta)}{ \delta}\Lambda. \tag{10}\] Non-asymptotic upper-bound in the operator norm.We prove the following result. **Theorem 3**.: _Let the assumptions of Theorem 1 be satisfied. We assume in addition that the observations \(Y_{1},\ldots,Y_{n}\) satisfy (2) with \(\varepsilon\in[0,1)\) and \(\delta\in(0,1]\). Then, for any \(t>0\), with probability at least \(1-e^{-t}\):_ \[\begin{split}\left\|\widehat{\Sigma}-\Sigma\right\|\lesssim& \frac{\left\|\Sigma\right\|}{\delta}\left(\sqrt{\frac{\mathbf{r}(\Sigma)}{n}}\lor \frac{\mathbf{r}(\Sigma)}{n}\lor\sqrt{\frac{t}{n}}\lor\frac{t}{n}\right)\\ &+\frac{\varepsilon(1-\delta)\left\|\Lambda\right\|}{\delta} \left(\sqrt{\frac{\mathbf{r}(\Lambda)}{n}}\lor\frac{\mathbf{r}(\Lambda)}{n}\lor\sqrt{ \frac{t}{n}}\lor\frac{t}{n}\right)\\ &+\left(\frac{1}{\delta}\sqrt{\varepsilon(1-\delta)}+\frac{ \varepsilon(1-\delta)\sqrt{\delta}}{\delta^{2}}\right)\sqrt{\left\|\Lambda \right\|\left\|\Sigma\right\|}\sqrt{\mathbf{r}(\Lambda)\lor\mathbf{r}(\Sigma)}\sqrt{ \frac{t+\log(\mathbf{r}(\Lambda)\lor\mathbf{r}(\Sigma))}{n}}\\ &+\frac{\sqrt{\delta\varepsilon(1-\delta)}}{\delta^{2}}\sqrt{ \left\|\Lambda\right\|\left\|\Sigma\right\|}\sqrt{\mathbf{r}(\Lambda)\lor\mathbf{r}( \Sigma)}\frac{(t+\log(\mathbf{r}(\Lambda)\lor\mathbf{r}(\Sigma)))\log n}{n}.\end{split} \tag{11}\] Sketch of proof.: We first note that \[\left\|\widehat{\Sigma}-\Sigma\right\|\leq\delta^{-2}\,\left\|\widehat{\Sigma }^{Y}-\Sigma^{Y}\right\|+\delta^{-2}\left\|\widehat{\Sigma}^{X,\xi,\delta, \varepsilon}\right\|. \tag{12}\] The triangular inequality gives \[\left\|\widehat{\Sigma}^{Y}-\Sigma^{Y}\right\|=\left\|\hat{\Sigma}^{\delta}- \Sigma^{\delta}+\widehat{\Lambda}^{\varepsilon}-\mathbb{E}\hat{\Lambda}^{ \varepsilon}+\widehat{\Sigma}^{X,\xi,\delta,\varepsilon}\right\|\leq\left\| \widehat{\Sigma}^{\delta}-\Sigma^{\delta}\right\|+\left\|\widehat{\Lambda}^{ \varepsilon}-\mathbb{E}\widehat{\Lambda}^{\varepsilon}\right\|+\left\| \widehat{\Sigma}^{X,\xi,\delta,\varepsilon}\right\|,\] where the three empirical matrices are 1. \(\widehat{\Sigma}^{\delta}=n^{-1}\sum_{i=1}^{n}(d_{i}\otimes d_{i})\odot(X_{i} \otimes X_{i})\), the empirical covariance matrix of the \(d_{i}\odot X_{i}\); 2. \(\widehat{\Lambda}^{\varepsilon}=n^{-1}\sum_{i=1}^{n}\left([(1-d_{i})\odot e_{ i}]\otimes[(1-d_{i})\odot e_{i}]\right)\odot(\xi_{i}\otimes\xi_{i})\), the empirical covariance of the \((1-d_{i})\odot e_{i}\odot\xi_{i}\) is such that \(\mathbb{E}\widehat{\Lambda}^{\varepsilon}=\frac{\varepsilon(1-\delta)}{ \delta}\Lambda\); 3. \(\widehat{\Sigma}^{X,\xi,\delta,\varepsilon}=n^{-1}\sum_{i=1}^{n}\left(d_{i} \otimes[(1-d_{i})\odot e_{i}]\right)\odot(X_{i}\otimes\xi_{i})+([(1-d_{i}) \odot e_{i}]\otimes d_{i})\odot(\xi_{i}\otimes X_{i})\) is the empirical covariance between the \(d_{i}\odot X_{i}\) and the \((1-d_{i})\odot e_{i}\odot\xi_{i}\). 
We tackle \(\left\|\widehat{\Sigma}^{\delta}-\Sigma^{\delta}\right\|\) and \(\left\|\widehat{\Lambda}^{\varepsilon}-\mathbb{E}\widehat{\Lambda}^{\varepsilon}\right\|\) similarly as in the proof of Thm 1. We tackle \(\left\|\widehat{\Sigma}^{X,\xi,\delta,\varepsilon}\right\|\) via a dimension-free non-commutative Bernstein inequality [18, 27]. See App E.3 for the full details. As emphasized in [13], the effective rank \(\mathbf{r}(\Sigma)\) provides a measure of the statistical complexity of the covariance learning problem in the absence of any contamination. However, when cell-wise contamination is present, the statistical complexity of the problem may increase if \(\mathbf{r}(\Lambda)\geq\mathbf{r}(\Sigma)\). Fortunately, if the filtering process reduces the proportion of cell-wise contamination from \(\epsilon\) to \(\epsilon^{\prime}\) such that \(\epsilon^{\prime}\operatorname{tr}(\Lambda)\leq\operatorname{tr}(\Sigma)\) and \(\epsilon^{\prime}\ \|\Lambda\|\leq\|\Sigma\|\), then we can effectively mitigate the impact of cell-wise contamination, as highlighted in the following result. **Corollary 1**.: _Let the assumptions of Theorem 3 be satisfied. Assume in addition that \(\epsilon\operatorname{tr}(\Lambda)\leq\operatorname{tr}(\Sigma)\) and \(\epsilon\ \|\Lambda\|\leq\|\Sigma\|\). Then, for any \(t>0\), with probability at least \(1-e^{-t}\)_ \[\begin{split}\left\|\widehat{\Sigma}-\Sigma\right\|\lesssim& \frac{\left\|\Sigma\right\|}{\delta}\left(\sqrt{\frac{\mathbf{r}(\Sigma)}{n}}\lor \frac{\mathbf{r}(\Sigma)}{n}\lor\sqrt{\frac{t}{n}}\lor\frac{t}{n}\right)+\left\| \Sigma\right\|\frac{\varepsilon(1-\delta)}{\delta^{2}}\\ &+\frac{\left\|\Sigma\right\|}{\delta}\frac{\mathbf{r}(\Sigma)}{n}(1- \delta)(t+\log p)+\frac{\left\|\Sigma\right\|}{\delta}\sqrt{\frac{\mathbf{r}(\Sigma) }{n}}\frac{\sqrt{1-\delta}\left[t+\log p\right]\log n}{\sqrt{\delta\,n}}. \end{split} \tag{13}\] Proof.: This is a straightforward consequence of Theorem 3. Minimax lower-bound.The lower bound for missing values still applies to the contaminated case as missing values are a particular case of contamination. However replacing missing values by adversarial contaminations and using the proof argument of [3] for Huber's contamination, we obtain in the cell-wise setting the following minimax lower bound. **Theorem 4**.: _Let \(p,n,\overline{r}\) be strictly positive integers such that \(p\geq\max\{n,2\overline{r}\}\). Let \(X_{1},\ldots,X_{n}\) be i.i.d. random vectors in \(\mathbb{R}^{p}\) with covariance matrix \(\Sigma\in\mathcal{C}_{\overline{r}}\). Let \((\bar{d}_{i,j})_{1\leq i\leq n,1\leq j\leq p}\) be i.i.d. sequence of bernoulli random variables of probability of success \(\delta\in(0,1]\), independent to the \(X_{1},\ldots,X_{n}\). We observe \(n\) i.i.d. vectors \(Y_{1},\ldots,Y_{n}\in\mathbb{R}^{p}\) satisfying (2) where \(\xi_{i}\) are i.i.d. of arbitrary distribution \(Q\). Then there exists two absolute constants \(C>0\) and \(\beta\in(0,1)\) such that:_ \[\inf_{\Sigma}\max_{\Sigma\in\mathcal{C}_{\overline{r}}}\max_{Q}\mathbb{P}_{ \Sigma,Q}\left(\left\|\hat{\Sigma}-\Sigma\right\|\geq C\frac{\left\|\Sigma \right\|}{\delta}\sqrt{\frac{\overline{r(\Sigma)}}{n}}\bigvee\varepsilon(1- \delta)\right)\geq\beta \tag{14}\] _where \(\inf_{\hat{\Sigma}}\) represents the infimum over all estimators of matrix \(\Sigma\) and \(\max_{Q}\) is the maximum over all contamination \(Q\)._ This lower bound combined with the upper bound of Corollary (1) clarifies the impact of the cell-wise contamination parameter \(\epsilon\). 
The proof can be found in Appendix F.3. ## 5 Experiments In our experiments, MV refers to the debiased covariance estimator (5). The synthetic data generation is described in Appendix A. We also performed experiments on real life datasets described in App. B. All experiments were conducted on a 2020 MacBook Air with a M1 processor (8 cores, 3.4 GHz). 1 Footnote 1: Code available at [https://anonymous.4open.science/r/MVCE-C82F](https://anonymous.4open.science/r/MVCE-C82F) ### Missing Values sklearn[20] provides two popular imputation methods: KNNImputer, which imputes the missing values based on the k-nearest neighbours [28], and IterativeImputer, which is inspired by the R package MICE[31]. In Figures 2, 3 and Table 1, we compare our estimator MV defined in (5) to these two imputation methods combined with the usual covariance estimator on synthetic data (see appendix A for details of data generation) in terms of statistical accuracy and execution time. We show that MV achieved a statistical accuracy similar to that of the SOTA IterativeImputer but is significantly faster even on moderately high dimensional data (less than \(10\) milliseconds for MV against about \(28\) minutes for IterativeImputer). MV also significantly beats KNNImputer both in term of statistical accuracy and computation time. We also see that trivial marginal imputation simply does not work. Based on these results, we can also argue that imputation of missing values is not mandatory for accurate estimation of the covariance operator : another viable option is to apply a debiasing correction to the empirical covariance computed on the original data containing missing values. The advantage of this approach is its low computational cost. The SOTA methods in the cell-wise contamination setting are the DI (Detection-Inputation) method [21] and the TSGS method (Two Step Generalised S-estimator) [1]. Both these methods were designed to work in the standard setting \(n>p\) but cannot handle the high-dimensional setting as we already mentioned. Nevertheless, we included comparisons of our methods to them in the standard setting \(n>p\). The code for DI and TSGS are from the R packages cellwise and GSE respectively. Our estimators are referenced as DDCMV (short for Detecting Deviating Cells Missing Values), which uses the DDC detection procedure of [23] to first remove outliers and then compute the debiased covariance of (5) on the filtered data, and tailMV, which detects outliers through thresholding and then uses again (5). We also combined the filtering step with KNNimpute and IterativeImputer to define two additional novel robust procedures which we call DDCKNN and DDCII. To the best of our knowledge, this second alternative approach combining filtering with missing values imputation has never been tested to deal with cell-wise contamination. A detailed description of each method is provided in appendix C. Outlier detection and estimation error under cell-wise contamination on synthetic data.We showed that the error of a covariance estimator under cell-wise contamination depends on the proportion of remaining outliers after a filtration. In table 2 we investigate the filtering power of the Tail Cut and DDC methods in presence of Dirac contamination. We consider the cell-wise contamination setting (2) in the most difficult case \(\epsilon=1\) which means that an entry is either correctly observed or replaced by an outlier (in other words, the dataset does not contain any missing value). 
For each values of \(\delta\) in a grid, the quantities \(\hat{\delta}\) and \(\hat{\epsilon}\) are the proportions of true entries and remaining contaminations after filtering averaged over \(20\) repetitions. The DDC based methods are particularly efficient since the proportion of Dirac contamination drops from \(1-\delta\) to virtually \(0\) for any \(\delta\geq 0.74\). In Fig. 2 and 4, we see that the performance of our method is virtually the same as the oracle OracleMV as long as the filtering procedure correctly eliminates the Dirac contaminations. As soon as the filtering procedure fails, the statistical accuracy brutally collapses and our DDC based estimators no longer do better than the usual empirical covariance. In Table 6 in App. H and Fig. 5, we repeated the same experiment but with a centered Gaussian contamination. Contrarily to the Dirac contamination scenario, we see in Fig. 5 that the statistical accuracy of our DDC based methods slowly degrades as the contamination rate increases but their performance remains significantly better than that of the usual empirical covariance. ### The effect of Cell-wise contamination on real-life datasets We tested the methods on \(8\) datasets from sklearn and Woolridge's book on econometrics [34]. These are low dimensional datasets (less than \(20\) features) representing various medical, social and economic phenomena. We also included \(2\) high-dimensional datasets. See App. B for the list of the datasets. One interesting observation is that the instability of Mahalanobis distance-based algorithms is not limited to high-dimensional datasets. Even datasets with a relatively small number of features can exhibit instability. This can be seen in the performance of DI on the Attend dataset, as depicted in Figure 9, where it fails to provide accurate results. Similarly, both TSGS and DI fail to perform well on the CEOSAL2 dataset, as shown in Figure 11, despite both datasets having fewer than \(15\) features. On the Abalone dataset, once we have removed 4 obvious outliers (which are detected by both DDC and the tail procedure), all estimators reached a consensus with the non-robust classical estimator, meaning that this dataset provides a ground truth against which we can evaluate and compare the performance of robust procedures in our study. To this end, we artificially contaminate \(5\%\) of the cells at random in the dataset with a Dirac contamination and compare the spectral error of the different robust estimators. As expected, TSGS and all our new procedures succeed at correcting the error, however DI becomes unstable (see Table 3). On Breast Cancer, DI also disagrees with every other procedures (see Figure 6), casting some doubt on the reliability of its estimate. We also performed experiments on 2 high-dimensional datasets, where our methods return stable estimates of the covariance (DDCMV99 and DDCMV95 are within \(\approx 3\%\) of each other) and farther away from the classical estimator (See Figures 13 and 14 in App. H). Note also that DDCII's computation time explodes and even returns out-of-memory errors due to the high computation cost of IterativeImputer that we already highlighted in Table 1. ## 6 Conclusion and future work In this paper, we have derived sharp theoretical upper bounds on the spectral error of our covariance estimator robust to missing data with matching minimax lower bounds in the missing value setting. We have also derived the first theoretical guarantees in the cell-wise contamination setting. 
We highlighted in our numerical experimental study that in the missing value setting, our debiased estimator designed to tackle missing values without imputation offers statistical accuracy similar to the SOTA IterativeImputer for a dramatic computational gain. We also found that SOTA algorithms in the cell-wise contamination setting often fail in the standard setting \(p<n\) for dataset with fast decreasing eigenvalues (resulting in approximately low rank covariance), a setting which is commonly encountered in many real life applications. This is due to the fact that these methods use matrix inversion which is unstable to small eigenvalues in the covariance structure and can even fail to return any estimate. In contrast, we showed that our strategy combining filtering with estimation procedures designed to tackle missing values produce far more stable and reliable results. In future work, we plan to improve our theoretical upper and lower bounds in the cell-wise contamination setting to fully clarify the impact of this type of contamination in covariance estimation.
2306.10541
Logistic Regression Modeling Based on Fractal Dimension Curves of Urban Growth
Fractal dimension is an effective scaling exponent of characterizing scale-free phenomena such as cities. Urban growth can be described with time series of fractal dimension of urban form. However, how to explain the factors behind fractal dimension sequences that affect fractal urban growth remains a problem. This paper is devoted to developing a method of logistic regression modeling, which can be employed to find the influencing factors of urban growth and rank them in terms of importance. The logistic regression model comprises three components. The first is a linear function indicating the relationship between time dummy and influencing variables. The second is a logistic function linking fractal dimension and time dummy. The third is a ratio function representing normalized fractal dimension. The core composition is the logistic function that implies the dynamics of spatial replacement. The logistic regression modeling can be extended to other spatial replacement phenomena such as urbanization, traffic network development, and technology innovation diffusion. This study contributes to the development of quantitative analysis tools based on the combination of fractal geometry and conventional mathematical methods.
Yanguang Chen
2023-06-18T12:12:03Z
http://arxiv.org/abs/2306.10541v1
# Logistic Regression Modeling Based on Fractal Dimension Curves of Urban Growth ###### Abstract Fractal dimension is an effective scaling exponent of characterizing scale-free phenomena such as cities. Urban growth can be described with time series of fractal dimension of urban form. However, how to explain the factors behind fractal dimension sequences that affect fractal urban growth remains a problem. This paper is devoted to developing a method of logistic regression modeling, which can be employed to find the influencing factors of urban growth and rank them in terms of importance. The logistic regression model comprises three components. The first is a linear function indicating the relationship between time dummy and influencing variables. The second is a logistic function linking fractal dimension and time dummy. The third is a ratio function representing normalized fractal dimension. The core composition is the logistic function that implies the dynamics of spatial replacement. The logistic regression modeling can be extended to other spatial replacement phenomena such as urbanization, traffic network development, and technology innovation diffusion. This study contributes to the development of quantitative analysis tools based on the combination of fractal geometry and conventional mathematical methods. **Key words:** logistic regression analysis; fractal dimension; urban growth and form; urbanization level; transport network; replacement dynamics ## 1 Introduction There are more than one factor that affects urban growth, and there are also various methods to reveal the influencing factors of urban growth. Qualitative analysis, simple statistical analysis, and mathematical modeling analysis can be used to identify the factors affecting city development. Qualitative analysis cannot determine the significance of influencing factors, and it is also difficult to prioritize the influencing factors. To solve this problem, scholars make use of multiple linear regression analysis, especially stepwise regression analysis. To this end, it is necessary to define a measure that reflects urban growth. Urban population, wealth, and urbanized area become the primary indicators (Arbesman, 2012; Dendrinos, 1992; Nordbeck, 1971Woldenberg, 1973). However, due to scale-free property of urban form (Batty and Longley, 1994; Frankhauser, 1994), urban size and area cannot be objectively determined, say nothing about urban wealth (Chen and Lin, 2009). In this case, fractal dimension can be utilized to measure the space filling degree of cities. The time series of fractal dimension of urban form in different years take on sigmoid curve. Thus logistic function can be employed to model urban growth (Chen, 2012). The prediction model of urban growth based on fractal dimension emerge (Chen, 2018). In this type of models, the independent variable is time, and the dependent variable is fractal dimension of urban form. If there is no real causal relationship between an independent variable and a dependent variable, then the independent variable belongs to a dummy variable. Dummy variables include time variable, distance variable, and categorical variables (nominal variable, indicator variables). Time variable is termed time dummy (Diebold, 2007). The time dummy variable usually suggests real influencing factors of urban growth underlying the temporal variable. 
These influencing factors do not form a direct linear relationship with the fractal dimension sequence, and simple multiple linear regression analysis cannot be used to determine the importance, priority, and order of the influencing factors. To solve this problem, this paper is devoted to deriving a logistic regression model based on fractal dimension series and multiple explanation variables. The research goal is to construct a framework for logistic regression analysis for geographical systems. In Section 2, two sets of logistic regression models are introduced into urban study by mathematical derivation and analogical analysis. In Section 3, the framework of logistic regression model is outlined, the method is extended to urbanization and transport network research, and several related questions are discussed. Finally, in Section 4, the discussion is concluded by summarizing the main points of this work. ## 2 Models ### Logistic regression modeling based on fractal dimension curve Due to the scaling invariance nature of urban structure, conventional measures such as perimeter, area, density, etc., cannot effectively describe urban morphology. Fractal dimension can be employed to characterize degrees of space filling, spatial evenness, and spatial dependence. Spatial evenness and spatial difference are two sides of the same coin. A sample path of a time series of fractal dimension can be gained by means of observational data of urban form at different times. Because of squashing effect, a sample path of fractal dimension sequence forms an S-liked curve, which is termed fractal dimension curve of urban growth (Figure 1). The fractal dimension curve can be modeled by a type of sigmoid function (Chen, 2012; Chen, 2018). The simplest and most common sigmoid function is the logistic function (Mitchell, 1997). Therefore, in many case, a fractal dimension curve of urban form and growth can be expressed as \[D(t)=\frac{D_{\text{max}}}{1+(D_{\text{max}}\ /\ D_{(0)}-1)e^{-kt}}\, \tag{1}\] in which \(D(t)\) refers to the fractal dimension of urban form at time \(t\), \(D_{(0)}\) is the initial value of fractal dimension at time \(t\)=0, \(D_{\text{max}}\) is the capacity parameter of fractal dimension, i.e., the upper limit of fractal dimension, and \(k\) is the initial growth rate of fractal dimension. The logistic model of fractal dimension curve of urban form based on time series can be used to predict urban growth. By using equation (1), it is possible to estimate the carrying capacity of urban fractal dimension and determine the peak of urban growth rate. Generally speaking, when \(D(t)\)=\(D_{\text{max}}\)/2, urban growth rate reaches its peak. Moreover, the urban growth process can be divided into four stages by means of equation (1). However, using equation (1) we cannot explain urban growth rate and different stages. In equation (1), the variable of time, \(t\), is a dummy variable, which is termed time dummy (Diebold, 2007). A dummy variable is usually not a true explanatory variable, but a substitute for the explanatory variables. The true explanatory variables may be hidden behind the dummy variable. Suppose that there are \(m\) real explanatory variables behind time dummy, that is, \(x_{j}\)(\(j\)=1, 2,..., \(m\)). To reveal the true explanatory variables, let's consider the simplest case where the dummy variable is a linear function of set of explanatory variables. 
Thus we have a linear decomposition relation as below: \[t=\frac{1}{k}\left(c+b_{1}x_{1}(t)+b_{2}x_{2}(t)+\cdots+b_{m}x_{m}(t)\right)\, \tag{2}\] where \(m\) denotes the number of influence factors, \(c\) is a constant, and \(b_{j}\) is the \(j\)th linear regressive coefficient. Substituting equation (2) into equation (1) yields a logistic regression model based on fractal dimension as below \[\frac{D(t)}{D_{\max}}=\frac{1}{1+e^{-a-b_{1}x_{1}(t)-b_{2}x_{2}(t)-\cdots-b_{m}x_{m}(t)}}=\frac{1}{1+\exp(-\sum_{j=0}^{m}b_{j}x_{j}(t))}\, \tag{3}\] in which \(a\) and \(b_{j}\) refer to logistic regression coefficients, \(a\)=\(b_{0}\), \(x_{0}\)=1 (\(j\)=0), and the normalized fractal dimension \(D(t)\)/\(D_{\max}\) represents the fractal dimension ratio, indicating space-filling ratio and spatial evenness degree. The fractal dimension ratio can be expressed as \(Q(t)\)=\(D(t)\)/\(D_{\max}\). The parameter \(a\) can be expressed as \[a=b_{0}=c-\ln(D_{\max}/D_{0}-1). \tag{4}\] Using the symbol \(b_{0}\) is only for simple expression. Accordingly, fractal redundancy can be defined as \[1-\frac{D(t)}{D_{\max}}=\frac{\exp(-\sum_{j=0}^{m}b_{j}x_{j}(t))}{1+\exp(-\sum _{j=0}^{m}b_{j}x_{j}(t))}\, \tag{5}\] where the fractal redundancy, \(1-D(t)/D_{\max}\), implies space-saving ratio and spatial difference degree. Based on equations (3) and (5), a logit transformation of fractal dimension odds can be obtained as follows \[\ln\frac{D(t)}{D_{\max}-D(t)}=\ln O(t)=a+\sum_{j=1}^{m}b_{j}x_{j}(t)\, \tag{6}\] where \(D(t)/(D_{\max}-D(t))\) represents the fractal dimension odds. It can be expressed as \[O(t)=\frac{D(t)}{D_{\max}-D(t)}\, \tag{7}\] in which \(O(t)\) denotes the fractal dimension odds, or space-saving ratio, at time \(t\). For simplicity, let \(D_{\max}\)=\(d\), where \(d\) denotes the Euclidean dimension of the embedding space. Thus, equation (6) changes to the following form \[\ln\frac{D(t)}{d-D(t)}=\sum_{j=0}^{m}b_{j}x_{j}(t)=a+\sum_{j=1}^{m}b_{j}x_{j}( t). \tag{8}\] Generally speaking, \(d\)=2. Using equation (6) or (8), we can make a logistic regression analysis based on time series of fractal parameters and related observed data of possible explanatory variables. Logistic modeling of fractal dimension curves based on time series can be generalized to cross-sectional data. A cross-sectional dataset in a geographical region can be obtained according to city rank at a given time. In theory, city size distribution in an urban system corresponds to the urban growth process. Small cities represent youth cities, while large cities represent adult cities (Batty and Longley, 1994). Based on the rank-size distribution, equation (6) can be revised as \[\ln\frac{D(r)}{D_{\text{max}}-D(r)}=a+\sum_{j=1}^{m}b_{j}x_{j}(r)\,. \tag{9}\] where \(r\) denotes city rank, and \(D(r)\) is the fractal dimension of urban form of the city of rank \(r\). Accordingly, equation (8) can be rewritten as \[\ln\frac{D(r)}{2-D(r)}=\ln O(r)=a+\sum_{j=1}^{m}b_{j}x_{j}(r)\,. \tag{10}\] Using equation (9) or (10), we can make a logistic regression analysis based on rank series of fractal parameters and related observed data. By means of the regression results, it is possible to determine which factors significantly affect urban growth and which factors do not have a significant impact on urban growth. Among the factors that have a significant impact, the primary and secondary factors can be identified and ranked. So it becomes possible to explain the process of urban growth. 
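One simple way to fit equation (8) in practice is an ordinary least-squares regression of the log-odds of the fractal dimension on the candidate explanatory variables. The following sketch is illustrative Python with synthetic data; the variable names are ours and not taken from the original study.

```python
import numpy as np


def fit_fractal_logit(D: np.ndarray, X: np.ndarray, d_max: float = 2.0):
    """Fit equation (8), ln(D/(d_max - D)) = a + sum_j b_j x_j, by ordinary least squares.
    D is the fractal dimension series (values strictly between 0 and d_max);
    X is an (n x m) array of candidate explanatory variables."""
    logit_d = np.log(D / (d_max - D))               # logarithm of fractal dimension odds
    design = np.column_stack([np.ones(len(D)), X])  # intercept column corresponds to a = b_0
    coeffs, *_ = np.linalg.lstsq(design, logit_d, rcond=None)
    return coeffs                                   # [a, b_1, ..., b_m]


# Illustrative example with a logistic fractal dimension path and two synthetic variables.
rng = np.random.default_rng(1)
t = np.arange(30)
D = 2.0 / (1.0 + (2.0 / 0.25 - 1.0) * np.exp(-0.2 * t))
X = np.column_stack([t + rng.normal(0, 0.5, 30), rng.normal(0, 1, 30)])
print(fit_fractal_logit(D, X))
```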
Figure 1: **A diagrammatic sketch for sigmoid growth and squashing effect of fractal dimension** **Note:** The parameter values of both the logistic model and the quadratic logistic model of fractal dimension growth are as follows: \(D_{\max}\)=2, \(D_{0}\)=0.25, \(k\)=0.035. The lower limit of fractal dimension is the topological dimension, \(d_{\text{t}}\)=0, and the upper limit is the Euclidean dimension of the embedding space, \(d_{\text{E}}\)=2. The squashing between the topological dimension and the Euclidean dimension makes the fractal dimension increase along an S-shaped curve. ### Logistic regression modeling based on quadratic fractal dimension curve A mathematical model has its effective scope of application. There are no absolutely general models for social and economic phenomena. The conventional logistic function can be used to describe the fractal dimension curves of European and American cities, as well as the fractal dimension curves of some cities along the southeast coast of China. However, this model is not suitable for most mainland Chinese cities, especially northern Chinese cities. The fractal dimension curves of the great majority of Chinese cities can be modeled by a quadratic logistic function as follows (Chen, 2018) \[D(t)=\frac{D_{\max}}{1+(D_{\max}\ /\ D_{0}-1)e^{-(kt)^{2}}}\,, \tag{11}\] which is similar in macro structure to equation (1). If the square of time, \(t^{2}\), is a linear function of a set of explanatory variables, we have \[t^{2}=\frac{1}{k^{2}}\left(c+b_{1}x_{1}(t)+b_{2}x_{2}(t)+\cdots+b_{m}x_{m}(t)\right)\,. \tag{12}\] Substituting equation (12) into equation (11) yields equation (3). The corresponding logistic regression model is the same as equation (6), which can be replaced by equation (8) for simplicity. The appropriate mathematical expressions of logistic regression models for quadratic logistic growth remain to be determined by further research. Another possibility is that time, \(t\), rather than the square of time, \(t^{2}\), is a linear function of multiple explanatory variables, that is, the relation between the time dummy and the arguments can be expressed by equation (2) instead of equation (12). Substituting equation (2) into equation (11) yields \[\sqrt{\ln(\frac{D(t)}{D_{\max}-D(t)}\ /\ \frac{D_{0}}{D_{\max}-D_{0}})}=c+\sum_{j=1}^{m}b_{j}x_{j}(t)\,. \tag{13}\] For simplicity, equation (13) can be replaced by \[\sqrt{\ln(\frac{D(t)}{d-D(t)}\ /\ \frac{D_{0}}{d-D_{0}})}=\sqrt{\ln(\frac{O(t)}{O_{0}})}=c+\sum_{j=1}^{m}b_{j}x_{j}(t)\,. \tag{14}\] Only through statistical experiments based on observational data can we determine whether to use equations (6) and (8) or equations (13) and (14). The time series data can be replaced by cross-sectional data; thus, equation (13) can be re-expressed as \[\sqrt{\ln(\frac{D(r)}{D_{\max}-D(r)}/\frac{D_{0}}{D_{\max}-D_{0}})}=c+\sum_{j=1}^{m}b_{j}x_{j}(r)\,. \tag{15}\] For simplicity, equation (15) can be changed to the following form \[\sqrt{\ln(\frac{D(r)}{d-D(r)}/\frac{D_{0}}{d-D_{0}})}=\sqrt{\ln(\frac{O(r)}{O_{0}})}=c+\sum_{j=1}^{m}b_{j}x_{j}(r)\,. \tag{16}\] Equations (15) and (16) can be applied to the fractal dimension dataset of an urban system to perform a horizontal logistic regression analysis. ## 3 Discussion The basic framework of logistic modeling for fractal dimension curves has been outlined above. The methodological framework comprises three components, which can be abstracted as three mathematical equations. 
The first component is a linear equation, that is \[z=b_{0}+b_{1}x_{1}(t)+b_{2}x_{2}(t)+\cdots+b_{m}x_{m}(t)\,. \tag{17}\] which indicates the relationship between the time dummy and the influencing factors. In equation (17), \(z\) proves to be the logarithm of the fractal dimension odds. The second component, a key part, is a logistic function \[y=\frac{1}{1+e^{-z}}\,, \tag{18}\] which indicates the squashing effect of fractal dimension growth. The third component is a ratio function, which represents the definition of the output variable. Normalizing the fractal dimension yields a probability measure, \(p(t)\), which serves as a response variable, \(y\), as follows \[y=p(t)=\frac{D(t)}{D_{\max}}\,, \tag{19}\] in which \(D_{\max}\) can be set to \(d\)=2 for simplicity. The relationship between the logarithm of the fractal dimension odds and the time dummy can be expressed as \[z=\ln(\frac{D(t)}{D_{\max}-D(t)})=\begin{cases}kt\\ (kt)^{2}\end{cases}\,. \tag{20}\] For the common logistic growth, we have \(z\)=\(kt\), and for the quadratic logistic growth, we have \(z\)= \((kt)^{2}\). All the analysis processes of logistic regression form a three-layer artificial neural network model, which is termed an error back propagation (EBP) network (Figure 2). Figure 2: **A sketch map for three components of logistic modeling of fractal dimension curve of urban form and growth** The fractal dimension value of a complex system depends on measurement and calculation methods. Therefore, the method of fractal dimension determination influences the effect of logistic modeling of urban growth. There are at least five sets of methods to define fractal dimension (Takayasu, 1990). All these methods can be applied to different aspects of fractal cities. Among all these methods, two are commonly used to characterize urban morphology and growth. One is the box-counting method (Benguigui _et al_, 2000; Feng and Chen, 2010; Jiang and Liu, 2012; Shen, 2002), and the other is the radius-area scaling method (Batty and Longley, 1994; Frankhauser, 1998a; Jiang and Zhou, 2006; White and Engelen, 1993). The former is mathematically equivalent to the grid method (Frankhauser, 1998b), and the latter is equivalent to the radius-density scaling method and can be replaced by the radius-number scaling method (Batty and Longley, 1994; Longley _et al_, 1991). The fractal dimension defined by radius-area scaling is termed the radial dimension, which is actually a local fractal parameter since it relies on the selection of the measurement center. Comparatively speaking, the fractal dimension defined by the box-counting method is a global dimension, which can also be termed the grid dimension (Frankhauser, 1998b). For a regular fractal, the box dimension may be equal to the radial dimension (Batty and Longley, 1994). However, for a random fractal, the two types of dimension values are not the same. Empirical studies suggest that the fractal dimension sequence measured using the box method can better exhibit S-shaped curve features. The box method can be divided into two measurement methods: one is the fixed box method (Batty and Longley, 1994; Jiang and Liu, 2012; Shen, 2002), and the other is the variable box method (Benguigui _et al_, 2000; Feng and Chen, 2010). For the first method, the same largest box is used for different years; for the second method, the largest box is determined by the city size in each year. 
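As a concrete illustration of the box-counting measurement discussed here, the sketch below estimates the grid dimension of a binary built-up/non-built-up raster by covering it with boxes of decreasing size and regressing the logarithm of the occupied-box counts on the logarithm of box size. The random pattern is purely illustrative and merely stands in for a digitized urban land-use map.

```python
import numpy as np

def box_counting_dimension(grid, box_sizes):
    """Estimate the grid (box-counting) dimension of a binary built-up map.

    grid: 2D boolean array, True where the cell is built up.
    box_sizes: box edge lengths in cells; with the fixed box method the same
    largest extent is used for every year of the study period.
    """
    counts = []
    for s in box_sizes:
        occupied = 0
        for i in range(grid.shape[0] // s):
            for j in range(grid.shape[1] // s):
                if grid[i * s:(i + 1) * s, j * s:(j + 1) * s].any():
                    occupied += 1
        counts.append(occupied)
    # Fit log N(s) = -D * log s + const; the slope magnitude estimates D.
    slope, _intercept = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Illustrative (assumed) example: a random built-up pattern on a 256 x 256 grid.
rng = np.random.default_rng(0)
land_use = rng.random((256, 256)) < 0.3
print(round(box_counting_dimension(land_use, [128, 64, 32, 16, 8, 4, 2]), 3))
```

Applying the same routine to maps of successive years yields the fractal dimension sequence to which the logistic regression models above can be fitted.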
The fractal dimension sequence based on the fixed box method can better reflect the logistic process of space filling in an urban administrative district, while the fractal dimension sequence based on the variable box method can better reflect the logistic process of space filling in the urbanized area. It can be seen that logistic regression modeling needs to be carried out according to specific research objectives. It is necessary to compare the logistic regression based on fractal theory with the logistic regression based on statistics. In multivariate statistical analysis, there is a method called logistic regression, including binary logistic regression and multinomial logistic regression. The multinomial logistic regression can be decomposed into binary logistic regressions. There is an analogy between the binary logistic regression in multivariate statistical analysis and the logistic regression modeling for fractal dimension curves. Drawing a comparison between the conventional logistic regression and the fractal-based logistic regression is helpful for understanding the principle and methods developed in this work. The similarities and differences between the two types of logistic regression are tabulated as follows (Table 1). Both the conventional logistic regression and the fractal-based logistic regression bear an analogy with the above-mentioned three-layer EBP artificial neural network model. \begin{table} \begin{tabular}{|p{85.4pt}|p{113.8pt}|p{113.8pt}|} \hline **Type** & **Logistic regression in multivariable statistics** & **Logistic regression for fractal dimension analysis** \\ \hline **Rationale** & Logistic regression modeling & Logistic regression modeling \\ \hline **Functions** & Three functions: step function, logistic function, linear function & Three functions: ratio function, logistic function, linear function \\ \hline **Input variables** & Three types: metric variable, rank variable, categorical variable & Two types: metric variable, categorical variable \\ \hline **Output variable** & Categorical variable & Metric variable \\ \hline **Algorithm** & Maximum likelihood estimate (MLE) method & Ordinary least square (OLS) method \\ \hline \end{tabular} \end{table} Table 1: A comparison between conventional logistic regression analysis and fractal-based logistic regression analysis The above modeling and analysis methods can be extended to other branches of geography. Similar logistic modeling analysis can be conducted as long as logistic growth or generalized logistic growth phenomena are present. Therefore, the methods can be generalized to analyze the urbanization curve, traffic network development, technology innovation diffusion, and so on. Urban form and growth belong to intraurban geography (De Keersmaecker _et al_, 2003), while transport networks belong more to interurban geography (De Blij and Muller, 1997). Both intraurban geography and interurban geography involve urbanization. Urbanization is a process of urban-rural population replacement (Rao _et al_, 1988; Rao _et al_, 1989). There is an analogy between the fractal dimension curve and the urbanization curve. The increase of the urbanization level takes on a squashing effect, and urbanization curves can be modeled by using a logistic function or a quadratic logistic function (Cadwallader, 1996; Davis, 1969; Pacione, 2009; Zhou, 1995). 
Define the level of urbanization as follows (Karmeshu, 1988; United Nations, 1980) \[L(t)=\frac{u(t)}{u(t)+r(t)}\,, \tag{21}\] where \(L(t)\) denotes the level of urbanization at time \(t\), and \(u(t)\) and \(r(t)\) represent the urban population and the rural population, respectively. Thus we have a logistic regression model as follows \[\ln\frac{L(t)}{1-L(t)}=\ln V(t)=b_{0}+\sum_{j=1}^{m}b_{j}x_{j}(t)\,, \tag{22}\] in which \(V(t)\)= \(L(t)\)/(\(1\)-\(L(t))=u(t)\)/\(r(t)\) refers to the urban-rural ratio, representing another measure of urbanization (United Nations, 2004). The other notation is the same as in equation (6). Using equation (22) for logistic regression analysis, we can reveal the influencing factors of urbanization and distinguish between primary and secondary factors. There is an inherent relationship between the development of transportation networks and the level of urbanization. In a similar way, the modeling methods can also be generalized to the growth curve of the \(\beta\) index of a transport network. A network is composed of nodes and edges. The \(\beta\) index is defined as the ratio of edge number \(u\) to node number \(v\), that is, \(\beta\)=\(u\)/\(v\). The largest number of edges is \(u\)=\(v\)(\(v\)-\(1\))/\(2\). Thus the maximum value of the index is \(\beta_{\max}\)=(\(v\)-\(1\))/\(2\). The growth curve of the \(\beta\) index can be modeled with the Boltzmann equation or the quadratic Boltzmann equation. Based on normalized variables, the Boltzmann equation and the quadratic Boltzmann equation change to the logistic function and the quadratic logistic function, respectively. Thus, a logistic regression model can be given as below \[\ln\frac{\beta(t)}{\beta_{\max}-\beta(t)}=b_{0}+\sum_{j=1}^{m}b_{j}x_{j}(t)\,. \tag{23}\] By using equation (23) for logistic regression analysis, we can reveal the influencing factors on the development of transport networks and then rank them in order of priority. Further, this method can be extended to the study of the diffusion process of technological innovation. In a region, the cumulative acceptance of an innovation over time takes on an S-shaped curve (Morrill _et al_, 1988). If the sigmoid curve can be described with a logistic function, it indicates a replacement process. The process of technological innovation diffusion is actually a process of replacing old technologies with new ones (Hermann and Montroll, 1972; Fisher and Pry, 1971). Therefore, logistic regression modeling can be used to find the influencing factors of technology innovation diffusion. The model is as follows \[\ln\frac{\varphi(t)}{1-\varphi(t)}=b_{0}+\sum_{j=1}^{m}b_{j}x_{j}(t)\,. \tag{24}\] where \(\varphi(t)\) refers to the ratio of new technology to all technologies. The novelty of this article lies in the invention of an urban growth analysis model based on fractal dimension and multiple explanatory variables. Similar studies seem not to have been reported before. The shortcomings of this study include three aspects. Firstly, the explanatory variables for urban growth in China are all statistical data. Compared with census data and big data generated from the bottom up, the reliability of statistical data is lower. However, as an example of an analysis method, this problem is not significant. Secondly, there are no analysis cases of Western cities. The fractal dimension curve of urban growth in Europe and America follows a conventional logistic function, while the fractal dimension curve of urban growth in China satisfies a quadratic logistic function (Chen, 2018). 
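Whether a city's fractal dimension series follows the conventional logistic model of equation (1) or the quadratic logistic model of equation (11) can be checked empirically before choosing between the two regression forms. A minimal sketch, assuming a synthetic dimension series and using nonlinear least squares, is given below.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, d_max, d0, k):        # equation (1)
    return d_max / (1 + (d_max / d0 - 1) * np.exp(-k * t))

def quad_logistic(t, d_max, d0, k):   # equation (11)
    return d_max / (1 + (d_max / d0 - 1) * np.exp(-(k * t) ** 2))

# Illustrative (assumed) fractal dimension series observed every five years.
t = np.arange(0, 50, 5)
D = np.array([1.21, 1.25, 1.33, 1.45, 1.58, 1.69, 1.77, 1.82, 1.85, 1.87])

for name, model in [("logistic", logistic), ("quadratic logistic", quad_logistic)]:
    params, _ = curve_fit(model, t, D, p0=[2.0, 1.2, 0.05], maxfev=10000)
    r2 = 1 - np.sum((D - model(t, *params)) ** 2) / np.sum((D - D.mean()) ** 2)
    print(f"{name:20s} D_max={params[0]:.3f} D_0={params[1]:.3f} k={params[2]:.3f} R^2={r2:.4f}")
```

The model with the better fit then determines whether equations (6) and (8) or equations (13) and (14) should be used for the subsequent regression analysis.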
Comparing the modeling results for European and American cities with those for Chinese cities would be more enlightening. Unfortunately, there are no systematic data for European and American cities. Thirdly, only the linear relationship between the time dummy and the influencing variables is taken into account. The variable relationships may be nonlinear, and the logistic regression may be replaced by other types of nonlinear models. All these problems remain to be explored in the future. ## 4 Conclusions So far, a preliminary framework for the logistic regression modeling of the fractal dimension curve of urban form and growth has been constructed. The main points of this study can be summarized as follows. First, based on the sigmoid functions of fractal dimension curves, a logistic regression modeling method can be developed for multivariate analysis of urban growth. The dependent variable is the logarithm of the fractal dimension odds, and the covariates include various possible factors that influence city development. By performing stepwise regression analysis, we can determine the significant factors affecting urban growth and rank them in order of importance. The statistical tests are readily available, and significance can be judged directly using the statistics of multiple regression analysis. Second, logistic regression analysis of the fractal dimension of urban form and growth can be divided into longitudinal analysis and transverse analysis. By means of the time series of the fractal dimension of a city, we can perform a longitudinal logistic regression analysis of urban growth. This type of study belongs to intraurban geography. By using the cross-sectional dataset of an urban system, we can perform a transverse logistic regression analysis. This type of study belongs to interurban geography. For a system of cities, these two methods can complement each other. Third, the modeling method can be generalized to other phenomena of logistic growth. The level of urbanization can be modeled by using a logistic function or a quadratic logistic function. Therefore, the modeling method can be applied to the urbanization curve. The \(\beta\) index of a transport network can be described with the Boltzmann equation or the quadratic Boltzmann equation. Based on normalized variables, the Boltzmann equation and the quadratic Boltzmann equation can be turned into the logistic function and the quadratic logistic function, respectively. So, the logistic modeling method can be applied to the \(\beta\) index of a transport network. Moreover, technology innovation diffusion is a type of replacement dynamics; the increase in the proportion of new technologies follows an S-shaped curve and can be analyzed by a similar logistic modeling method. **Acknowledgement:** This research was sponsored by the National Natural Science Foundation of China (Grant No. 42171192). The support is gratefully acknowledged.
2310.13976
Advancing Requirements Engineering through Generative AI: Assessing the Role of LLMs
Requirements Engineering (RE) is a critical phase in software development including the elicitation, analysis, specification, and validation of software requirements. Despite the importance of RE, it remains a challenging process due to the complexities of communication, uncertainty in the early stages and inadequate automation support. In recent years, large-language models (LLMs) have shown significant promise in diverse domains, including natural language processing, code generation, and program understanding. This chapter explores the potential of LLMs in driving RE processes, aiming to improve the efficiency and accuracy of requirements-related tasks. We propose key directions and SWOT analysis for research and development in using LLMs for RE, focusing on the potential for requirements elicitation, analysis, specification, and validation. We further present the results from a preliminary evaluation, in this context.
Chetan Arora, John Grundy, Mohamed Abdelrazek
2023-10-21T11:29:31Z
http://arxiv.org/abs/2310.13976v2
# Advancing Requirements Engineering through Generative AI: Assessing the Role of LLMs ###### Abstract Requirements Engineering (RE) is a critical phase in software development including the elicitation, analysis, specification, and validation of software requirements. Despite the importance of RE, it remains a challenging process due to the complexities of communication, uncertainty in the early stages and inadequate automation support. In recent years, large-language models (LLMs) have shown significant promise in diverse domains, including natural language processing, code generation, and program understanding. This chapter explores the potential of LLMs in driving RE processes, aiming to improve the efficiency and accuracy of requirements-related tasks. We propose key directions and SWOT analysis for research and development in using LLMs for RE, focusing on the potential for requirements elicitation, analysis, specification, and validation. We further present the results from a preliminary evaluation, in this context. **Keywords.** Requirements Engineering, Generative AI, Large Language Models (LLMs), Natural Language Processing, Software Engineering ## 1 Introduction Requirements Engineering (RE) is arguably the most critical task in the software development process, where the needs and constraints of a system are identified, analyzed, and documented to create a well-defined set of requirements [20]. Organizations and project teams often overlook or do not understand the significance of RE and its impact on project success [14]. Some underlying reasons for the lack of effort and resources spent in RE include (i) time, budget and resource constraints, (ii) inadequate training and skills, (iii) uncertainty and ambiguity in early stages, which teams consider as challenging, causing them to cut corners in the RE process; (iv) inadequate tools and automation support [5], and (v) emphasis on an implementation-first approach instead [15]. These lead to significant challenges in the later stages of development as issues related to inconsistent, incomplete and incorrect requirements become increasingly difficult to resolve, resulting in increased development costs, delays, and lower-quality software systems [20]. In this chapter, we contend that the recent advances in Large-Language Models (LLMs) [13] might be revolutionary in addressing many of these RE-related challenges noted above, though with some caveats. LLMs are advanced AI models designed to process and generate human language by learning patterns and structures from vast amounts of text data. These models have made significant strides in natural language processing (NLP) tasks and are particularly adept at handling complex language-based challenges. LLMs including OpenAI's Generative Pre-trained Transformer (GPT) series and Google's Bidirectional Encoder Representations from Transformers (BERT) [8] and LaMDA [24], learn to comprehend and generate human language by predicting the most probable next word in a given sequence, capturing the probability distribution of word sequences in natural language (NL). OpenAI's ChatGPT1 and Google's Bard2, built on the advancements of the LLMs, are examples of chatbot platforms designed to facilitate interactive and dynamic text-based conversations. When a user provides input to ChatGPT or Bard, the model processes the text and generates a contextually appropriate response based on the patterns learned during the training process. 
Footnote 1: [https://chat.openai.com/](https://chat.openai.com/) Footnote 2: [https://bard.google.com/](https://bard.google.com/) A large majority of requirements are specified using natural language (NL). LLMs thus have the potential to be a 'game-changer' in the field of RE. This could be by automating and streamlining several crucial tasks and helping to address many of the RE challenges mentioned earlier. With the focus on automated code generation using LLMs, delivering concise and consistently unambiguous specifications to these models (as prompts) becomes paramount. This underscores the ever-growing significance of RE in this new era of generative AI-driven software engineering. This chapter explores the potential of LLMs to transform the RE processes. We present a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis for applying LLMs in all key RE stages, including requirements elicitation, analysis, and specification. We also discuss examples from a preliminary evaluation as motivation for using LLMs in all RE stages. _Preliminary Evaluation Context._ We performed a preliminary evaluation on a real-world app (pseudonym ActApp), encouraging patients with type-2 diabetes (T2D) to remain active. To ensure that the app is effective, engaging, and personalized, the ActApp team implemented a machine learning (ML) model in the background to learn from user behaviour and preferences and suggest appropriate reminders and activities. The team has a mix of experienced engineers and an ML scientist (with little understanding of RE). Our preliminary evaluation and the examples in the chapter are done using ChatGPT (GPT-3.5). _Structure._ Section 2 provides an overview of our vision of the role of LLMs in RE process. Sections 3, 4, 5 and 6 cover the four major RE stages, i.e., elicitation, specification, analysis and validation, respectively. Section 7 presents our preliminary evaluation results. Section 8 covers the lessons learned, and Section 9 concludes the chapter. ## 2 LLMs-driven RE Process Fig. 1 provides an overview of our vision of an LLMs-driven RE process (an adaptation of RE process by Van Lamsweerde [14]). The RE process can be broadly divided into four stages: requirements elicitation (domain understanding and elicitation), specification (specification and documentation), analysis (evaluation and negotiation), and validation (quality assurance). We note that the exact instantiation and contextualization of LLMs in RE will depend on the problem domain and the project. For instance, implementing the LARRE framework for ActApp might be different from a safety-critical system. We, in this book chapter, provide a broad perspective on the role of LLMs in RE, which should be applicable to a wide range of projects, as the RE stages discussed are common and can be generalized across domains and systems, with finer refinements required in some cases. LLMs can be employed differently for automating RE tasks, e.g., as they have been successfully applied for ambiguity management [9]. In this chapter, we specifically focus on prompting by requirements analysts or other stakeholders directly on generative AI agents, e.g., ChatGPT or fine-tuned LLMs RE agents built on top of these agents. 
One would generate multiple agents based on LLMs for interaction (via prompting) with the stakeholders (e.g., domain experts, engineering teams, clients, requirements engineers and end users) and potentially with each other for eliciting, specifying, negotiating, analysing, validating requirements, and generating other artefacts for quality assurance. Prompting is a technique to perform generative tasks using LLMs [11]. Prompts are short text inputs to the LLM that provide information about the task the LLM is being asked to perform. Prompt engineering is designing and testing prompts to improve the performance of LLMs and get the desired output quality. Figure 1: LLMs-driven RE Process Overview. Prompt engineers use their knowledge of the language, the task at hand, and the capabilities of LLMs to create prompts that are effective at getting the LLM to generate the desired output [26]. Prompt engineering involves selecting appropriate prompt patterns and prompting techniques [26]. Prompt patterns refer to different templates targeted at specific goals, e.g., the Output Customization pattern focuses on tailoring the format or the structure of the output by LLMs. Other generic templates include formatting prompts consistently in a "Context, Task and Expected Output" format. For instance, one can use a _persona_ for output customization, wherein the agent plays a certain role when generating the output, e.g., the patient in ActApp. A prompting technique refers to a specific strategy employed to get the best output from the LLM agents. Some of the well-known prompting techniques include zero-shot prompting [19], few-shot prompting [17], chain-of-thought prompting [25] and tree-of-thought prompting [27]. In this context, prompt engineering combinations must be empirically evaluated in RE for different systems and domains. In each section, we explore the role of LLMs in each RE stage with a SWOT analysis. The insights for the SWOT analysis were systematically derived from a combination of our direct experiences with LLMs, feedback gathered from practitioner interactions, and our preliminary evaluation. ## 3 Requirements Elicitation ### Elicitation Tasks Requirements Elicitation encompasses pre-elicitation groundwork (as-is analysis and stakeholder analysis) and core elicitation activities with stakeholders (interviews and observations) [20]. The main objective is to identify and document the project information, system needs, expectations, and constraints of the solution under development. The key tasks in elicitation include domain analysis, as-is analysis, stakeholder analysis, feasibility analysis, and conducting elicitation sessions with the identified stakeholders using techniques such as interviews and observations. While the elicitation process is methodical, it is inherently dynamic, often necessitating iterative sessions as requirements evolve and new insights emerge from stakeholders. Requirements elicitation is also intensely collaborative, demanding constant feedback and validation from diverse stakeholders to ensure clarity and alignment. Some prevalent challenges associated with requirements elicitation involve the lack of domain understanding [22], unknowns (i.e., known and unknown unknowns) [23], communication issues due to language barriers or technical jargon [6], and the lack of a clear understanding of what needs to be built in the early stages [10]. 
In addition, the current elicitation techniques fall short in human-centric software development, i.e., ensuring adequate representation from all potential user groups based on their human-centric factors, such as age, gender, culture, language, emotions, preferences, accessibility and capabilities [12]. External influences, such as evolving legal stipulations and legal compliance, also play a pivotal role in shaping the elicitation process. Furthermore, with the rapidly advancing technological landscape, the existing elicitation processes often fail to capture the system requirements precisely, e.g., in the case of AI systems, bias, ethical considerations, and the integration of non-deterministic AI components into larger software systems [2]. ### Role of LLMs LLMs can address numerous key challenges in the elicitation phase, including domain analysis. LLMs can rapidly absorb vast amounts of domain-specific literature, providing a foundational structuring and acting as a proxy domain knowledge source [16]. They can assist in drawing connections, identifying gaps, and offering insights based on the existing literature, and they can automate tasks such as as-is analysis, domain analysis and regulatory compliance checks. In addition to stakeholder communication, leveraging LLMs would require other inputs such as existing domain or project-specific documentation (e.g., for fine-tuning LLMs) and regulations (e.g., GDPR). While LLMs have access to domain knowledge, it is difficult for them to replace domain specialists' intuition, experience, and expertise. For example, in ActApp the nuanced understanding of how specific exercises influence a patient's glucose or hormonal levels rests with medical professionals such as endocrinologists, who are irreplaceable in RE. LLMs help identify unknowns by analyzing existing documentation and highlighting areas of ambiguity or uncertainty. LLMs can help complete requirements or suggest alternative ideas that requirements analysts might otherwise have missed, drawing on their large corpus of training data and the connections within it. LLMs can assist with translating complex technical jargon into plain language and aid stakeholders from different linguistic backgrounds, e.g., translating medical terminology in ActApp for requirements analysts or translating domain information from one language to another. LLMs play a vital role in human-centric RE. They can analyze diverse user feedback, like app reviews, ensuring all user needs are addressed. LLMs can also simulate user journeys considering human-centric factors, but this necessitates resources such as app reviews, persona-based use cases, and accessibility guidelines. For emerging technologies, LLMs need regular updates, a challenging task since automated solutions might be affected by these updates. The use of LLMs in requirements elicitation also warrants ethical scrutiny. LLMs may introduce or perpetuate biases as they are trained on vast internet data. Ensuring the ethical use of LLMs means avoiding biases and guaranteeing that the stakeholders' inputs are managed according to data privacy and security guidelines. LLM output should be viewed as complementary to human efforts. Requirements analysts bring domain expertise, cultural awareness, nuanced understanding, and empathetic interactions to the table, ensuring that software requirements cater to the diverse and evolving needs of end-users. This synergy of humans and generative AI is crucial in human-centric software development. 
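To illustrate how such persona-conditioned elicitation could be scripted, the sketch below assembles a prompt in the "Context, Task and Expected Output" style discussed earlier. The helper name, the persona fields and the commented-out `call_llm` client are illustrative assumptions rather than part of any specific LLM API; the chapter's own example prompt follows.

```python
import json

def build_elicitation_prompt(project_brief: str, persona: dict, n_requirements: int = 10) -> str:
    """Assemble a persona-conditioned elicitation prompt in the
    'Context, Task and Expected Output' style."""
    return "\n".join([
        f"Context: {project_brief}",
        "Act and respond as a user of this app with the persona below (JSON): "
        + json.dumps(persona),
        f"Task: From this user's perspective, elicit up to {n_requirements} user requirements.",
        "Expected Output: one requirement per line, formatted as "
        "'<id> | <user-story-style requirement> | <rationale>'.",
    ])

# Illustrative persona; the fields mirror the example prompt below.
persona = {"name": "Jane Doe", "age": "65", "gender": "Female",
           "location": "Canada", "occupation": "Retired", "work": "sedentary"}

prompt = build_elicitation_prompt(
    "ActApp is a real-time app that helps T2D patients keep an active lifestyle "
    "with timely reminders for workouts and disease management.", persona)

# response = call_llm(prompt)  # hypothetical client call; any chat-style LLM API can be used here
print(prompt)
```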
**Example Prompt for requirements generation.**_I am developing an app called ActApp. ActApp is a real-time application for T2D patients to ensure an active lifestyle. The app gives timely reminders for working out, health & disease management. Act and respond as an ActApp user with the persona provided below in JSON format. The main aim is to elicit the requirements from your perspective. The generated requirements should each be associated with a unique id, and rationale._ _{"persona": {"name": "Jane Doe", "age": "65", "gender": "Female", "location": "Canada", "occupation": "Retired", "medical info": ..., "lifestyle": ..., "goals": ..., "work": "sedentary", "challenges": ...}}_ _Example._ For the ActApp, LLMs are used to gather information from various stakeholders, including patients and carers. The agent can conduct virtual interviews with the stakeholders (for a given persona, as exemplified below), asking targeted questions to identify their needs, preferences, and concerns. For instance, the agent may ask users about desired features and data privacy concerns. Additionally, LLMs can analyze and synthesize information from online forums, social media, reviews from similar apps, and research articles on disease management to extract insights into common challenges patients face and best practices for care. This information can be used to generate preliminary requirements (e.g., R1 and R2 below), which can be refined in later stages. **ActApp Example Information and Early Requirements.** _Key stakeholders (identified based on initial app ideas):_ Patients, carers, app developers, ML scientists, and healthcare professionals, e.g., endocrinologists. _R1. The patients should receive a notification to stand up and move around if they have been sitting for long._ _R2. The patients should not receive notifications when busy._ [Strengths] * _Uncovering Unknowns_: Can suggest alternative ideas that analysts might otherwise miss, leading to uncovering unknowns. * _Efficient Data Processing_: Facilitate round-the-clock elicitation, rapidly processing large volumes of elicitation data in varied formats. * _Domain Knowledge_: Can rapidly absorb and understand domain-specific literature and automate tasks based on the absorbed literature. * _Assisting Multilingual and Multicultural Stakeholders_: Can accurately translate complex technical jargon into plain language and aid stakeholders' communication even with diverse backgrounds. [Weaknesses] * _Lack of Empathy and Nuance_: Do not possess human empathy and might miss out on emotional cues or implicit meanings. * _Lack of Domain Expertise_: While LLMs understand domain knowledge, they cannot replace the intuition and experience of domain experts. * _Misinterpretation Risks_: The potential for misinterpreting context or over-relying on existing training data without considering unique project nuances. [Opportunities] * _Real-time Documentation and Processing_: Can document requirements and analyze feedback in real time, ensuring thoroughness and accuracy. * _Human-centric Elicitation_: By analyzing diverse user feedback, LLMs can ensure all user needs are considered, promoting a holistic approach to elicitation. [Threats] * _Over-reliance and Trust Issues_: Excessive dependence might lead to missing human-centric insights, and some stakeholders might hesitate to engage with AI. * _Data Security and Privacy Concerns_: Eliciting requirements via LLMs could raise data confidentiality issues, especially with sensitive information (e.g., in public LLMs-based agents like ChatGPT and Bard). 
* _Potential Biases_: May inadvertently introduce or perpetuate biases in the elicitation process if trained on biased data or past flawed projects. * _Regular Updates and Compatibility_: Given the stochastic nature of LLMs, the regular updates might lead to technical issues and inconsistency in project requirements. On the other hand, outdated LLMs are suboptimal for RE. ## 4 Requirements Specification ### Specification Tasks Requirements Specification translates the raw, elicited requirements information into structured and detailed documentation, serving as the system design and implementation blueprint. LLMs can contribute to this process by helping to generate well-structured requirements documents that adhere to established templates and guidelines, e.g., the'shall' style requirements, user story formats, EARS template [18], or specific document templates, e.g., VOLERE [21]. Given a project's context, the informal NL requirements need to be converted into structured specifications - both what the system should do (functional requirements) and the quality attributes or constraints the system should possess (non-functional requirements). Requirements analysts must maintain consistency in terminology and style throughout the document to enhance readability and clarity. In this stage, requirements can be prioritized considering stakeholder needs, project constraints, and strategic objectives. This phase is exacting, as ambiguities or errors can lead to significant project delays and escalated costs in later stages. Moreover, it is essential to balance the level of detail (too granular or too abstract) and ensure that non-functional requirements like security and usability are adequately addressed and not sidelined. Additional tasks such as generating requirements glossary, examples and rationale, and developing user personas to ensure that the human-centric aspects are duly covered are often performed during or immediately after requirements specification. ### Role of LLMs LLMs can streamline the specification process. The unstructured requirements from the elicitation stage can be automatically formatted into structured templates like EARS or user stories (see the example prompt below for EARS and the example for user stories). They can further assist in categorizing requirements into functional and non-functional and classifying NFRs like performance, ethical requirements, and usability. LLMs can automate other tasks during specification, e.g., generating glossary, rationale and examples, developing personas [29]. Another advantage of LLMs is their ability to cross-check requirements against existing standards, regulatory guidelines, or best practices. For a health-focused app like ActApp, LLMs can ensure alignment with health data privacy standards and medical device directives. LLMs can also suggest requirements prioritization by analyzing technical dependencies, project goals, and historical data. However, generating requirements prioritization requires several SE roles and deep-seeded expertise. Hence, the results produced by LLMs might be inaccurate. On similar lines, while LLMs can enhance the speed and consistency of specification, there is a risk of 'over-automation', i.e., overlooking some crucial aspects or over-trusting the requirements produced by LLMs. For instance, determining the criticality of specific NFRs--like how secure a system needs to be or how scalable--often requires human expertise. 
LLMs can aid the process, but decisions should be validated by domain experts familiar with the project context. Similarly, for compliance issues, it is essential to have domain experts validate the results. **Example Prompt.** Using the EARS template defined by the BNF grammar below, generate the \(<\)requirement\(>\) from the unformatted requirement - "The patients should not receive notifications when busy." \(<\)requirement\(>::=\)\(<\)ubiquitous\(>|<\)event-driven\(>|<\)state-driven\(>|<\)optional\(>|<\)unwanted\(>\)\(<\)ubiquitous\(>::=\)"The system shall \(<\)action\(>\)." \(<\)event-driven\(>::=\)"When \(<\)event\(>\), the system shall \(<\)action\(>\)." \(<\)state-driven\(>::=\)"While \(<\)state\(>\), the system shall \(<\)action\(>\)." \(<\)optional\(>::=\)"The system shall \(<\)action\(>\)." \(<\)unwanted\(>::=\)"The system shall \(<\)preventive-action\(>\) to \(<\)unwanted-outcome\(>\)." \(<\)action\(>::=\)\(<\)verb-phrase\(>\) \(<\)event\(>::=\)\(<\)noun-phrase\(>\) \(<\)state\(>::=\)\(<\)noun-phrase\(>\) \(<\)preventive-action\(>::=\)\(<\)verb-phrase\(>\) \(<\)unwanted-outcome\(>::=\)\(<\)noun-phrase\(>\) \(<\)verb-phrase\(>::=\)"a verb phrase" \(<\)noun-phrase\(>::=\)"a noun phrase" **Example Output:** "When patient is driving, ActApp shall not send notifications." _Example._ In ActApp, the LLMs can generate refined requirements as user stories (desired by ActApp team members). The requirements document may include sections such as an introduction, a description of ActApp stakeholders, a list of functional and non-functional requirements, a list of ActApp features with priorities, and any constraints or assumptions related to the development process. For non-functional requirements, such as data privacy for patients' health information, LLMs can cross-reference with regulations, e.g., HIPAA or GDPR to ensure compliance [1]. **ActApp Example (as user story for functional requirements).** _R1.1. As a user, I want to receive a notification to move if I have been sitting for 60 minutes, so that I will be active._ _R1.2. As a carer, I want ActApp to notify me if the patient continues to remain inactive after receiving a notification to move, so that I can intervene._ _NFR1.1: The app shall encrypt all data during transmission and storage to ensure patient privacy and comply with GDPR guidelines._ We note that for SWOT analysis in subsequent phases, we attempt to reduce overlap (to the best extent possible). For instance, almost all threats from elicitation are also applicable for specification. [Strengths] * _Automation_: Can streamline converting raw requirements into structured formats, such as EARS or user stories. Can generate additional artefacts, e.g., glossaries and personas, from converted requirements and domain information. * _Compliance Check_: Can cross-reference requirements against standards and regulatory guidelines, ensuring initial compliance. * _Requirement Classification_: Can categorize requirements into functional and non-functional, further classifying them. * _Initial Prioritization_: Can suggest requirement prioritization based on dependencies, project goals, and historical data. [Weaknesses] * _Depth of Domain Expertise_: While LLMs have vast knowledge, they might not fully capture the nuances of specialized domains. * _Over-Automation Risk_: Sole reliance on LLMs might lead to overlooking crucial requirements or business constraints. 
* _Ambiguity Handling_: May sometimes struggle with ambiguous or conflicting requirements, necessitating human intervention. [Opportunities] * _Continuous Feedback_: Can aid in real-time documentation and specification updates as requirements evolve. * _Human-Centric Focus_: Can help maintain a human-centric outlook in the specification stage by generating alternate requirements for different user groups. [Threats] * _Ambiguities in Structured Requirements_: Can generate requirements in specific formats. However, the generated requirements can have unintentional ambiguities (if the model is not fine-tuned adequately) or other quality issues (e.g., inconsistencies due to the limited'memory' of LLMs). * _Over-specification_: Known to be verbose [31], which can easily lead to over-defined requirements, and consequently lead to a rigid system design. * _Missing Non-functional Requirements_: Non-functional (unlike functional) requirements rely on a deeper understanding of the system's context, which LLMs might miss or inadequately address. ## 5 Requirements Analysis ### Analysis Tasks Requirements analysis focuses on understanding, evaluating, and refining the gathered requirements to ensure they are of high quality, i.e., coherent, comprehensive, and attainable, before moving to the design and implementation stages. An integral component of this phase is the automated evaluation of requirements quality. This includes addressing defects like ambiguity resolution, ensuring consistency, and guaranteeing completeness. Deficiencies in this phase can affect subsequent artefacts, leading to project delays, budget overruns, and systems misaligned with stakeholder expectations. The main challenges of NL requirements are ambiguity, incompleteness, inconsistency and incorrectness, which lead to misinterpretations, untestable requirements, untraced requirements to their origin, no consensus among stakeholders on what needs to be built, and conflicting requirements. Constantly evolving requirements further exacerbate all these issues. At times, documented requirements or underlying assumptions might inadvertently overlook potential risks or dependencies. In such instances, it becomes crucial to identify these risks and introduce new requirements as countermeasures. Analysis of requirements, for instance, getting an agreement on conflicting requirements requires negotiation. Negotiation is the key to resolving all conflicts, and the stakeholders converge on a unified set of requirements. From a human-centric RE perspective, the analysis stage must prioritize users' emotional, cultural and accessibility needs. This entails scrutinizing user feedback for inclusivity, vetting ethics and bias concerns--especially in AI-based software systems [3]--and analyzing requirements against prevailing accessibility guidelines. ### Role of LLMs LLMs come into play as powerful tools to automate the quality evaluation process: 1. **Automated Evaluation for Quality Assurance:** LLMs can automatically assess the quality of requirements, flagging any ambiguities, vague terms, inconsistencies, or incompleteness, and highlight gaps or overlaps. 2. **Risk Identification and Countermeasure Proposal:** LLMs, when equipped with domain knowledge, can identify potential risks associated with requirements or their underlying assumptions. Drawing from historical data or known risk patterns, LLMs can suggest new requirements that act as countermeasures to mitigate those risks, ensuring system design and operation robustness. 3. 
**Conflict Resolution and Negotiation:** By identifying areas of contention, LLMs can facilitate the negotiation process. Multiple LLM agents can be employed to negotiate the requirements, suggest compromises, and simulate various scenarios, helping stakeholders converge on a unified set of requirements. 4. **Human-centric Requirements Enhancement**: LLMs can evaluate requirements to ensure they cater to diverse user needs, accessibility standards, and user experience guidelines. LLMs can also suggest requirements that enhance the software's usability or accessibility based on user personas or feedback. Moreover, they can evaluate requirements for biases or potential ethical concerns, ensuring that the software solution is inclusive and ethically sound. 5. **Change Impact Analysis:** LLMs offer real-time feedback in requirements refinement, enhancing the efficiency of the iterative analysis and maintaining stakeholder alignment. The change impact analysis process implemented as continuous feedback cycle via LLMs ensures consistency. LLMs can further proactively predict requirements changes improving the quality of requirements. **Example Prompt.** _Context:_ For the ActApp system, we need to negotiate and prioritize requirements (FR1-FR7 and NFR1-NFR5) that ensure the system caters to the patient's health needs while maintaining usability and data privacy. _Task:_ Create two agents: Agent1 (A1) represents the primary user (a T2D patient). Agent2 (A2) represents the system's software architect. A1 and A2 will negotiate and discuss FR1 - FR7 to determine a priority list. During this negotiation, A1 will focus on the user experience, health benefits, and practical needs, while A2 will consider technical feasibility, integration with existing systems, and the architectural perspective. The agents can sometimes have differing opinions, leading to a more nuanced and realistic discussion. No decisions should violate NFR1 - NFR5. _Expected Output Format:_ FRs in decreasing order of priority, and include the rationale for priority order based on the negotiation outcomes between A1 and A2. _Example._ In the context of ActApp, LLMs can (i) identify and resolve ambiguities or inconsistencies in the requirements, such as conflicting preferences of patients or unclear feature descriptions; (ii) highlight any dependencies and requisites, e.g., a secure data storage system to support medical data storage; and (iii) generate missed ethical and regulatory concerns related to data storage. **ActApp Analysis Examples.** Identify the missing information from R1.2 and NFR1.1 in Section 4, wherein in R1.2 - the information on how long after the initial notification the system should wait before notifying the carer is missing, and in NFR1.1, no information about data retention and deletion were specified, with regards to GDPR. [Strengths] _Automation Support_: Can automatically and efficiently assess and enhance the quality of requirements, addressing ambiguities, inconsistencies, incompleteness, potential risks and countermeasures, and conflicts. _Consistency_: Unlike human analysts who might have varying interpretations or might overlook certain aspects due to fatigue or bias, LLMs provide consistent analysis, ensuring uniformity in the analysis process. _Historical Data Analysis_: Can draw insights from historical project data, identifying patterns, common pitfalls, or frequently occurring issues and provide proactive analysis based on past experiences. 
_Support Evolution and Continuous Learning_: Provide real-time feedback during iterative requirements analysis, predicting possible changes and ensuring consistency. As LLMs are exposed to more data, they can continuously learn and improve, ensuring their analysis is refined. [Weaknesses] _Lack of Nuanced Domain Understanding_: Can process vast amounts of information but might miss or get confused on subtle nuances or domain context that a human analyst would catch, leading to potential oversights. * _Difficulty with Ambiguities_: Struggle with inherently ambiguous or conflicting requirements, potentially leading to misinterpretations for all analysis tasks. * _Limited Context/Memory_: Have a limited "window" of context they can consider at any given time. This means that when analyzing large requirements documents as a whole, they might lose context on earlier parts of the document, leading to potential inconsistencies or oversights. They don't inherently "remember" or "understand" the broader context beyond this window, which can be challenging when ensuring coherence and consistency across the document. [Opportunities] * _Continuous Refinement_: As requirements evolve, LLMs can provide real-time feedback on the quality and consistency of these requirements. * _Integration with Development Tools_: Can be integrated with software development environments, offering real-time requirement quality checks during the software development lifecycle. * _Collaborative Platforms_: Can facilitate better stakeholder collaboration by providing a unified platform for requirements analysis, negotiation, and refinement. [Threats] * _Over-automation_: Risk of sidelining human expertise in favor of automated checks, potentially leading to overlooked requirements defects. * _Regulatory Issues_: Certain industries, domains or certification bodies might have regulatory or compliance concerns related to using LLMs for critical RE tasks. ## 6 Requirements Validation ### Validation Tasks Requirements validation ensures that the documented requirements accurately represent the stakeholders' needs and are ready for the subsequent design and implementation stages. Validating requirements often involves intricate tasks like reviewing them with stakeholders, inspecting them for defects, ensuring their traceability to their origins (or other artefacts), and defining clear acceptance criteria and test scenarios. The primary challenge in the validation phase revolve around ensuring the requirements are devoid of gaps due to stakeholders'real' expectations and tacit assumptions. Requirements might be interpreted differently by stakeholders, leading to potential misalignments. The dynamic nature of projects means that requirements evolve, further complicating the validation process. Occasionally, requirements or their underlying assumptions might inadvertently miss certain constraints or dependencies. This leads to further issues for validation tasks. In such cases, it is imperative to identify these gaps and refine the requirements accordingly. ### Role of LLMs LLMs can assist in the validation phase in several nuanced ways. As highlighted in the Analysis phase, LLMs can aid in the manual review and inspections by flagging potential ambiguities, inconsistencies, or violations based on pre-defined validation heuristics. LLMs can be utilized to simulate stakeholder perspectives, enabling analysts to anticipate potential misinterpretations or misalignments. 
For instance, by analyzing historical stakeholder feedback, LLMs can predict potential areas where clarifications might be sought from the perspective of a given stakeholder. With their ability to process vast amounts of data quickly, LLMs can assist in requirements traceability to other artefacts, e.g., design documents and regulatory codes. LLMs can further assist in formulating clear and precise acceptance criteria based on the documented requirements. They can also propose test scenarios, ensuring a comprehensive validation suite. Furthermore, LLMs can scan the requirements to identify and flag any overlooked human-centric aspects, constraints or dependencies, ensuring a more comprehensive validation. While LLMs can facilitate most validation tasks, as noted above, a major weakness of LLMs in this context is that the validation tasks often require an overall picture of the project, domain and stakeholders' viewpoints - it is extremely difficult for LLMs to work at that level of abstraction, which typically requires manual effort from numerous stakeholders. [Strengths] * _Alternate Perspectives_: Can simulate multiple stakeholder perspectives and ensure that all requirements are vetted from different viewpoints. * _Proactive Feedback_: Can provide real-time feedback during validation sessions, enhancing stakeholder engagement. * [Weaknesses] * _Depth of Context Understanding_: While adept at processing text, LLMs are not able to process the tacit knowledge in RE, the domain and the business context. [Opportunities] * _Interactive Validation Workshops_: Can be integrated into workshops to provide instant feedback, enhancing the validation process. * _Gap Analysis Enhancements_: Can assist in refining requirements by highlighting overlooked aspects or potential improvements. * _(Semi-)Automated Acceptance and Testing Artefacts Generation_: Can lead to a substantial effort saving in V&V activities and concomitantly higher quality software products by generating acceptance criteria and test scenarios. * [Threats] * rendering the added value for automation moot. * _Stakeholder Misrepresentation_: Might not accurately capture the unique concerns or priorities of specific stakeholders (when simulating stakeholder perspectives), leading to a skewed validation process. [style=] **Example Prompt.** _Context:_ For the ActApp system, we need to perform the validation on all the requirements specified in the system (FR1 - FR50) and (NFR1 - NFR28). The goal is to identify the gaps in all the requirements from three different stakholders' perspectives, the software developer, the ML scientist and the product owner. _Task:_ Imagine all three stakeholders are answering this question. For each requirement, all stakeholders will write down the gaps in the requirement based on their role, and then share it with the group. Then all stakeholders will review the inputs in the group and move to the next step. If any expert doesn't have any gap identified or a concern they can skip the discussion on that requirement. _Expected Output Format:_ For all gaps agreed upon by all stakeholders, export the issue with the requirement id. _Example._ In ActApp, LLMs can generate acceptance criteria. Also, LLMs can uncover gaps - in our preliminary LLMs evaluation, the ActApp team figured it needed to comply with Australia's Therapeutic Goods Act (TGA) regulation. 
**Example Acceptance Criteria.** _R1.1-AC1 Accurately detect when the user has been sitting for 60 continuous mins._ _R1.1-AC2 Notifications can be toggled on or off by user._ _R2-AC1 Accurately identifies when the user is driving._ ## 7 Preliminary Evaluation We conducted a preliminary evaluation of LLMs in RE on a real-world system (ActApp). We note that the purpose of this evaluation was not to conduct a comprehensive assessment of LLMs in RE. Instead, we focused primarily on the feasibility of integrating LLMs into requirements elicitation. The rationale is that the applicability of LLMs to the remaining RE stages is relatively intuitive, thanks in part to the extensive history and well-established methodologies of applying NLP techniques to these stages [28, 30]. Thus we deemed exploring LLMs in requirements elicitation to be the essential first step. _Data Collection Procedure._ The main goal of our data collection procedure was to establish the user requirements in ActApp and analyze the performance of ChatGPT for requirements elicitation. Our team had access to three ActApp experts - the project manager, an ML scientist, and a software engineer. These experts met with a researcher, Katelyn (pseudonym), to articulate the project's focus. The meetings were part of a broader context of understanding the RE processes. ChatGPT was not mentioned to experts to avoid bias. Katelyn engaged in four two-hour meetings with the experts, where they presented an overview of the project, system users, user requirements, and software features. We used ChatGPT to simulate the initial stages of requirements elicitation, wherein requirements engineers acquire project knowledge from stakeholders, review existing documentation, and formulate user requirements and core functionalities. The process involved four participants: Jorah and Jon, both seasoned software/requirements engineers, and Arya and Aegon, both early-stage RE and NLP research students. They were given a project overview from Katelyn and asked to start a ChatGPT session, introducing themselves as developers of the ActApp project. Guided by the project brief, they interacted with ChatGPT to elicit user-story-style requirements over a 45-minute session. Subsequently, Katelyn examined the requirements generated by the participants using ChatGPT against the actual project requirements. _Results._ Overall, 20 key user requirements were identified in ActApp by Katelyn with the experts. Katelyn mapped the requirements Jorah, Jon, Arya and Aegon elicited against these 20 requirements. Each requirement in the elicited set was categorized as a full match, partial match, or no match. It should be noted that a 'full match' did not imply an exact syntactic duplication of the original requirement but rather captured its essence effectively. Likewise, a 'partial match' indicated that only a part of the original requirement's essence was captured. We note that in our calculation of precision and recall, each full match is weighted as 1 true positive (TP) and each partial match is weighted as 0.5 TP. Katelyn further classified all 'no match' requirements as superfluous or potentially relevant (for further expert vetting if required). Table 1 shows the overall results from the four participants. The results clearly show the significance of experience while using ChatGPT in this preliminary evaluation. 
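The weighting scheme described above, with each full match counted as one true positive and each partial match as half, can be reproduced directly from the counts in Table 1. The short sketch below recomputes precision and recall against the 20 expert-identified requirements; the values match the rounded percentages reported in the table.

```python
# Weighted precision/recall: full match = 1 TP, partial match = 0.5 TP.
def weighted_precision_recall(elicited, full, partial, ground_truth=20):
    tp = full + 0.5 * partial
    return tp / elicited, tp / ground_truth

# Counts copied from Table 1: (elicited, full matches, partial matches).
participants = {"Jorah": (14, 11, 1), "Jon": (17, 7, 4),
                "Arya": (14, 3, 2), "Aegon": (27, 2, 4)}

for name, counts in participants.items():
    precision, recall = weighted_precision_recall(*counts)
    print(f"{name}: precision = {precision:.1%}, recall = {recall:.1%}")
```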
While none of the participants could elicit most requirements, it is important to note that with a project brief and one interaction session, the experienced participants could get almost half the relevant requirements, emphasising the feasibility of LLMs for RE.

## 8 Lessons Learned

Our preliminary evaluation provided insights and highlighted challenges noted below.

_Role of Prompts and Contextual Information._ LLMs depend heavily on comprehensive prompts and the availability of contextual information to generate meaningful output. Slightly different prompts can produce very different outputs. A thorough empirical evaluation of prompt engineering is necessary for employing LLM agents.

_Experience Matters._ Experienced requirements engineers were more successful in formulating prompts, interpreting responses, and getting quality output, despite the project background being uniform across participants. This highlights the importance of experience and training in RE teams.

_LLMs Capabilities._ Our preliminary evaluation underlined the capability of LLMs to discover 'unknown' requirements, addressing a significant challenge in RE. We found four 'potentially relevant' requirements for future stages in ActApp, which were not part of the original set, from three participants. We surmise that LLMs may assist in interpreting and generating text for varied stakeholders, which can be key in reducing communication barriers inherent in diverse project teams. However, managing many 'false positive' candidate requirements will require care to ensure engineers are not overloaded with many irrelevant or semi-inaccurate requirements.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline **Participant** & **Elicited** & **Full** & **Partial** & **Potentially** & **Superfluous/** & **Precision** & **Recall** \\ & & **Match** & **Match** & **Relevant** & **Redundant** & & \\ \hline Jorah & 14 & 11 & 1 & 0 & 2 & 82\% & 58\% \\ \hline Jon & 17 & 7 & 4 & 2 & 4 & 53\% & 45\% \\ \hline Arya & 14 & 3 & 2 & 1 & 8 & 29\% & 20\% \\ \hline Aegon & 27 & 2 & 4 & 1 & 20 & 15\% & 20\% \\ \hline \end{tabular} \end{table} Table 1: Evaluation Results

_LLM Problems._ LLMs have some inherent issues, such as systematic inaccuracies or stereotypes in the output (influenced by the training data [7]), and a limited context length, e.g., ChatGPT has a limit of 32K tokens, which, although often sufficient, can still make it difficult to process large documents or to maintain task context in a session. All participants reported issues with maintaining the context of the ActApp system in the evaluation session and noticed inaccuracies.

_Domain Understanding._ RE requires an excellent understanding of the underlying domain for eliciting and specifying correct and complete requirements. An LLM's training on specific domain knowledge may be limited; this needs to be addressed by incorporating domain knowledge via experts, other sources, or fine-tuned LLMs. Access to large amounts of training data to fine-tune a custom LLM may be a challenge.

_Automation Bias._ Humans often display unfounded trust in AI [4], e.g., in the LLM-generated requirements in our case. For example, upon completing the session, Arya and Aegon displayed a remarkable degree of confidence in their elicited requirements.

_Security, Privacy and Ethical Issues._ Requirements are by their very nature mission-critical for software engineering and incorporate much sensitive information.
Disclosure via public LLMs may result in IP loss, security breaches in deployed systems, organisational and personal privacy loss, and other concerns. Who 'owns' requirements generated by LLMs from training data from unknown sources?

## 9 Conclusion

In this chapter, we explored the transformative potential of LLMs at various stages of RE. Our exploration argued that LLMs have the potential to enhance several RE tasks by automating, streamlining, and augmenting human capabilities. Their capability to simulate stakeholder perspectives, generate alternative requirements, address requirements quality, cross-reference with standards, and generate structured documentation is revolutionary. However, using a detailed SWOT analysis, we also cautioned against unchecked optimism: LLMs are not the 'silver bullet' for solving all RE problems and in fact pose threats of their own. Specific challenges and threats associated with their application in RE include understanding deep-seated domain nuances, understanding the overall context, over-automation, over-specification and losing the human-centric view of requirements. The chapter further outlines lessons learned from applying LLMs to RE in a real-world project, the ActApp app for T2D patients.
2307.08027
Multi-Object Discovery by Low-Dimensional Object Motion
Recent work in unsupervised multi-object segmentation shows impressive results by predicting motion from a single image despite the inherent ambiguity in predicting motion without the next image. On the other hand, the set of possible motions for an image can be constrained to a low-dimensional space by considering the scene structure and moving objects in it. We propose to model pixel-wise geometry and object motion to remove ambiguity in reconstructing flow from a single image. Specifically, we divide the image into coherently moving regions and use depth to construct flow bases that best explain the observed flow in each region. We achieve state-of-the-art results in unsupervised multi-object segmentation on synthetic and real-world datasets by modeling the scene structure and object motion. Our evaluation of the predicted depth maps shows reliable performance in monocular depth estimation.
Sadra Safadoust, Fatma Güney
2023-07-16T12:35:46Z
http://arxiv.org/abs/2307.08027v1
# Multi-Object Discovery by Low-Dimensional Object Motion ###### Abstract Recent work in unsupervised multi-object segmentation shows impressive results by predicting motion from a single image despite the inherent ambiguity in predicting motion without the next image. On the other hand, the set of possible motions for an image can be constrained to a low-dimensional space by considering the scene structure and moving objects in it. We propose to model pixel-wise geometry and object motion to remove ambiguity in reconstructing flow from a single image. Specifically, we divide the image into coherently moving regions and use depth to construct flow bases that best explain the observed flow in each region. We achieve state-of-the-art results in unsupervised multi-object segmentation on synthetic and real-world datasets by modeling the scene structure and object motion. Our evaluation of the predicted depth maps shows reliable performance in monocular depth estimation. ## 1 Introduction Finding objects on visual data is one of the oldest problems in computer vision, which has been shown to work to great extent in the presence of labeled data. Achieving it without supervision is important given the difficulty of obtaining pixel-precise masks for the variety of objects encountered in everyday life. In the absence of labels, motion provides important cues to group pixels corresponding to objects. The existing solutions use motion either as input to perform grouping or as output to reconstruct as a way of verifying the predicted grouping. The current methodology fails to incorporate the underlying 3D geometry creating the observed motion. In this work, we show that modeling geometry together with object motion significantly improves the segmentation of multiple objects without supervision. Unsupervised multi-object discovery is significantly more challenging than the single-object case due to mutual occlusions. Therefore, earlier methods in unsupervised segmentation focused on separating a foreground object from the background whereas multi-object methods have been mostly limited to synthetic datasets or resorted to additional supervision on real-world data such as sparse depth [15]. While sparse-depth supervision can be applied to driving scenarios [15], depth information is not typically available on common video datasets. Moreover, video segmentation datasets such as DAVIS [49, 50] contain a wide variety of categories under challenging conditions such as appearance changes due to lighting conditions or motion blur. The motion information can be obtained from video sequences via optical flow. Optical flow not only provides motion cues for grouping [65] but can also be used for training on synthetic data without suffering from the domain gap while transferring to real data [64]. The problems in optical flow prediction on real-world data can be mitigated to some extent by relating flow predictions from multiple frames [64]. In addition to problems in predicting optical flow, requiring flow as input prohibits the application of the method on static images. Another line of work [11, 33] uses motion for supervision at train time only. Based on the observation that objects create distinctive patterns in flow, initial work [11] fits a simple parametric model to the flow in each object region to capture the object motion. This way, the network can predict object regions that can potentially move coherently from a single image at test time. 
There is an inherent ambiguity in predicting motion from a single image. Therefore, the follow-up work [33] predicts a distribution of possible motion patterns to reduce this ambiguity. This also allows extending it to the multi-object case by mitigating the over-segmentation problem of the initial work [11]. In this work, we propose to model pixel-wise geometry to remove ambiguity in reconstructing flow from a single image. Optical flow is the difference between the 2D projections of the 3D world in consecutive time steps. By modeling the 3D geometry creating these projections, we directly address the mutual occlusion problem due to interactions of multiple objects. This problem has been crudely addressed by previous work with a depth-ordered layer representation [64]. Instead of assuming a single depth layer per object, we predict pixel-wise depth which provides more expressive power in explaining the observed motion. Furthermore, we do not use flow as input during inference, allowing us to evaluate our method on single-image datasets. Recent work [5] showed that motion resides in a low-dimensional subspace, and its reconstruction can be used to supervise monocular depth prediction. Despite many possible flow fields, the space of possible flow fields is spanned by a small number of basis flow fields related to depth and independently moving objects. While [5] focuses on modeling camera motion for quantitatively evaluating depth in static scenes, it also points to the fact that the object motion can be similarly modeled in a low-dimensional subspace by simply masking the points in the object region. Given the difficulty of predicting pixel-wise masks, simple object embeddings are used to cluster independently moving objects. We instead predict the object regions jointly with depth to find the low-dimensional object motion that best explains the observed flow in each region. Our approach works extremely well on synthetic Multi-Object Video (MOVi) datasets [23], significantly outperforming previous work, especially in more challenging MOVi-{C,D,E} partitions and performing comparably on visually simpler MOVi-A due to difficulty of estimating depth. We use motion only for supervision at train time, therefore our method can be successfully applied to still images of CLEVR [31] and ClevrTex[34] and shows state-of-the-art performance. More impressively, our method can segment multiple objects on real-world videos of DAVIS-2017 [50] from a single image at test time, exceeding the performance of the state-of-the-art that uses flow from multiple frames as input [64]. In addition to evaluating segmentation, we show that our method can also reliably predict depth in real-world self-driving scenarios on KITTI [21]. ## 2 Related Work Basis Learning.Early work showed that optical flow estimation due to camera motion can be constrained using a subspace formulation for flow [28]. Basis learning has been used as a regularization in low-level vision, unifying tasks such as depth, flow, and segmentation [56]. PCAFlow [62] builds a higher dimensional flow subspace from movies to represent flow as a weighted sum of flow bases. Recent work [68] learns the coefficients to combine eight pre-defined flow bases for homography estimation. Motion as Input.Most of the work in motion segmentation focuses on the single-object case. 
While earlier work uses traditional methods to cluster pixels into similar motion groups [6, 35, 48, 63], later methods train deep neural networks which take flow as input and predict segmentation as output [13, 58, 59]. Another work [67] uses the distinctiveness of motion in the case of foreground objects by proposing an adversarial setting to predict motion from context. Segmenting objects in camouflaged settings can be achieved by modeling background motion to remove its effect and highlight the moving foreground object [3, 4, 38]. Recent work uses consistency between two flow fields computed under different frame gaps for self-supervision [65]. The most relevant to our work is OCLR [64] which extends motion segmentation to multiple objects by relating motion extracted from multiple frames using a transformer in a layered representation. In this work, we show that better results can be achieved on real data even from a single image by modeling pixel-wise geometry. Motion for Supervision.While using motion only as input works well where appearance fails, e.g. the camouflage datasets, RGB carries important information that might be missing in flow. DyStaB [66] trains a dynamic model by exploiting motion for temporal consistency and then uses it to bootstrap a static model which takes a single image as input. A single image network is used to predict a segmentation in [40] and then the motion of each segment is predicted with a two-frame motion network. While image warping loss is used in [40] for self-supervision, recent work [11, 33] uses flow reconstruction loss by assuming the availability of flow at train time only. GWM [11] segments foreground objects by fitting an approximate motion model to each segment and then merging them using spectral clustering. The follow-up work [33] extends it to multiple objects by predicting probable motion patterns for each segment with a distribution. We also reconstruct flow for supervision but differently, we account for 3D to remove the ambiguity in reconstructing motion from a single image. The most relevant to our work is the previous work that uses flow as a source of supervision for depth [5] or segmentation [11, 34]. In this work, we model both depth and segmentation with supervision from motion. Multi-Object Scene Decomposition.Our work is also related to scene decomposition approaches which are mostly evaluated on synthetic datasets. The earlier image-based decomposition approaches such as MONet [7] and IO-DINE [24] use a sequential VAE structure where the decomposition at a step can affect the remaining parts to be explained in the next step. GENESIS [17] follows an object-centric approach by accounting for component interactions, which is extended to more realistic scenarios with an autoregressive prior in the follow-up work [18]. Slot Attention [42] uses an iterative attention mechanism to decompose the image into a set of slot representations. A hierarchical VAE is used in [16] to extract symmetric and disentangled representations. There are also video-based approaches to multi-object scene decomposition. SCALOR [30] focuses on scaling generative approaches to crowded scenes in terms of object density. SIMNe [32] learns a factorized latent space to separate object semantics that is constant in the sequence from the background which changes at each frame according to camera motion. SAVi [36] extends Slot Attention [42] to videos and SAVi++ [15] extends it to real-world driving scenarios with sparse depth supervision. 
**Self-Supervised Monocular Depth Estimation.** Zhou et al. [70] train a pose network to estimate the pose between the frames in a sequence and jointly train it with the depth network. Godard et al. [22] improves the results with a better loss function and other design choices. Guizilini et al. [25] learn detail-preserving representations using 3D packing and unpacking blocks. Given instance segmentation masks, a line of work [9, 39] models the motion of objects in the scene in addition to the camera motion to go beyond the static-scene assumption. While the object masks are supervised using ground truth masks in [37], the masks are learned without supervision as an auxiliary output in [54] for better depth estimation. While they require multiple frames during inference, our approach can estimate masks from a single image. Additionally, our method does not use camera intrinsics. ## 3 Depth-Aware Multi-Object Segmentation The observed motion in 2D is the result of 3D scene structure and independently moving objects. By predicting the scene structure in terms of depth and locating independently moving objects, we can accurately reconstruct the optical flow corresponding to the observed motion in 2D. Towards this purpose, we use a low-dimensional parameterization of optical flow based on depth (Section 3.1). In this low-dimensional representation, we can accurately reconstruct flow from a rigid motion. We extend this parametrization to a number of rigidly moving objects to find the regions corresponding to objects (Section 3.2). See Fig. 1 for an overview of our approach. ### Low-Dimensional Motion Representation The space of all possible optical flow fields is very high-dimensional, i.e. in \(\mathbb{R}^{H\times W\times 2}\). However, conditioned on the scene structure, only a small fraction of all flow fields are possible. Previous work [27, 5] has shown that the set of possible instantaneous flows for a moving camera in a static scene forms a linear space with six basis vectors: \[\mathcal{B}_{0}=\{\mathbf{b}_{\mathbf{T}x},\mathbf{b}_{\mathbf{T}y},\mathbf{b }_{\mathbf{T}z},\mathbf{b}_{\mathbf{R}x},\mathbf{b}_{\mathbf{R}y},\mathbf{b}_ {\mathbf{R}z}\} \tag{1}\] These basis vectors correspond to translation and rotation along the \(x,y\), and \(z\) axes, respectively. For an image \(\mathbf{I}\in\mathbb{R}^{H\times W\times 3}\), the values of each basis vector \(\mathbf{b}_{i}\in\mathbb{R}^{H\times W\times 2}\) for a given pixel \((u,v)\) can be calculated as follows: \[\mathbf{b}_{\mathbf{T}x}=\begin{bmatrix}f_{x}\ d\\ 0\end{bmatrix},\quad\mathbf{b}_{\mathbf{R}x}=\begin{bmatrix}f_{y}^{\,-1}\,\bar {u}\,\bar{v}\\ f_{y}+f_{y}^{\,-1}\,\bar{v}^{2}\end{bmatrix}\] \[\mathbf{b}_{\mathbf{T}y}=\begin{bmatrix}0\\ f_{y}\ d\end{bmatrix},\quad\mathbf{b}_{\mathbf{R}y}=\begin{bmatrix}f_{x}+f_{x }^{\,-1}\,\bar{u}^{2}\\ f_{x}^{\,-1}\,\bar{u}\,\bar{v}\end{bmatrix}\] \[\mathbf{b}_{\mathbf{T}z}=\begin{bmatrix}-\bar{u}\ d\\ -\bar{v}\ d\end{bmatrix},\quad\mathbf{b}_{\mathbf{R}z}=\begin{bmatrix}f_{x}\ f_{y}^{\,-1}\,\bar{v}\\ -f_{y}\ f_{x}^{\,-1}\,\bar{u}\end{bmatrix} \tag{2}\] where \(f_{x},f_{y}\) are the focal lengths of the camera. For brevity, we define \(\bar{u}=u-c_{x}\) and \(\bar{v}=v-c_{y}\) to be the centered pixel coordinates according to \((c_{x},c_{y})\), the principal point of the camera. 
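For illustration, a minimal NumPy sketch of Eq. (2) is given below: it builds the six basis flow fields for an \(H\times W\) disparity map, assuming known intrinsics (the dependence on the focal lengths is removed in the paper via an 8-vector reparameterisation, as discussed next). The function and variable names are ours, not the authors' implementation.

```python
import numpy as np

def flow_bases(disparity, fx, fy, cx, cy):
    """Build the six camera-motion flow basis fields of Eq. (2).

    disparity: (H, W) array of inverse depth d.
    Returns an array of shape (6, H, W, 2), ordered as
    (b_Tx, b_Ty, b_Tz, b_Rx, b_Ry, b_Rz).
    """
    H, W = disparity.shape
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    ub, vb = u - cx, v - cy          # centred pixel coordinates (u-bar, v-bar)
    d = disparity
    zeros = np.zeros_like(d)

    b_Tx = np.stack([fx * d, zeros], -1)
    b_Ty = np.stack([zeros, fy * d], -1)
    b_Tz = np.stack([-ub * d, -vb * d], -1)
    b_Rx = np.stack([ub * vb / fy, fy + vb ** 2 / fy], -1)
    b_Ry = np.stack([fx + ub ** 2 / fx, ub * vb / fx], -1)
    b_Rz = np.stack([fx * vb / fy, -fy * ub / fx], -1)
    return np.stack([b_Tx, b_Ty, b_Tz, b_Rx, b_Ry, b_Rz])
```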
Figure 1: **Overview of our Approach.** From a single image, we use a segmentation and a depth network to predict a segmentation mask \(\mathbf{m}\) and a disparity map \(\mathbf{d}\). Based on these predictions, we construct the bases for the space of the possible optical flows for \(K\) distinctly moving regions on the image. Each moving region \(i\) is represented with a separate basis \(\mathcal{B}_{i}\). Given optical flow \(\mathbf{F}\) as input, either ground truth or estimated by an off-the-shelf method, we project it into \(\text{span}(\bigcup_{i=1}^{K}\mathcal{B}_{i})\). We use the distance between the input flow \(\mathbf{F}\) and the projected flow \(\hat{\mathbf{F}}\) to supervise depth and segmentation. During inference, our networks can be used to predict depth and segmentation from a single image.

With a slight abuse of notation, we do not write the basis vectors as a function of \((u,v)\) and use \(d\) to denote the disparity \(\mathbf{d}(u,v)\) at a pixel \((u,v)\). We train a monocular depth network to predict inverse depth, i.e. disparity \(\mathbf{d}\), from a single image. Then, the predicted disparity at each pixel is used to form the translation part of the basis vectors as shown in Eq. (2). Note that predicted disparity values do not affect the rotation but form the low-dimensional motion representation via translation. The depth network receives gradients directly from the flow reconstruction loss as explained next in Section 3.2. In Eq. (2), camera parameters including the principal point \((c_{x},c_{y})\) and the focal lengths \(f_{x}\), \(f_{y}\) are required to calculate the basis vectors. We assume the principal point to be at the center of the image. However, we do not assume the focal lengths to be known. Instead, we only assume that \(f_{x}=f_{y}\). In this case, as demonstrated by [5], we can rewrite \(\mathcal{B}_{0}\) as a set of \(8\) vectors that do not depend on the values of \(f_{x}\) and \(f_{y}\). For more details, please see Supplementary. Even without knowing the actual values of the focal lengths, we can obtain quite accurate depth predictions with supervision from flow (Section 4).

### Segmentation by Object Motion

We extend the formulation introduced in Section 3.1 to handle the instantaneous flow from multiple object motions. As stated in [5], for a rigidly moving object in the scene, there is an equivalent camera motion. Therefore the space of optical flow from a rigidly moving object is the same as the space of optical flow from camera motion restricted to points in the object. Consider a scene with \(K\) regions corresponding to moving parts including the background and multiple objects. If we represent each region \(i\in\{1,\dots,K\}\) with ones on a binary mask \(\mathbf{m}_{i}\in\{0,1\}^{H\times W\times 1}\), then a basis for the space of possible flows can be defined as follows: \[\mathcal{B}=\{\mathcal{B}_{1}\cup\mathcal{B}_{2}\cup\dots\cup\mathcal{B}_{K}\} \tag{3}\] where \(\mathcal{B}_{i}\) refers to the basis for the space of possible flows restricted to region \(i\) as: \[\mathcal{B}_{i}=\{\mathbf{m}_{i}\mathbf{b}\mid\mathbf{b}\in\mathcal{B}_{0}\}. \tag{4}\] We train a segmentation network to divide the image into coherently moving regions \(\mathbf{m}\in[0,1]^{H\times W\times K}\), representing soft assignments over \(K\) regions. We use the predicted mask \(\mathbf{m}_{i}\in[0,1]^{H\times W\times 1}\) of the region \(i\) to obtain the basis corresponding to that region according to Eq. (4).
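Continuing the sketch from Section 3.1, the per-region bases of Eqs. (3)-(4) and the flow reconstruction used for supervision (Eq. (5) below) can be written as follows. A plain NumPy least-squares fit stands in here for the differentiable projection used during training, which the paper details in the Supplementary; all names are ours.

```python
import numpy as np

def masked_bases(bases, masks):
    """Eqs. (3)-(4): restrict the camera-motion basis fields to each region.

    bases: (6, H, W, 2) basis flow fields, e.g. from the flow_bases() sketch above.
    masks: (K, H, W) soft region assignments m_i in [0, 1].
    Returns a (6*K, H*W*2) matrix whose rows span the flows that the K
    regions can jointly explain.
    """
    per_region = masks[:, None, :, :, None] * bases[None]   # (K, 6, H, W, 2)
    return per_region.reshape(per_region.shape[0] * 6, -1)

def flow_reconstruction_loss(flow, bases, masks):
    """Project the observed flow onto span(B) and return (F_hat, L2 loss).

    In training this projection is carried out differentiably (e.g. with
    torch.linalg.lstsq) so that gradients reach both networks; plain NumPy
    least squares is used here only for illustration.
    """
    B = masked_bases(bases, masks)            # (6K, H*W*2)
    f = flow.reshape(-1)                      # flow: (H, W, 2)
    coeffs, *_ = np.linalg.lstsq(B.T, f, rcond=None)
    f_hat = B.T @ coeffs
    loss = np.linalg.norm(f - f_hat)          # L2 flow reconstruction loss
    return f_hat.reshape(flow.shape), loss
```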
**Training Objective.** Based on the predicted disparity map \(\mathbf{d}\) and the segmentation map \(\mathbf{m}\), we form the basis \(\mathcal{B}\) for the space of possible optical flows for the image according to Eq. (3) and Eq. (4). We denote the optical flow where the input image is the source frame as \(\mathbf{F}\in\mathbb{R}^{H\times W\times 2}\). It can be either ground truth flow or the output of a two-frame flow network such as RAFT [57]. We project \(\mathbf{F}\) into the space spanned by \(\mathcal{B}\) in a differentiable manner to obtain \(\hat{\mathbf{F}}\). For the details of the projection, please refer to Supplementary. We define our loss function as the \(L_{2}\) distance between the given flow \(\mathbf{F}\) and the reconstructed flow \(\hat{\mathbf{F}}\) and use it to train depth and segmentation networks jointly: \[\mathcal{L}=\|\mathbf{F}-\hat{\mathbf{F}}\|_{2} \tag{5}\] ## 4 Experiments ### Datasets **Synthetic Datasets.** For comparison to image-based methods, we evaluate our method on the CLEVR [31] and ClevrTex[34] datasets. CLEVR is a dataset of still images depicting multiple objects of random shape, size, color, and position. ClevrTex is similar to CLEVR but contains more diverse textures and shapes. Because our method needs optical flow for training, we train our model on the MovingCLEVR and MovingClevrTex datasets [33], which are video extensions of CLEVR and ClevrTex scenes. We train on the video versions but evaluate on the original test sets of CLEVR and ClevrTex. For comparison to video-based methods, we use the Multi-Object Video (MOVi) datasets [23]. Similar to [33], we use the MOVi-{A, C, D, E} variants. MOVi-A is based on CLEVR [31] and contains scenes with a static camera and multiple objects with simple textures and uniform colors tossed on a gray floor. MOVi-C is more challenging due to realistic everyday objects with rich textures on a more complex background. MOVi-D increases the complexity by increasing the number of objects. MOVi-E is even more challenging as it features camera motion as well. In all our experiments on synthetic datasets, we use a resolution of \(128\times 128\) and the ground truth optical flow. **Real-World Datasets.** We use the common video segmentation dataset DAVIS-2017 [50] containing 90 video sequences where each sequence has one or more moving objects. We follow the evaluation protocol of [64] where the ground truth objects are reannotated by assigning the same label to the jointly moving objects. We resize the images to a resolution of \(128\times 224\) during training and use the flow from RAFT [57] with \(\{-8,-4,4,8\}\) gaps between frames. Additionally, we evaluate our method on the KITTI driving dataset [21, 20]. Following [2], we train on the whole training set and evaluate the segmentation results on the instance segmentation benchmark consisting of 200 frames. We use a resolution of \(128\times 416\) and the flow from RAFT [57] with a gap of \(+1\). Additionally, we evaluate our depth results on KITTI. Following prior work [70, 22], we evaluate depth on the Eigen split [14] of the KITTI dataset using improved ground truth [60] to be comparable to self-supervised monocular depth estimation approaches. ### Architecture Details We use the same architecture used in [51] for depth and Mask2Former [10] for segmentation, using only the segmentation head. We use different backbones for the segmentation network on the synthetic and real datasets. 
On synthetic datasets, we follow [33, 36, 42] and utilize a 6-layer CNN. On real-world datasets, following [11], we use a ViT-B/8 transformer pre-trained self-supervised using DINO [8] on ImageNet [53]. On all of the datasets, we use \(6\) object queries in the segmentation network, which translates to \(K=6\), except for CLEVR, where we use \(K=8\). We use a fixed learning rate of \(5\times 10^{-5}\) for the depth network and use \(1.5\times 10^{-4}\) with a linear warm-up for the first 5K iterations for the segmentation network, reduced to \(1.5\times 10^{-5}\) after 200K iterations. We train both networks with AdamW optimizer [43] for 250K iterations. See Supplementary for further details, we will also share the code. ### Evaluation Details Metrics.Following prior work [34, 36, 33], we evaluate segmentation on synthetic datasets using Adjusted Rand Index on foreground pixels (FG-ARI) and mean Intersection over Union (mIoU). ARI measures how well predicted and ground truth segmentation masks match in a permutation-invariant manner. For mIoU, we first apply Hungarian matching and calculate the mean over the maximum between the number of ground-truth and predicted segments. On DAVIS-2017 [50], we use the standard \(\mathcal{J}\), \(\mathcal{F}\) metrics and perform the Hungarian matching per frame, similar to other datasets. Note that we focus on the multi-object segmentation task without using any labels for segmentation at train or test time. For KITTI, we use the FG-ARI metric, following [33, 2]. For evaluating depth, we use the standard metrics used in monocular depth estimation [14, 19]. Post-processing.We also report the results on segmentation using the post-processing method introduced in [33]. They extract the connected components in the model output and choose the largest \(K\) masks and discard any masks that take up less than \(0.1\%\) of the image. Then they combine the discarded masks with the largest mask. The results with post-processing are marked with \({}^{\dagger}\) in the tables. ### Results on Synthetic Datasets We evaluate our method on synthetic datasets and compare its performance to both image-based and video-based methods. Our method uses motion during training only. Therefore, it can also be evaluated on the image datasets. Figure 2: **Visualization of Depth and Segmentation Results on MOVi datasets. Our method performs accurate segmentations, while PPMP suffers from over-segmentation and also mistakenly segments parts of the background as objects.** Video-Based Methods.We compare our method to other video-based methods on the MOVi video datasets in Table 1 and Fig. 2. All of the methods in Table 1 use optical flow for supervision. Differently, SCALOR [30] and SAVi [36] use all frames in a video, whereas PPMP [33] and our method perform single-image segmentation, one frame at a time without using any motion information at test time. On the simpler MOVi-A dataset, the performance of our method falls behind SAVi [36] and PPMP [33]. PPMP with Swin transformer [41] performs the best overall. With the same 6-layer CNN backbone and without post-processing, SAVi performs the best. Despite the advantage of motion information, the success of SCALOR [30] and SAVi [36] is limited to visually simpler MOVi-A. On the more challenging MOVi-{C,D,E} datasets, our method, even without post-processing, significantly outperforms all the previous methods in both metrics, with or without post-processing. 
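As a side note on the evaluation protocol described above, the Hungarian-matched mIoU (matching predicted to ground-truth segments and averaging over the larger of the two segment counts) can be sketched in a few lines. This is a generic illustration with our own function names, not the authors' evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_miou(pred, gt):
    """mIoU with Hungarian matching between predicted and ground-truth segments.

    pred, gt: (H, W) integer label maps.
    Returns the mean IoU over max(#gt, #pred) segments, so unmatched
    segments count as IoU 0.
    """
    K_p, K_g = int(pred.max()) + 1, int(gt.max()) + 1
    iou = np.zeros((K_g, K_p))
    for i in range(K_g):
        for j in range(K_p):
            inter = np.logical_and(gt == i, pred == j).sum()
            union = np.logical_or(gt == i, pred == j).sum()
            iou[i, j] = inter / union if union > 0 else 0.0
    rows, cols = linear_sum_assignment(-iou)   # maximise total matched IoU
    return iou[rows, cols].sum() / max(K_g, K_p)
```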
The previous state-of-the-art, PPMP [33], uses the same backbone in their segmentation network as ours. Even with a more powerful backbone (Swin transformer [41]) and post-processing, the results of PPMP are still far behind our results without any post-processing. From MOVi-C to MOVi-E, the performance gap between our method and the others increases as the complexity of the dataset increases. Please see Supplementary for qualitative results with post-processing and evaluation of our estimated depth for objects in these datasets.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{MOVi-A} & \multicolumn{2}{c}{MOVi-C} & \multicolumn{2}{c}{MOVi-D} & \multicolumn{2}{c}{MOVi-E} \\ \cline{2-9} & **FG-ARI\(\uparrow\)** & **mIoU\(\uparrow\)** & **FG-ARI\(\uparrow\)** & **mIoU\(\uparrow\)** & **FG-ARI\(\uparrow\)** & **mIoU\(\uparrow\)** & **FG-ARI\(\uparrow\)** & **mIoU\(\uparrow\)** \\ \hline GWM [10] & 70.30 & 42.27 & 49.98 & 30.17 & 39.78 & 18.38 & 42.50 & 18.74 \\ SCALOR [30] & 59.57 & 44.41 & 40.43 & 22.54 & - & - & - & - \\ SAVi [36] & 88.30 & 62.69 & 43.26 & 31.92 & 43.45 & 10.60 & 17.39 & 5.75 \\ PPMP [33] & 84.01 & 60.08 & 61.18 & 34.72 & 55.74 & 23.50 & 62.62 & 25.78 \\ PPMP\({}^{\dagger}\) & 85.41 & 76.19 & 61.24 & 37.26 & 55.18 & 25.21 & 63.11 & 28.59 \\ PPMP\({}^{\dagger}\) (Swin) & **90.08** & **84.76** & 67.67 & 52.17 & 66.41 & 30.40 & 72.73 & 35.30 \\ Ours & 56.09 & 36.48 & 73.80 & 54.48 & 76.41 & 58.82 & 78.33 & 47.38 \\ Ours\({}^{\dagger}\) & 70.15 & 46.26 & **74.64** & **59.24** & **77.15** & **59.68** & **80.83** & **50.48** \\ \hline \hline \end{tabular} \end{table} Table 1: **Segmentation Results on MOVi Datasets.** The best result in each column is shown in **bold**, and the second best is underlined. \({}^{\dagger}\) indicates post-processing, and (Swin) denotes using a Swin transformer as the backbone.

**Image-Based Methods.** We compare our method to other image-based methods on the CLEVR and ClevrTex datasets in Table 2. Our method outperforms the state-of-the-art method PPMP [33] in all metrics on both datasets except for mIoU on CLEVR without post-processing and mIoU on ClevrTex with post-processing. We point to a **+9.01** improvement in mIoU on the more challenging ClevrTex dataset without post-processing. See Supplementary for qualitative results on CLEVR and ClevrTex.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{CLEVR} & \multicolumn{2}{c}{ClevrTex} \\ \cline{2-5} & **FG-ARI\(\uparrow\)** & **mIoU\(\uparrow\)** & **FG-ARI\(\uparrow\)** & **mIoU\(\uparrow\)** \\ \hline SPAIR [12] & 77.13 & 65.95 & 0.00 & 0.00 \\ MN [55] & 72.12 & 56.81 & 38.31 & 10.46 \\ MONet [7] & 54.47 & 30.66 & 36.66 & 19.78 \\ SA [42] & 95.89 & 36.61 & 62.40 & 22.58 \\ IODINE [24] & 93.81 & 45.14 & 59.52 & 29.17 \\ DTI-S [47] & 59.54 & 48.74 & 79.90 & 33.79 \\ GNM [29] & 65.05 & 59.92 & 53.37 & 42.25 \\ SAVi [36] & - & - & 49.54 & 31.88 \\ PPMP [32] & 91.69 & **66.70** & 90.80 & 55.07 \\ Ours & **95.03** & 63.36 & **94.66** & **64.08** \\ \hline SPAIR\({}^{\dagger}\)[12] & 77.05 & 66.87 & 0.00 & 0.00 \\ MN\({}^{\dagger}\)[55] & 72.08 & 57.61 & 38.34 & 10.34 \\ MONet\({}^{\dagger}\)[7] & 61.36 & 45.61 & 35.64 & 23.59 \\ SA\({}^{\dagger}\)[42] & 94.88 & 37.68 & 61.60 & 21.96 \\ IODINE\({}^{\dagger}\)[24] & 93.68 & 44.20 & 60.63 & 29.40 \\ DTI-S\({}^{\dagger}\)[47] & 89.86 & 53.38 & 79.86 & 32.20 \\ GNM\({}^{\dagger}\)[29] & 65.67 & 63.38 & 53.38 & 44.30 \\ PPMP\({}^{\dagger}\)[33] & 95.94 & 84.86 & 92.61 & **77.67** \\ Ours\({}^{\dagger}\) & **96.95** & **86.38** & **95.32** & 70.28 \\ \hline \hline \end{tabular} \end{table} Table 2: **Segmentation Results on CLEVR and ClevrTex Datasets.** The lower part with \(\dagger\) shows the results with post-processing. The best result in each column is shown in **bold**, and the second best is underlined.

### Results on Real-World Datasets

We compare our method to multi-object segmentation methods on real-world datasets including driving scenarios on KITTI and unconstrained videos on DAVIS-2017.

**Results on DAVIS.** Our method is the first image-based method to report performance in multi-object segmentation without using any labels during training or testing on DAVIS-2017. Therefore, in Table 3, we compare it to video-based approaches which use motion as input. We also compare to a simple baseline proposed in [64] based on Mask R-CNN [26] using optical flow as input. We use the labels re-annotated by [64] for evaluation, as explained in Section 4.1. Motion Grouping refers to [65] trained on DAVIS-2017. Motion Grouping (sup.), Mask R-CNN (flow) and OCLR are models trained on synthetic data from [64] in a supervised way using optical flow as input. Our method, which uses a single RGB image as input at test time, outperforms previous methods, including the state-of-the-art OCLR [64] that uses flow from multiple time steps. Ours-M refers to a version of our model where we adapt the spectral clustering approach of [11] to merge our predicted regions into the ground truth number of regions in each frame. Although not necessary to achieve state-of-the-art, this improves the results significantly. We visualize the results of our model in comparison to OCLR [64] in Fig. 3. Our method can correctly segment a wide variety of objects such as the bike and the person in the first column and the multiple fish in the third, multiple people walking or fighting in the second and fourth. OCLR is highly affected by the inaccuracies in flow, unlike our method, as can be seen from the last two columns.

**Results on KITTI.** Since KITTI has ground truth depth, we evaluate our method in terms of both segmentation and monocular depth prediction on KITTI. The segmentation results on KITTI are presented in Table 4(a). Our method is among the top-performing methods, outperforming earlier approaches. In addition to segmentation, our method can also predict depth from a single image. We present the evaluation of our depth predictions in comparison to self-supervised monocular depth estimation methods in Table 4(b).
Our depth network can predict reliable depth maps even without camera intrinsics, with results comparable to recent self-supervised monocular depth estimation approaches [22, 25] that are specifically designed for that task and that use camera intrinsics. We visualize our segmentation and depth results on KITTI in Supplementary. Our method can segment multiple moving objects such as cars, bikes, and buses without using any semantic labels for training or motion information at test time. Furthermore, it can predict high-quality depth, capturing thin structures and sharp boundaries around objects.

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & \(\mathcal{J}\)\& \(\mathcal{F}\uparrow\) & \(\mathcal{J}\uparrow\) & \(\mathcal{F}\uparrow\) \\ \hline Motion Grouping [65] & 35.8 & 38.4 & 33.2 \\ Motion Grouping (sup.) & 39.5 & 44.9 & 34.2 \\ Mask R-CNN (flow) & 50.3 & 50.4 & 50.2 \\ OCLR [64] & 55.1 & 54.5 & 55.7 \\ Ours & 55.3 & 55.3 & 55.3 \\ Ours-M & **59.2** & **59.3** & **59.2** \\ \hline \hline \end{tabular} \end{table} Table 3: **Multi-Object Segmentation Results on DAVIS-2017. We evaluate by using the motion labels from [64].**

Figure 3: **Qualitative Comparison on DAVIS-2017**. While OCLR [64] misses some objects completely and suffers from relying on only optical flow as input, our method can segment a wide variety of multiple objects in everyday scenes.

### Ablation Study

To evaluate the contribution of different types of flow basis, we perform an experiment by considering only translation or rotation and compare it to the full model on MOVi datasets in Table 5. Note that in the rotation-only case (Only-R), the depth predictions are not used and the depth network is not trained. Overall, the rotation-only model outperforms the translation-only model, and the full model with both rotation and translation works best on the MOVi-{C, D, E} datasets with reliable depth predictions. The trend is different on the simpler MOVi-A dataset. Only-R outperforms all models including the state-of-the-art in Table 1. We found that in the translation-only case, the depth cannot be predicted on MOVi-A due to the lack of texture and detail needed for the depth network to learn a mapping from a single image to depth. The rotation-only model, on the other hand, learns to group pixels in a region based on their rotational motion, which does not depend on depth. This ability explains the success of Only-R on simpler datasets. The importance of pixel-wise depth increases with the complexity of the dataset. On MOVi-E, for example, which has the most complex setup with a large number of objects and camera motion, predicting depth, from Only-R to Full, improves the performance the most.

## 5 Conclusion and Future Work

We presented a motion-supervised approach for multi-object segmentation that can work with a single RGB image at test time, therefore still applicable to image datasets. Our method is the first to consider geometry to remove ambiguity for multi-object segmentation from a single image without using any labels for segmentation. Modeling geometry significantly advances the state-of-the-art on commonly used synthetic datasets. We also evaluated our method on real-world datasets. Our method is the first image-based multi-object segmentation method to report state-of-the-art results on DAVIS-2017 without using motion at test time. We also report comparable results for depth prediction on KITTI and MOVi datasets where depth can be evaluated.
Predicting objects that can potentially move independently from a single image requires observing examples of various objects moving in the training set. Moreover, static objects send a mixed signal to the model. The coherent changes in the flow can be captured with the help of geometry as shown in our work. The remaining uncertainty can be addressed with a probabilistic formulation as done in the previous state-of-the-art [33]. Another problem is scenes without enough information to predict depth, as we observed on textureless MOVi-A. However, the lack of information to this extent rarely happens on real-world data.

**Acknowledgements.** Sadra Safadoust was supported by KUIS AI Fellowship and UNVEST R&D Center.

Table 4: **Results on KITTI. We evaluate the segmentation on the instance segmentation benchmark, and the depth on the KITTI Eigen split [14] with improved ground truth [60].**

Table 5: **Ablation Study. We perform an ablation study by using only the translation (Only-T) or rotation (Only-R) component and compare it to our model with both (Full). See text for details.**
2303.03485
On subtensors of high partition rank
We prove that for every positive integer $d \ge 2$ there exist polynomial functions $F_d, G_d: \mathbb{N} \to \mathbb{N}$ such that for each positive integer $r$, every order-$d$ tensor $T$ over an arbitrary field and with partition rank at least $G_d(r)$ contains a $F_d(r) \times \cdots \times F_d(r)$ subtensor with partition rank at least $r$. We then deduce analogous results on the Schmidt rank of polynomials in zero or high characteristic.
Jan Draisma, Thomas Karam
2023-03-06T20:33:49Z
http://arxiv.org/abs/2303.03485v1
# On Subtensors of High Partition Rank ###### Abstract. We prove that for every positive integer \(d\geq 2\) there exist polynomial functions \(F_{d},G_{d}:\mathbb{N}\to\mathbb{N}\) such that for each positive integer \(r\), every order-\(d\) tensor \(T\) over an arbitrary field and with partition rank at least \(G_{d}(r)\) contains a \(F_{d}(r)\times\cdots\times F_{d}(r)\) subtensor with partition rank at least \(r\). We then deduce analogous results on the Schmidt rank of polynomials in zero or high characteristic. JD is partially supported by Swiss National Science Foundation (SNSF) project grant 200021_191981 and by Vici grant 639.033.514 from the Netherlands Organisation for Scientific Research (NWO). He thanks the Institute for Advanced Study for excellent working conditions, under which part of this project was carried out. TK is supported by the European Research Council (ERC) grant 883810. He thanks the Mathematical Institute, University of Oxford, for the very pleasant research environment, and thanks the CRM Montreal for organising the conference "Tensors: Quantum Information, Complexity and Combinatorics" during which several of the main ideas of the present paper arose. If \(T\) is an order-\(d\) tensor and \(X_{1},\ldots,X_{d}\) are subsets of \([n_{1}],\ldots,[n_{d}]\) respectively then we write \(T[X_{1},\ldots,X_{d}]\) for the subtensor of \(T\) obtained by restricting the entries of \(T\) to the product \(X_{1}\times\cdots\times X_{d}\). Set \(r:=\operatorname{pr}(T)\). We want to show that \(T\) contains a subtensor of size some (not too large) function of \(r\) and partition rank at least some (not too small) function of \(r\). The contrapositive says that if all small subtensors of \(T\) have bounded partition rank, then \(T\) itself has bounded partition rank. This is what we will prove. **Theorem 1.1**.: _Let \(d\geq 2\) be a positive integer. There exist functions \(F_{d},G_{d}:\mathbb{N}\to\mathbb{N}\) such that if \(r\geq 1\) is a positive integer and \(T\) is an order-\(d\) tensor over an arbitrary field such that every \(F_{d}(r)\times\cdots\times F_{d}(r)\) subtensor of \(T\) has partition rank at most \(r\), then \(T\) has partition rank at most \(G_{d}(r)\). Furthermore, we may take the bounds \(F_{d}(r)\leq(2^{d+3}r)^{2d}\) and \(G_{d}(r)\leq(2^{d+3}r)^{2d^{2}}\) for all \(r\)._ We remark that the bounds on the quantities \(F_{d}(r)\) and \(G_{d}(r)\) in the theorem hold for any field \(K\), though it is conceivable that "optimal" functions \(F_{d}\) and \(G_{d}\) do depend on \(K\). Furthermore, we believe that it is likely that Theorem 1.1 could still be true with both bounds \(F_{d}(r)\) and \(G_{d}(r)\) taken to grow linearly in \(r\) for every fixed \(d\), although we do not have a proof of that. In Section 4 we shall prove that we may take \(F_{3}(r)=O(r^{3/2})\) and \(G_{3}(r)=O(r^{3})\). The proof is inspired by an attempt to extend the following matrix argument to higher-dimensional tensors. 
If \(A\) is a matrix and \(r\) is the largest nonnegative integer such that there exists an \(r\times r\) submatrix \(A[X,Y]\) of \(A\) with rank \(r\), then for every \(x\in X^{c}\) and \(y\in Y^{c}\) we have that \(\det A[X,Y]\neq 0\) but \(\det A[X\cup\{x\},Y\cup\{y\}]=0\), so we can express all coefficients \(A(x,y)\) with \(x\in X^{c}\) and \(y\in Y^{c}\) in the simple way \[A(x,y)=A[\{x\},Y]A[X,Y]^{-1}A[X,\{y\}] \tag{1}\] in terms of the entries in the \(r\) rows \(A[\{x\},[n_{2}]]\) with \(x\in X\) and the \(r\) columns \(A[[n_{1}],\{y\}]\) with \(y\in Y\); this expression in turn shows that the matrix \(A[X^{c},Y^{c}]\) has rank at most \(r\). Although the resulting bound is not optimal, this argument allows us to deduce that \(A\) must have rank at most \(3r\), since each of the \(r\) rows \(A[\{x\},[n_{2}]]\) with \(x\in X\) and each of the \(r\) columns \(A[[n_{1}],\{y\}]\) with \(y\in Y\) has rank at most \(1\). We may hence try to imitate this argument for the partition rank of order-\(d\) tensors, and ask the following question. **Question 1.2**.: _Let \(d\geq 2\), \(r\geq 1\) be positive integers. Does there exist a positive integer \(C_{d}(r)\) satisfying the following? If \(T\in K^{n_{1}}\otimes\cdots\otimes K^{n_{d}}\) is an order-\(d\) tensor, \(X_{1},\ldots,X_{d}\) are subsets of \([n_{1}],\ldots,[n_{d}]\) respectively, each with size \(r\), such that \(\operatorname{pr}T[X_{1},\ldots,X_{d}]=r\) and_ \[\operatorname{pr}T[X_{1}\cup\{x_{1}\},\ldots,X_{d}\cup\{x_{d}\}]=r\] _is satisfied for all \(x_{1}\in[n_{1}]\setminus X_{1},\ldots,x_{d}\in[n_{d}]\setminus X_{d}\), then we have_ \[\operatorname{pr}T[[n_{1}]\setminus X_{1},\ldots,[n_{d}]\setminus X_{d}]\leq C_{d}(r).\] As with the matrix argument, because every order-\((d-1)\) slice of \(T\) has partition rank at most \(1\), a positive answer to Question 1.2 implies Theorem 1.1 with \(F_{d}(r)=r\) and \(G_{d}(r)=C_{d}(r)+dr\). In particular, it would suffice that \(C_{d}\) be linear in \(r\) for \(G_{d}\) to be linear in \(r\). It is however not obvious to us that Question 1.2 has a positive answer. Unlike in the case of matrices, where all smallest-length rank decompositions of a full-rank matrix can be deduced from one another via a change of basis, the set of partition rank decompositions of a given full-rank tensor is richer in general, already in the \(d=3\) case; this makes it harder to obtain an analogue of the expression (1). In the absence of such a direct expression for \(T(x_{1},\ldots,x_{d})\), we may ask for a weaker description: an equation of which \(T(x_{1},\ldots,x_{d})\) is a solution, which leads the way to the arguments that our proof will involve. A second difficulty which we have to circumvent is that we do not know that the set of tensors in \(K^{n_{1}}\otimes\cdots\otimes K^{n_{d}}\) with partition rank at most \(r\) is Zariski-closed in general: although this is true for algebraically closed fields \(K\) and \(d=3\) (as shown by Sawin and Tao [15]), this is likely false in general, even over algebraically closed \(K\). (Indeed, the corresponding notion of bounded strength for quartics is not closed [1].) This will nonetheless not interfere with our argument, as it will suffice for us to find a polynomial for which the zero-set merely contains the set of tensors in \(K^{n_{1}}\otimes\cdots\otimes K^{n_{d}}\) with partition rank at most \(r\) rather than being equal to it. Indeed, our main stepping stone towards proving Theorem 1.1 will be the following statement.
**Theorem 1.3**.: _Let \(d\geq 2\), \(r\geq 1\) be positive integers. Let \(m\geq 1\) be a positive integer such that there exist positive integers \(n_{1},\ldots,n_{d}\) and a nonzero polynomial \(f\) of degree \(m\) in \(\mathbb{C}[x_{i_{1},\ldots,i_{d}}\mid i_{j}\in[n_{j}]]\) that vanishes on all tensors in \(\mathbb{C}^{n_{1}}\otimes\cdots\otimes\mathbb{C}^{n_{d}}\) of partition rank at most \(r\)._ _Then for any field \(K\), any positive integers \(n_{1},\ldots,n_{d}\geq m\), and any tensor \(T\in K^{n_{1}}\otimes\cdots\otimes K^{n_{d}}\), if all \(m\times\cdots\times m\)-subtensors of \(T\) have partition rank at most \(r\), then \(T\) has partition rank at most_ \[d(m-1)+\sum_{s=1}^{\lfloor d/2\rfloor}\binom{d}{s}(m-1)^{d-s},\] _which for \(d\geq 3\) is at most \(m^{d}\)._ We note that in the case of matrices, the determinant of the top-left submatrix is such a polynomial, and we may hence take \(m=r+1\). This yields \(4r\), almost recovering the bound \(3r\) discussed earlier. In the case of high-characteristic fields, we may deduce from Theorem 1.1 an analogue for polynomials. If \(P\) is a homogeneous polynomial in several variables over a field \(K\) and with degree at least \(2\), then we let \(\operatorname{rk}P\) be the smallest positive integer \(k\) such that we may write \[P=Q_{1}R_{1}+\cdots+Q_{k}R_{k}\] for some homogeneous polynomials \(Q_{i},R_{i}\) satisfying \(\deg Q_{i},\deg R_{i}<\deg P\) and \(\deg Q_{i}+\deg R_{i}=\deg P\) for each \(i\in[k]\). This notion of rank is known as the _Schmidt rank_ or _strength_ of \(P\). For every subset \(U\subset[n]\), we write \(P[U]\) for the polynomial in \(K[x_{u}|u\in U]\) obtained by substituting in the polynomial \(P\) the value \(0\) for all variables in \([n]\setminus U\). **Theorem 1.4**.: _Let \(K\) be a field, let \(d\geq 2\), \(r\geq 1\) be positive integers and set \(D:=\binom{d}{\lfloor d/2\rfloor}\leq 2^{d}\). Assume that \(\operatorname{char}K=0\) or \(\operatorname{char}K>d\). If \(P\) is a homogeneous polynomial in variables \(x_{1},\ldots,x_{n}\) over \(K\) with \(\deg P=d\) that satisfies \(\operatorname{rk}P[U]\leq r\) for every subset \(U\subset[n]\) with size at most \(dF_{d}(r\cdot D)\), then_ \[\operatorname{rk}P\leq G_{d}(r\cdot D).\] ### Relations to the literature The main results and techniques of the present paper may be contrasted to those of existing works in the literature. A general framework for studying restriction-closed properties had been started in [10], but the techniques in that paper assume that the property is Zariski-closed, which as we have explained does not appear to be the case for the set of tensors in \(K^{n_{1}}\otimes\cdots\otimes K^{n_{d}}\) with partition rank at most \(r\). The more recent work [1] assumes that the field is finite. High-rank subtensors were also studied in [11], using very different arguments: Theorem 4.1 from that paper is similar to (our) Theorem 1.1, but assumes that the field is finite, an assumption which is heavily used in the proof, through a connection between the partition rank and the analytic rank; although the qualitative part of Theorem 1.1 follows as a special case of the first main theorem stated in the introduction of that paper, polynomial bounds in the functions \(F_{d}\) and \(G_{d}\) are a novelty of the present paper.
Again letting aside the matter of the bounds, one can deduce from a universality theorem of Kazhdan and Ziegler [13] a qualitative version of Theorem 1.4 where the assumption is replaced by the requirement that every image \(T^{\prime}\in K^{F_{d}(r)}\otimes\cdots\otimes K^{F_{d}(r)}\) of \(T\) under any \(d\)-tuple of linear transformations has partition rank at most \(r\). This condition is much stronger than our requirement on subtensors. A general method for passing from a result about linear maps to a result about subtensors, which involves fundamental results about finitely generated **FI**-modules, is described in [1]. Let us finally mention a paper of Briet and Castro-Silva [1] on random restrictions of tensors and polynomials, which provides an additional motivation for the present line of work. Although none of their results imply ours or the other way around, they identify linear bounds in Theorem 1.1 and in Theorem 1.4 as respectively providing a natural route to recover a random restriction theorem for tensors and for polynomials. Our proof of Theorem 1.4 shows that linear bounds in Theorem 1.1 would suffice to obtain linear bounds in Theorem 1.4. ### Organisation of this paper In Section 2 we prove Theorem 1.3, in Section 3 we find a bound on \(m\) in Theorem 1.3 in terms of \(r\) and derive Theorem 1.1. In Section 4 we use classical invariant theory to derive a slightly better bound on \(m\) in the special case of \(d=3\), and in Section 5 we deduce Theorem 1.4 from Theorem 1.1. ## Acknowledgements We thank Jop Briet for useful discussions, and Harm Derksen for suggesting to us the argument in Section 4. ## 2. Proof of Theorem 1.3 We begin with the following elementary observation; this is inspired by Snowden's alternative proof in [14] of many of the results in [1] by working with equations of weight \((1,\ldots,1)\). **Lemma 2.1**.: _We may assume that the polynomial \(f\) in Theorem 1.3 lies in_ \[\mathbb{C}[x_{i_{1},\ldots,i_{d}}\mid i_{j}\in[m]],\] _has coefficients in \(\mathbb{Z}\) with gcd \(1\), and has weight \((1^{m},\ldots,1^{m})\) for the torus \(((\mathbb{C}^{*})^{m})^{d}\)._ Proof.: Since tensors of partition rank at most \(r\) are the image of a map defined over \(\mathbb{Z}\), we may assume that \(f\) has integer coefficients. Since they are preserved by coordinate scalings, we may further assume that \(f\) is a weight vector, i.e., \(f\) gets scaled by \((t_{1}^{\alpha_{1}},\ldots,t_{d}^{\alpha_{d}})\), for certain \(\alpha_{i}\in\mathbb{Z}_{\geq 0}^{n_{i}}\), when the tensor gets acted upon by \((\operatorname{diag}(t_{1}),\ldots,\operatorname{diag}(t_{d}))\in\prod_{i=1}^ {d}\operatorname{GL}_{n_{i}}(\mathbb{C})\). Now if the \(j\)-th entry of \(\alpha_{i}\) is strictly greater than \(1\), then acting with the Lie algebra element \(E_{j,n_{i}+1}\) on \(f\) we get another polynomial that vanishes on tensors of partition rank \(r\) and which has weight \((\alpha_{1},\ldots,\alpha_{i}^{\prime},\ldots,\alpha_{d})\), where \[\alpha_{i}^{\prime}=(\alpha_{i1},\ldots,\alpha_{ij}-1,\ldots,\alpha_{in_{i}}, 1)\in\mathbb{Z}_{\geq 0}^{n_{i}+1}.\] We replace \(n_{i}\) by \(n_{i}+1\) and \(\alpha_{i}\) by \(\alpha_{i}^{\prime}\). Continue in this manner until all \(\alpha_{i}\) only have \(1\)s and \(0\)s as entries. The \(0\)s correspond to slices of variables that do not occur in \(f\). After removing these, \(f\) has weight \((1^{n_{1}},\ldots,1^{n_{d}})\), where we note that the \(n_{i}\) may have changed. 
Each variable has weight \((e_{j_{1}},\ldots,e_{j_{d}})\) for some \(j_{i}\in[n_{i}]\), and it follows that all \(n_{i}\) are equal to a common number \(m\). Finally, divide the resulting \(f\) by an integer to ensure that the coefficients have gcd \(1\). Now consider the image of \(f\) in \(K[x_{i_{1},\ldots,i_{d}}\mid i_{1},\ldots,i_{d}\in[m]]\). This is nonzero, since the coefficients of \(f\) have gcd \(1\), and it is still a weight vector of weight \((1^{m},\ldots,1^{m})\). From now on, we write \(f\) for the image. Note that \(f\) vanishes on tensors in \(K^{m}\otimes\ldots\otimes K^{m}\) of partition rank at most \(r\). Since \(f\) has weight \((1^{m},\ldots,1^{m})\), after applying permutations in the \(d\) directions, we can write \[f=:h_{0}=x_{m,\ldots,m}h_{1}+r_{1}\] where \(h_{1}\) is a nonzero weight vector of weight \((1^{m-1}0,\ldots,1^{m-1}0)\) and where \(r_{1}\) is a weight vector of weight \((1^{m},\ldots,1^{m})\) that does not involve \(x_{m,\ldots,m}\). Note that the variables in \(h_{1}\) have all indices at most \(m-1\), and all variables in \(r_{1}\) have at least one index at most \(m-1\). Similarly, after further permutations on the first \((m-1)\) indices, for \(k=1,\ldots,m-1\) we have \[h_{k}=x_{m-k,\ldots,m-k}h_{k+1}+r_{k+1}\] where \(h_{k+1}\) is a nonzero weight vector of weight \((1^{m-k-1}0^{k+1},\ldots,1^{m-k-1}0^{k+1})\) and \(r_{k+1}\) a weight vector of weight \((1^{m-k}0^{k},\ldots,1^{m-k}0^{k})\) that does not involve \(x_{m-k,\ldots,m-k}\). Note that \(h_{m}\) is a nonzero constant and that \(r_{m}=0\). Furthermore, for each \(i=1,\ldots,d\), every term of \(r_{k+1}\) contains precisely one variable that has an index \(m-k\) on position \(i\). These \(d\) indices \(m-k\) are distributed over at least two and at most \(d\) of the variables in the term. So \(r\) is a linear combination of terms of the following form (illustrated for \(d=4\)): \[x_{m-k,i_{2},i_{3},i_{4}}\cdot x_{j_{1},m-k,m-k,j_{4}}\cdot x_{l_{1},l_{2},l_{3 },m-k}\] where \(i_{2},\ldots,l_{3}\in[m-k-1]\) and where the coefficients are polynomials in the variables all of whose indices are in \([m-k-1]\). **Proposition 2.2**.: _Let \(n_{1},\ldots,n_{d}\geq m\) be positive integers and let \(T\in K^{n_{1}}\otimes\cdots\otimes K^{n_{d}}\). Suppose that, for some \(k=0,\ldots,m-1\), the whole \(G:=\prod_{i}\operatorname{Sym}([n_{i}])\)-orbit of \(h_{k}\) vanishes at \(T\), but not the whole \(G\)-orbit of \(h_{k+1}\) vanishes at \(T\). Then \(T\) has partition rank at most_ \[d(m-k-1)+\sum_{s=1}^{\lfloor d/2\rfloor}\binom{d}{s}(m-k-1)^{d-s}. \tag{2}\] Proof.: Without loss of generality, we may assume that \(h_{k+1}(T)\) is nonzero and the whole \(G\)-orbit of \(h_{k}\) is zero on \(T\). We then have \[t_{m-k,\ldots,m-k}=-r_{k+1}(T)/h_{k+1}(T). \tag{3}\] For instance, if \(d=4\), then \(t_{m-k,\ldots,m-k}\) is a linear combination of terms such as \[t_{m-k,i_{2},i_{3},i_{4}}\cdot t_{j_{1},m-k,m-k,j_{4}}\cdot t_{l_{1},l_{2},l_{3 },m-k}\] where the coefficients only depend on the subtensor \(T[[m-k-1],\ldots,[m-k-1]]\). Since the whole \(G\)-orbit of \(h_{k}\) vanishes on \(T\), we may apply to (3) arbitrary elements of the subgroup \(G^{\prime}:=\prod_{i=1}^{d}\operatorname{Sym}([n_{i}]\setminus[m-k-1])\) of \(G\). Note that this fixes all indices up to \(m-k-1\). 
As a consequence, we find that the subtensor \[T[[n_{1}]\setminus[m-k-1],\ldots,[n_{d}]\setminus[m-k-1]]\] of \(T\) admits a decomposition as a sum of tensor products of tensors in which every term is divisible by some tensor like (for \(d=4\)) \[(t_{j_{1},a_{2},a_{3},j_{4}})_{a_{2}\in[n_{2}]\setminus[m-k-1],a_{3}\in[n_{3}]\setminus[m-k-1]}\] for some choice of \((j_{1},j_{4})\in[m-k-1]^{2}\). In each term of this decomposition, there is at least one factor which is a tensor of order at most \(\lfloor d/2\rfloor\). The number of these tensors equals \[\sum_{s=1}^{\lfloor d/2\rfloor}\binom{d}{s}(m-k-1)^{d-s},\] where \(\binom{d}{s}\) counts the choices of positions \(i\) where one puts the indices varying in \([n_{i}]\setminus[m-k-1]\) (positions \(2,3\) in the example) and the factor \((m-k-1)^{d-s}\) counts the number of choices for the remaining indices (\(j_{1},j_{4}\) in the example). Finally, the remainder of \(T\) admits a _slice rank_ decomposition where each term is divisible by some standard basis vector in \(K^{m-k-1}\), in one of the \(d\) factors. This is accounted for by the first term in (2). Proof of Theorem 1.3.: By construction of \(f=h_{0}\), its entire \(G\)-orbit vanishes on the given tensor \(T\). On the other hand, \(h_{m}\) is a nonzero constant. Hence there exists a \(k\in\{0,\ldots,m-1\}\) such that the entire \(G\)-orbit of \(h_{k}\) vanishes at \(T\) but not the entire \(G\)-orbit of \(h_{k+1}\) vanishes at \(T\). Hence Proposition 2.2 applies with this \(k\). Now the bound in Theorem 1.3 follows from the bound in (2) by taking the worst \(k\), namely, \(k=0\). ## 3. A degree bound for \(f\) and proof of Theorem 1.1 **Theorem 3.1**.: _Let \(d\geq 2\) be a positive integer. Then for every positive integer \(r\geq 1\) and for \(n=2^{d+3}r\), there exists a nonzero polynomial \(f\) of degree \(m\leq(2^{d+3}r)^{2d}\) in \(\mathbb{C}[x_{i_{1},\ldots,i_{d}}\mid i_{j}\in[n]]\) that vanishes on all tensors in \(\mathbb{C}^{n}\otimes\cdots\otimes\mathbb{C}^{n}\) (\(d\) factors) with partition rank at most \(r\)._ Proof.: We write \(X_{n,r}\) for the set of order-\(d\) tensors in \(\mathbb{C}^{n}\otimes\cdots\otimes\mathbb{C}^{n}\) with partition rank at most \(r\). The set \(X_{n,r}\) is contained in the set \(X^{\prime}_{n,r}\) of order-\(d\) tensors \(T\) that have a partition rank decomposition of the type \[T=\sum_{I\in\mathcal{P}([d])\setminus\{\emptyset,[d]\}}\sum_{i=1}^{r}A_{I,i}\otimes B_{I^{c},i} \tag{4}\] for certain tensors \(A_{I,i}\in\bigotimes_{j\in I}\mathbb{C}^{n}\) and \(B_{I^{c},i}\in\bigotimes_{j\in I^{c}}\mathbb{C}^{n}\). Note that we could take the summation over half of these \(I\), but for simplicity we will not do so here. From now on, the range of \(I\) in summations, indexations etc. will always be taken to be \(\mathcal{P}([d])\setminus\{\emptyset,[d]\}\) unless indicated otherwise, where \(\mathcal{P}([d])\) is the power set of \([d]\). Let \(\pi_{r}=\prod_{I}(\mathbb{C}^{n^{I}}\times\mathbb{C}^{n^{I^{c}}})^{r}\) be the parameter space for decompositions as in (4). We let \(P_{2m}(\pi_{r})\) be the vector space of homogeneous polynomials of degree \(2m\) in the variables \(A_{I,i},B_{I,i}\) and let \(P_{m}(\mathbb{C}^{[n]^{d}})\) be the linear space of homogeneous polynomials of degree \(m\) in the \(n^{d}\) entries of tensors of \(\mathbb{C}^{[n]^{d}}\). 
Letting \(\varphi:\pi_{r}\to\mathbb{C}^{[n]^{d}}\) be the parametrisation defined by taking \[\varphi(A_{I,i},B_{I,i}:I,1\leq i\leq r)\] to be the right-hand side of (4), the pull-back \(\varphi^{\#}:P_{m}(\mathbb{C}^{[n]^{d}})\to P_{2m}(\pi_{r})\) with \(\varphi^{\#}(f)=f\circ\varphi\) is well-defined. If \(f\) is an element of \(\ker\varphi^{\#}\), then \(f\) vanishes on the image \(X^{\prime}_{n,r}\) of \(\varphi\), so it now suffices to check that \(\ker\varphi^{\#}\neq\{0\}\). In turn, to show this, it suffices to show \[\dim P_{2m}(\pi_{r})<\dim P_{m}(\mathbb{C}^{[n]^{d}}).\] Defining \(S_{n,r}=\sum_{I}r(n^{|I|}+n^{d-|I|})=\dim\pi_{r}\), we can write \[\dim P_{2m}(\pi_{r}) =\binom{2m+S_{n,r}-1}{2m}\] \[=\binom{2m+S_{n,r}-1}{S_{n,r}-1}\] \[\leq(2m+S_{n,r})^{S_{n,r}-1}/(S_{n,r}-1)!.\] Meanwhile \[\dim P_{m}(\mathbb{C}^{[n]^{d}}) =\binom{m+n^{d}-1}{m}\] \[=\binom{m+n^{d}-1}{n^{d}-1}\] \[\geq m^{n^{d}-1}/(n^{d}-1)!.\] It hence suffices to show \[m^{n^{d}-1}/(2m+S_{n,r})^{S_{n,r}-1}>(n^{d}-1)!/(S_{n,r}-1)!. \tag{5}\] For every \(1\leq|I|\leq d-1\) we have \(n^{|I|}+n^{d-|I|}\leq 2n^{d-1}\), so \(S_{n,r}\leq 2^{d+1}rn^{d-1}\). Assuming that \(m\geq n^{d}/4\geq 3\) and \(n=2^{d+3}r\), the left-hand side of (5) is therefore at least \[m^{n^{d}-1}/(2m+2^{d+1}rn^{d-1})^{2^{d+1}rn^{d-1}-1} \geq m^{n^{d}-1}/(2m+n^{d}/4)^{n^{d}/4-1}\] \[\geq m^{n^{d}-1}/(3m)^{n^{d}/4-1}\] \[\geq m^{n^{d}/2}.\] Meanwhile, the right-hand side of (5) is at most its numerator, and hence at most \((n^{d})^{n^{d}}\). Therefore, for (5) to hold, it suffices that \[m^{n^{d}/2}>(n^{d})^{n^{d}},\] which simplifies to \(m\geq n^{2d}\). Since \(n=2^{d+3}r\), it suffices that \(m\geq(2^{d+3}r)^{2d}\) for there to exist a polynomial \(f\) which is zero on \(X^{\prime}_{n,r}\) and hence on \(X_{n,r}\). Proof of Theorem 1.1.: Take \(F_{d}(r)=m:=(2^{d+3}r)^{2d}\). By Theorem 3.1, there exists a nonzero polynomial \(f\) of degree \(m\) that vanishes on order-\(d\) tensors of partition rank at most \(r\), and by Theorem 1.3, any \(n_{1}\times\cdots\times n_{d}\)-tensor all of whose \(m\times\cdots\times m\)-subtensors have partition rank at most \(r\), has itself partition rank at most \(m^{d}=(2^{d+3}r)^{2d^{2}}=:G_{d}(r)\), as desired. Here we used \(d\geq 3\) in the last step, but as discussed at the beginning of the paper, for \(d=2\) even much better bounds work. Finally, note that if some \(n_{i}\) happens to be smaller than \(m\), then the tensor has partition rank at most \(m<G_{d}(r)\). ## 4. Order-3 tensors and invariant theory In this section, we focus on \(d=3\) and follow a construction suggested to us by Harm Derksen. In this case, partition rank equals slice rank. The following is well-known, but we include a quick proof. **Lemma 4.1**.: _The tensors \(T\in\mathbb{C}^{n}\otimes\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) of slice rank strictly less than \(n\) are contained in the nullcone for the action of the group \(G:=\operatorname{SL}_{n}\times\operatorname{SL}_{n}\times\operatorname{SL}_{n}\)._ Here the nullcone is the set of all vectors on which all \(G\)-invariant polynomials vanish. 
Proof.: For such a tensor there exist linear subspaces \(V_{1},V_{2},V_{3}\subseteq\mathbb{C}^{n}\) with \(\dim(V_{1})+\dim(V_{2})+\dim(V_{3})<n\) such that \[T\in V_{1}\otimes\mathbb{C}^{n}\otimes\mathbb{C}^{n}+\mathbb{C}^{n}\otimes V_{2}\otimes\mathbb{C}^{n}+\mathbb{C}^{n}\otimes\mathbb{C}^{n}\otimes V_{3}.\] After linear coordinate changes in the individual tensor factors, we may assume that \(V_{i}\) is spanned by the first \(n_{i}\) basis vectors. Now consider the triple \(\lambda:=(\lambda_{1},\lambda_{2},\lambda_{3})\) of \(1\)-parameter subgroups in \(\operatorname{SL}_{n}\) defined by \[\lambda_{i}(t)=\operatorname{diag}(t^{(n-n_{i})},\ldots,t^{(n-n_{i})},t^{-n_{i}},\ldots,t^{-n_{i}})\] where there are \(n_{i}\) copies of \(t^{(n-n_{i})}\) and \(n-n_{i}\) copies of \(t^{-n_{i}}\), so that \(\det(\lambda_{i}(t))=1\) as desired. Then one sees that \(\lambda(t)\) acts by some power \(t^{a}\) on each standard basis vector in \(\mathbb{C}^{n}\otimes\mathbb{C}^{n}\otimes\mathbb{C}^{n}\), and that \(a\geq n-n_{1}-n_{2}-n_{3}>0\) for all basis vectors that have a nonzero coefficient in \(T\). Hence \(\lambda(t)T\to 0\) for \(t\to 0\) and all \(G\)-invariant polynomials vanish on \(T\). For the following result we refer to [1], where this invariant is called \(F_{k}\). **Proposition 4.2**.: _If \(n=k^{2}\), then there exists a nonzero, homogeneous, \(G\)-invariant polynomial on \(\mathbb{C}^{n}\otimes\mathbb{C}^{n}\otimes\mathbb{C}^{n}\) of degree \(k^{3}\)._ **Corollary 4.3**.: _For \(d=3\), in Theorem 1.3, we can take \(m=O(r\sqrt{r})\) for \(r\to\infty\)._ Proof.: Let \(k\) be the smallest positive integer satisfying \(k^{2}\geq r+1\) and set \(n:=k^{2}\). By Proposition 4.2, there exists a nonzero \(G\)-invariant polynomial \(f\) of degree \(k^{3}\) on \(\mathbb{C}^{n}\otimes\mathbb{C}^{n}\otimes\mathbb{C}^{n}\). By Lemma 4.1, this polynomial vanishes on tensors of slice rank at most \(n-1\), hence in particular on tensors of slice rank at most \(r\). Clearly, \(\deg(f)=O(r\sqrt{r})\). ## 5. Deduction of the restriction result for the rank of polynomials In this section we deduce Theorem 1.4 from Theorem 1.1. We begin by establishing that we can deduce bounds on the partition rank of a tensor from bounds on the Schmidt rank of a polynomial and conversely. This correspondence is well known from the work of Kazhdan and Ziegler [10]; we include it here for the convenience of the reader. As in the statement of Theorem 1.4 we assume that \(\operatorname{char}K=0\) or \(\operatorname{char}K>d\). Recall that the space of homogeneous polynomials of degree \(d\) in \(x_{1},\dots,x_{n}\) is naturally isomorphic to the symmetric power \(S^{d}K^{n}\), and consider the natural linear map determined by \[\varphi:K^{n}\otimes\dots\otimes K^{n}\to S^{d}K^{n},\ v_{1}\otimes\dots\otimes v_{d}\mapsto v_{1}\cdots v_{d}.\] It clearly maps any tensor of partition rank at most \(1\) to a polynomial of Schmidt rank at most \(1\), and hence, by linearity, a tensor of partition rank at most \(r\) to a polynomial of Schmidt rank at most \(r\). Conversely, we have a linear map determined by \[\psi:S^{d}K^{n}\to K^{n}\otimes\dots\otimes K^{n},\ v_{1}\cdots v_{d}\mapsto\sum_{\pi\in S_{d}}v_{\pi(1)}\otimes\dots\otimes v_{\pi(d)},\] which is a linear isomorphism to the space of _symmetric_ tensors in \(K^{n}\otimes\dots\otimes K^{n}\) with inverse \(\varphi/d!\) restricted to that space of symmetric tensors. 
This maps a polynomial of Schmidt rank at most \(1\), given as \(Q\cdot R\) with \(Q\) of degree \(e\) and \(R\) of degree \(d-e\), to a tensor of partition rank at most \(\binom{d}{e}\leq\binom{d}{\lfloor d/2\rfloor}=:D\). Again by linearity, this map sends a polynomial of Schmidt rank at most \(r\) to a tensor of partition rank at most \(r\cdot D\). Proof of Theorem 1.4.: Suppose that \(P\) is a homogeneous polynomial of degree \(d\) in \(x_{1},\dots,x_{n}\) and \(\operatorname{rk}(P[U])\leq r\) for all \(U\subseteq[n]\) of size at most \(dF_{d}(rD)\). Then \(\psi(P)=:T\) is a (symmetric) tensor. Consider any \(d\)-tuple \(U_{1},\dots,U_{d}\) of subsets of \([n]\), each of size at most \(F_{d}(rD)\), and take \(U:=U_{1}\cup\dots\cup U_{d}\), a set of size at most \(dF_{d}(rD)\). Then by our assumption \(\varphi(T[U,\dots,U])=d!P[U]\) has Schmidt rank at most \(r\). It follows that \(T[U,\dots,U]=\psi(P[U])\) has partition rank at most \(rD\), and hence _a fortiori_ so does \(T[U_{1},\dots,U_{d}]\). We conclude that \(T\) itself has partition rank at most \(G_{d}(rD)\), and therefore \(P=\varphi(T)/d!\) has Schmidt rank at most \(G_{d}(rD)\).
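As a quick numerical sanity check of the dimension count in the proof of Theorem 3.1 above, one can compare the logarithms of \(\dim P_{2m}(\pi_{r})\) and \(\dim P_{m}(\mathbb{C}^{[n]^{d}})\) for small \(d\) and \(r\) with the parameter choices made there. The short script below (plain Python, using the log-Gamma function since the binomial coefficients themselves are astronomically large) is only an illustration of the counting argument and plays no role in the proofs.

```python
# Compare log dim P_{2m}(pi_r) with log dim P_m(C^{[n]^d}) for n = 2^{d+3} r
# and m = n^{2d}, as in the proof of Theorem 3.1.  The dimension of the space
# of homogeneous polynomials of degree k in N variables is binom(k + N - 1, k);
# we compare logarithms via lgamma because the numbers are far too large to
# write out.  Illustration only.
from math import lgamma, comb

def log_binom(a, b):
    """Natural log of binom(a, b) for huge nonnegative integers a >= b."""
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

def dimension_gap(d, r):
    n = 2 ** (d + 3) * r
    m = n ** (2 * d)
    # dim pi_r = sum over nonempty proper I of r * (n^{|I|} + n^{d - |I|})
    S = sum(comb(d, s) * r * (n ** s + n ** (d - s)) for s in range(1, d))
    lhs = log_binom(2 * m + S - 1, S - 1)        # log dim P_{2m}(pi_r)
    rhs = log_binom(m + n ** d - 1, n ** d - 1)  # log dim P_m(C^{[n]^d})
    return lhs < rhs

for d in (3, 4):
    print(d, dimension_gap(d, 1))   # expect True in both cases
```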
2307.16219
Unsupervised Decomposition Networks for Bias Field Correction in MR Image
The bias field, which is caused by imperfect MR devices or imaged objects, introduces intensity inhomogeneity into MR images and degrades the performance of MR image analysis methods. Many retrospective algorithms have been developed to facilitate bias correction, among which deep learning-based methods perform best. However, in the training phase, supervised deep learning-based methods rely heavily on synthesized bias fields. As the formation of the bias field is extremely complex, it is difficult to mimic the true physical properties of MR images with synthesized data. Bias field correction and image segmentation are strongly related: the segmentation map is obtained precisely by decoupling the bias field from the original MR image, and conversely the segmentation map indicates the bias values. Thus, we propose novel unsupervised decomposition networks that are trained only with biased data to obtain bias-free MR images. The networks are made up of a segmentation part, which predicts the probability of every pixel belonging to each class, and an estimation part, which calculates the bias field; the two parts are optimized alternately. Furthermore, loss functions based on the combination of fuzzy clustering and the multiplicative bias field are devised. The proposed loss functions introduce smoothness of the bias field and construct soft relationships among different classes under intra-consistency constraints. Extensive experiments demonstrate that the proposed method can accurately estimate bias fields and produce better bias correction results. The code is available at: https://github.com/LeongDong/Bias-Decomposition-Networks
Dong Liang, Xingyu Qiu, Kuanquan Wang, Gongning Luo, Wei Wang, Yashu Liu
2023-07-30T12:58:59Z
http://arxiv.org/abs/2307.16219v1
# Unsupervised Decomposition Networks for Bias Field Correction in MR Image ###### Abstract The bias field, which is caused by imperfect MR devices or imaged objects, introduces intensity inhomogeneity into MR images and degrades the performance of MR image analysis methods. Many retrospective algorithms have been developed to facilitate bias correction, among which deep learning-based methods perform best. However, in the training phase, supervised deep learning-based methods rely heavily on synthesized bias fields. As the formation of the bias field is extremely complex, it is difficult to mimic the true physical properties of MR images with synthesized data. Bias field correction and image segmentation are strongly related: the segmentation map is obtained precisely by decoupling the bias field from the original MR image, and conversely the segmentation map indicates the bias values. Thus, we propose novel unsupervised decomposition networks that are trained only with biased data to obtain bias-free MR images. The networks are made up of a segmentation part, which predicts the probability of every pixel belonging to each class, and an estimation part, which calculates the bias field; the two parts are optimized alternately. Furthermore, loss functions based on the combination of fuzzy clustering and the multiplicative bias field are devised. The proposed loss functions introduce smoothness of the bias field and construct soft relationships among different classes under intra-consistency constraints. Extensive experiments demonstrate that the proposed method can accurately estimate bias fields and produce better bias correction results. The code is available at [https://github.com/LeongDong/Bias-Decomposition-Networks](https://github.com/LeongDong/Bias-Decomposition-Networks). Keywords:Bias field Unsupervised learning MRI Intensity inhomogeneity. ## 1 Introduction Magnetic resonance imaging (MRI) techniques provide abundant anatomical details, which are critical to precise diagnosis and prognosis. The bias field is a common phenomenon in MR images, created by imperfect MR devices or imaged objects. It introduces artifactual signal inhomogeneity, so that the intensity within the same tissue varies smoothly across the MR image, which could degrade the subsequent
2303.13696
Adaptive Multi-scale Online Likelihood Network for AI-assisted Interactive Segmentation
Existing interactive segmentation methods leverage automatic segmentation and user interactions for label refinement, significantly reducing the annotation workload compared to manual annotation. However, these methods lack quick adaptability to ambiguous and noisy data, which is a challenge in CT volumes containing lung lesions from COVID-19 patients. In this work, we propose an adaptive multi-scale online likelihood network (MONet) that adaptively learns in a data-efficient online setting from both an initial automatic segmentation and user interactions providing corrections. We achieve adaptive learning by proposing an adaptive loss that extends the influence of user-provided interaction to neighboring regions with similar features. In addition, we propose a data-efficient probability-guided pruning method that discards uncertain and redundant labels in the initial segmentation to enable efficient online training and inference. Our proposed method was evaluated by an expert in a blinded comparative study on COVID-19 lung lesion annotation task in CT. Our approach achieved 5.86% higher Dice score with 24.67% less perceived NASA-TLX workload score than the state-of-the-art. Source code is available at: https://github.com/masadcv/MONet-MONAILabel
Muhammad Asad, Helena Williams, Indrajeet Mandal, Sarim Ather, Jan Deprest, Jan D'hooge, Tom Vercauteren
2023-03-23T22:20:56Z
http://arxiv.org/abs/2303.13696v2
# Adaptive Multi-scale Online Likelihood Network for AI-assisted Interactive Segmentation ###### Abstract Existing interactive segmentation methods leverage automatic segmentation and user interactions for label refinement, significantly reducing the annotation workload compared to manual annotation. However, these methods lack quick adaptability to ambiguous and noisy data, which is a challenge in CT volumes containing lung lesions from COVID-19 patients. In this work, we propose an adaptive multi-scale online likelihood network (MONet) that adaptively learns in a data-efficient online setting from both an initial automatic segmentation and user interactions providing corrections. We achieve adaptive learning by proposing an adaptive loss that extends the influence of user-provided interaction to neighboring regions with similar features. In addition, we propose a data-efficient probability-guided pruning method that discards uncertain and redundant labels in the initial segmentation to enable efficient online training and inference. Our proposed method was evaluated by an expert in a blinded comparative study on COVID-19 lung lesion annotation task in CT. Our approach achieved 5.86% higher Dice score with 24.67% less perceived NASA-TLX workload score than the state-of-the-art. Source code is available at: [https://github.com/masadcv/MONet-MONAILabel](https://github.com/masadcv/MONet-MONAILabel) ## 1 Introduction Deep learning methods for automatic lung lesion segmentation from CT volumes have the potential to alleviate the burden on clinicians in assessing lung damage and disease progression in COVID-19 patients [20, 21, 22]. However, these methods require large amounts of manually labeled data to achieve the level of robustness required for their clinical application [5, 8, 25, 23]. Manual labeling of CT volumes is time-consuming and may increase the workload of clinicians. Additionally, applying deep learning-based segmentation models to data from new unseen sources can result in suboptimal lesion segmentation due to unseen acquisition devices/parameters, variations in patient pathology, or future coronavirus variants resulting in new appearance characteristics or new lesion pathologies [16]. To address this challenge, interactive segmentation methods that can quickly adapt to such changing settings are needed. These can be used either by end-users or algorithm developers to quickly expand existing annotated datasets and enable agile retraining of automatic segmentation models [4]. _Related work._ Interactive segmentation methods for Artificial Intelligence (AI) assisted annotation have shown promising applications in the existing literature [14, 18, 26, 24]. BIFSeg [24] utilizes a bounding box and scribbles with convolutional neural network (CNN) image-specific fine-tuning to segment potentially _unseen_ objects of interest. MIDeepSeg [14] incorporates user-clicks with the input image using exponential geodesic distance. However, BIFSeg, MIDeepSeg, and similar deep learning-based methods exploit large networks that do not adapt rapidly to new data examples in an online setting due to the elevated computational requirements. Due to their quick adaptability and efficiency, a number of existing online likelihood methods have been applied as interactive segmentation methods [2, 3, 27]. DybaORF [27] utilizes hand-crafted features with dynamically changing weights based on interactive labels' distribution to train a Random Forest classifier. 
ECONet [2] improves online learning with a shallow CNN that jointly learns both features and classifier to outperform previous online likelihood inference methods. While ECONet is, to the best of our knowledge, the only online learning method that addresses COVID-19 lung lesion segmentation, it is limited to learning from user scribbles only. This means that it requires a significant amount of user interaction to achieve expert-level accuracy. Additionally, the model uses a single convolution for feature extraction, limiting its accuracy to a specific scale of pathologies. For each CT volume, the model is trained from scratch, resulting in a lack of prior knowledge about lesions. _Contributions._ To overcome limitations of existing techniques, we propose an adaptive multi-scale online likelihood network (MONet) for AI-assisted interactive segmentation of lung lesions in CT volumes from COVID-19 patients. Our contributions are three-fold: 1. Our multi-scale online likelihood network (MONet), consisting of a multi-scale feature extractor, extracts relevant features at different scales for improved accuracy; 2. Our adaptive online loss uses weights from a scaled negative exponential geodesic distance from user-scribbles, enabling adaptive learning from both initial segmentation and user-provided corrections (Fig. 1); 3. Our probability-guided pruning approach, where uncertainty from the initial segmentation model is used, prunes ambiguous online training data. Figure 1: Adaptive online training weights: (a) input, (b) foreground / background scribbles, (c) foreground and (d) background weights using \(\tau\)=0.2 in Eq. (2). Our experimental results show that adaptively learned MONet outperforms existing state-of-the-art, achieving 5.86% higher Dice score with 24.67% less perceived NASA-TLX workload score evaluated by an expert annotator. ## 2 Method Given an input image volume, \(I\), a pre-trained CNN segmentation model generates an automatic segmentation \(C\) with associated probabilities \(P\). When using data from a new domain, the automated network may fail to properly segment foreground/background objects. To improve this, the user provides scribbles-based interaction indicating corrected class labels for a subset of voxels in the image \(I\). Let \(\mathcal{S}=\mathcal{S}^{f}\cup\mathcal{S}^{b}\) represent this set of scribbles, where \(\mathcal{S}^{f}\) and \(\mathcal{S}^{b}\) denote the foreground and background scribbles, respectively, and \(\mathcal{S}^{f}\cap\mathcal{S}^{b}=\emptyset\). Fig. 2 (a) shows scribbles \(\mathcal{S}\), along with the initial segmentation \(C\) and probabilities \(P\). #### 2.0.1 Multi-scale Online Likelihood Network. Our proposed multi-scale online likelihood network (MONet), shown in Fig. 2 (b), uses a multi-scale feature extractor that applies a 3D convolution at various kernel sizes to capture spatial information at different scales. The output of each scale is concatenated and fed to a fully-connected classifier, which infers the likelihood for background/foreground classification of the central voxel in the input patch. Each layer in MONet is followed by a batch normalization and ReLU activation. #### 2.0.2 Adaptive Loss for Online Learning. The scribbles \(\mathcal{S}\) only provide sparse information for online learning. However, these corrections are likely also applicable to neighboring voxels with similar appearance features, thereby providing an extended source of training information. 
Figure 2: Adaptive learning for interactive segmentation: (a) training and inference of MONet using adaptive loss and probability-guided pruning; (b) architecture of our multi-scale online likelihood network (MONet). Concurrently, the initial automated segmentation \(C\) will often provide reliable results away from the scribbles. To extend the influence of the scribbles \(\mathcal{S}\) while preserving the quality of the initial segmentation \(C\), we propose a spatially-varying adaptive online loss: \[\mathcal{L}=-\sum_{i}\left[(1-W_{i})\mathcal{L}_{i}^{C}+W_{i}\mathcal{L}_{i}^{\mathcal{S}}\right], \tag{1}\] where \(i\) is a voxel index, and \(\mathcal{L}^{C}\) and \(\mathcal{L}^{\mathcal{S}}\) are individual loss terms for learning from the automated segmentation \(C\) and the user-provided correction scribbles \(\mathcal{S}\), respectively. \(W\) are spatially-varying interaction-based weights defined using the geodesic distance \(D\) between voxel \(i\) and the scribbles \(\mathcal{S}\): \[W_{i}=\exp\left(\frac{-D(i,S,I)}{\tau}\right), \tag{2}\] where the temperature term \(\tau\) controls the influence of \(W\) in \(I\). The geodesic distance to the scribbles is defined as \(D(i,\mathcal{S},I)=\min_{j\in\mathcal{S}}d(i,j,I)\), where \(d(i,j,I)=\min_{p\in\mathcal{P}_{i,j}}\int_{0}^{1}\|\nabla I(p(x))\cdot\mathbf{u}(x)\|\,dx\) and \(\mathcal{P}_{i,j}\) is the set of all possible differentiable paths in \(I\) between voxels \(i\) and \(j\). A feasible path \(p\) is parameterized by \(x\in[0,1]\). We denote by \(\mathbf{u}(x)=p^{\prime}(x)/\left\|p^{\prime}(x)\right\|\) the unit vector tangent to the direction of the path \(p\). We further let \(D=\infty\) for \(\mathcal{S}=\emptyset\). _Dynamic Label-balanced Cross-Entropy Loss._ User-scribbles for online interactive segmentation suffer from dynamically changing class imbalance [2]. Moreover, lung lesions in CT volumes usually occupy a small subset of all voxels, introducing additional label imbalance and hence reducing their impact on imbalanced online training. To address these challenges, we utilize a label-balanced cross-entropy loss [2, 10, 11], with dynamically changing class weights derived from the segmentation and scribble distributions. Given an online model with parameters \(\theta\), the foreground likelihood from this model is \(p_{i}=P(s_{i}=1|I,\theta)\). Then, the segmentation-balanced and scribble-balanced cross-entropy terms are: \[\mathcal{L}_{i}^{C} =\alpha^{f}y_{i}^{C}\log p_{i}+\alpha^{b}(1-y_{i}^{C})\log(1-p_{i}), \tag{3}\] \[\mathcal{L}_{i}^{S} =\beta^{f}y_{i}^{\mathcal{S}}\log p_{i}+\beta^{b}(1-y_{i}^{\mathcal{S}})\log(1-p_{i}), \tag{4}\] where \(\alpha\) and \(\beta\) are class weights for labels \(C\) and scribbles \(\mathcal{S}\) that are defined by the label and scribble distributions during online interaction as: \(\alpha^{f}=|\mathcal{T}|/\big{|}C^{f}\big{|}\), \(\alpha^{b}=|\mathcal{T}|/\big{|}C^{b}\big{|}\), \(\beta^{f}=|\mathcal{T}|/\big{|}\mathcal{S}^{f}\big{|}\), \(\beta^{b}=|\mathcal{T}|/\big{|}\mathcal{S}^{b}\big{|}\) and \(|\mathcal{T}|=|C|+|\mathcal{S}|\). \(y_{i}^{C}\) and \(y_{i}^{\mathcal{S}}\) represent labels in \(C\) and \(\mathcal{S}\), respectively. The patch-based training approach from [2] is used to first extract K\(\times\)K\(\times\)K patches from \(I\) centered around each voxel in \(\mathcal{S}\) and \(C\) and train MONet using Eq. (1). Once learned, efficient online inference from MONet is achieved by applying it to the whole input CT volumes as a fully convolutional network [12]. 
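For concreteness, a minimal PyTorch-style sketch of this adaptive loss is given below. It assumes the geodesic distance map \(D(i,\mathcal{S},I)\) has already been computed (e.g. with a GPU geodesic distance transform) and, for simplicity, applies the scribble term only at scribbled voxels; all tensor names and shapes are illustrative and are not taken from the released implementation.

```python
# A minimal sketch of the adaptive online loss in Eqs. (1)-(4), assuming a
# precomputed geodesic distance map to the scribbles.  Per-voxel, simplified.
import torch

def adaptive_online_loss(p_fg, y_seg, y_scrib, scrib_mask, geo_dist, tau=0.3, eps=1e-6):
    """p_fg:       (N,) foreground likelihoods from the online model
       y_seg:      (N,) initial-segmentation labels C (0/1)
       y_scrib:    (N,) scribble labels S (0/1), valid only where scrib_mask == 1
       scrib_mask: (N,) 1 at scribbled voxels, 0 elsewhere
       geo_dist:   (N,) geodesic distance of each voxel to the scribbles
       tau:        temperature controlling the spatial influence of scribbles"""
    w = torch.exp(-geo_dist / tau)                          # Eq. (2)

    # dynamic label-balancing weights (Eqs. 3-4); |T| = |C| + |S|
    n_total = y_seg.numel() + scrib_mask.sum()
    a_f = n_total / (y_seg.sum() + eps)
    a_b = n_total / ((1 - y_seg).sum() + eps)
    b_f = n_total / ((y_scrib * scrib_mask).sum() + eps)
    b_b = n_total / (((1 - y_scrib) * scrib_mask).sum() + eps)

    log_p, log_1p = torch.log(p_fg + eps), torch.log(1 - p_fg + eps)
    loss_c = a_f * y_seg * log_p + a_b * (1 - y_seg) * log_1p
    loss_s = (b_f * y_scrib * log_p + b_b * (1 - y_scrib) * log_1p) * scrib_mask

    return -((1 - w) * loss_c + w * loss_s).sum()           # Eq. (1)
```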
#### 2.0.3 Improving Efficiency with Probability-guided Pruning. MONet is applied as an online likelihood learning method, where the online training happens with an expert human annotator in the loop, which makes online training efficiency critical. We observe that the automatic segmentation models provide dense labels \(C\) which may significantly impact online training and inference performance. \(C\) may contain ambiguous predictions for new data, and a number of voxels in \(C\) may provide redundant labels. To improve online efficiency while preserving accuracy during training, we prune labels as \(C^{*}=\mathcal{M}\odot C\) where \(\mathcal{M}_{i}\) is set to 1 if \(P_{i}\geq\zeta\) and \(U_{i}\geq\eta\) and 0 otherwise. \(\zeta\in[0,1]\) is the minimum confidence to preserve a label, \(U_{i}\in[0,1]\) is a uniformly distributed random variable, and \(\eta\in[0,1]\) is the fraction of samples to prune. ## 3 Experimental Validation Table 1 outlines the different state-of-the-art interactive segmentation methods and their extended variants that we introduce for fair comparison. We compare our proposed MONet with ECONet [2] and MIDeepSeg [14]. As our proposed Eq. (2) is inspired by the exponential geodesic distance from MIDeepSeg [14], we introduce MIDeepSegTuned, which utilizes our proposed addition of a temperature term \(\tau\). Moreover, to show the importance of multi-scale features, we include MONet-NoMS, which uses features from a single 3D convolution layer. We utilize MONAI Label to implement all online likelihood methods [7]. For methods requiring an initial segmentation, we train a 3D UNet [6] using MONAI [17] with features [32, 32, 64, 128, 256, 32]. Output from each method is regularized using GraphCut optimization. We also compare against a baseline interactive GraphCut (IntGraphCut) implementation that updates the UNet output with scribbles based on [3] and then performs GraphCut optimization. We utilize a GPU-based implementation of the geodesic distance transform [1] in Eq. (2), whereas MIDeepSeg uses a CPU-based implementation. We use an NVIDIA Tesla V100 GPU with 32 GB memory for all our experiments. Comparison of accuracy for each method is made using Dice similarity (Dice) and average symmetric surface distance (ASSD) metrics against ground truth annotations [2, 14]. Moreover, we compare performance using execution time (Time), including online training and inference time, average full annotation time (FA-Time), and number of voxels with scribbles (S) needed for a given accuracy. _Data_. To simulate a scenario where the automatic segmentation model is trained on data from a different source than it is tested on, we utilize two different COVID-19 CT datasets. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Method** & **Technique** & **Initial** & **Multi** & **Adaptive** & **Temp.** & **Dice** & **ASSD** \\ & & **Seg.** & **Scale** & **Loss** & (\(\tau\)) & **(\%)** & \\ \hline \hline **MONet (proposed)** & **OL** & ✓ & ✓ & ✓ & ✓ & **77.77** & **11.82** \\ **MONet-NoMS** & **OL** & ✓ & ✗ & ✓ & ✓ & 77.06 & 13.01 \\ **ECONet[2]** & **OL** & ✗ & ✗ & ✗ & ✗ & 77.02 & 20.19 \\ **MIDeepSegTuned[14]** & **PP** & ✓ & ✗ & ✗ & ✓ & 76.00 & 20.16 \\ **MIDeepSeg[14]** & **PP** & ✓ & ✗ & ✗ & ✗ & 56.85 & 33.25 \\ **IntGraphCut** & **PP** & ✓ & ✗ & ✗ & ✗ & 68.58 & 28.64 \\ \hline \hline \end{tabular} \end{table} Table 1: State-of-the-art evaluated comparison methods, showing improvement in accuracy (Dice and ASSD) when using different features. Features in blue text are proposed in this paper. Key: OL - online learning, PP - post-processing. 
The dataset from the COVID-19 CT lesion segmentation challenge [21] is used for training and validation of the 3D UNet for the automatic segmentation task and for patch-based pre-training of MONet/MONet-NoMS/ECONet. This dataset contains binary lung lesion segmentation labels for 199 CT volumes (160 training, 39 validation). We use UESTC-COVID-19 [25], a dataset from a different source, for the experimental evaluation of interactive segmentation methods (test set). This dataset contains 120 CT volumes with lesion labels, of which 50 are by expert annotators and 70 are by non-expert annotators. To compare robustness of the proposed method against expert annotators, we only use the 50 expert-labelled CT volumes. _Training Parameters._ Training of the 3D UNet utilized a learning rate (lr) of \(1e^{-4}\) for 1000 epochs; MONet/MONet-NoMS offline pre-training used 50 epochs with \(\mathrm{lr}=1e^{-3}\), dropped by a factor of 0.1 at the 35th and 45th epoch. Online training for MONet, MONet-NoMS and ECONet [2] used 200 epochs with \(\mathrm{lr}=1e^{-2}\) set using a cosine annealing scheduler [13]. Dropout of 0.3 was used for all fully-connected layers in online models. Each layer size in ECONet and MONet-NoMS was selected by repeating line search experiments from [2]: (i) input patch/3D convolution kernel size of \(K=9\), (ii) 128 input 3D convolution filters and (iii) fully-connected sizes of 32\(\times\)16\(\times\)2. For MONet, we utilize four input 3D convolutions with multi-scale kernel sizes \(K=[1,3,5,9]\), each containing 32 filters (i.e., a total of 128 filters, same as (ii)). We utilize the same fully-connected sizes as in (iii) above. Parameters \(\zeta=0.8\) and \(\eta=0.98\) are selected empirically. We utilize \(\tau=0.3\) for MONet, MIDeepSegTuned and MONet-NoMS. We use GraphCut regularization, where \(\lambda=2.5\) and \(\sigma=0.15\) [3]. Search experiments used for selecting \(\tau\), \(\lambda\), \(\sigma\) are shown in the supplementary material. #### 3.2.2 Quantitative Comparison using Synthetic Scribbler. We employ the synthetic scribbler method from [2, 26] where mis-segmented regions in the inferred segmentations are identified by comparison to the ground truth segmentations. Table 2 and Fig. 3 present a quantitative comparison of the methods using the synthetic scribbler. They show that MONet outperforms all existing state-of-the-art in terms of accuracy with the least number of synthetic scribbled voxels. In particular, MONet outperforms both MIDeepSeg [14] and MIDeepSegTuned, where adaptive online learning enables it to quickly adapt and refine segmentations. 
\begin{table} \begin{tabular}{l c c c c} \hline **Method** & **Dice (\%)** & **ASSD** & **Time (s)** & **Scribbles** \\ \hline \hline **MONet (proposed)** & **77.77\(\pm\)6.84** & **11.82\(\pm\)12.83** & 6.18\(\pm\)2.42 & **20\(\pm\)24** \\ MONet-NoMS & 77.06\(\pm\)7.27 & 13.01\(\pm\)15.29 & 7.76\(\pm\)8.16 & 20\(\pm\)24 \\ ECONet[2] & 77.02\(\pm\)6.94 & 20.19\(\pm\)14.71 & 1.46\(\pm\)1.22 & 2283\(\pm\)2709 \\ MIDeepSegTuned[14] & 76.00\(\pm\)7.37 & 20.16\(\pm\)22.57 & 7.97\(\pm\)2.47 & 32\(\pm\)17 \\ MIDeepSeg[14] & 56.85\(\pm\)14.25 & 33.25\(\pm\)25.26 & 6.26\(\pm\)1.46 & 436\(\pm\)332 \\ **IntGraphCut** & 68.58\(\pm\)9.09 & 28.64\(\pm\)27.36 & **0.11\(\pm\)0.04** & 480\(\pm\)359 \\ \hline \end{tabular} \end{table} Table 2: Quantitative comparison of interactive segmentation methods using the synthetic scribbler, showing mean and standard deviation of Dice, ASSD, Time and synthetic scribble voxels. Figure 3: Validation accuracy using synthetic scribbles. In terms of efficiency, online training and inference of the proposed MONet takes around 6.18 seconds combined, which is 22.4% faster as compared to 7.97 seconds for MIDeepSegTuned. However, it is slower than ECONet and IntGraphCut. MIDeepSeg performs the worst as it is unable to adapt to large variations and ambiguity within lung lesions from COVID-19 patients, whereas by utilizing our proposed Eq. (2) in MIDeepSegTuned, we improve its accuracy. When comparing online learning methods, MONet outperforms MONet-NoMS, where the accuracy is improved due to MONet's ability to extract multi-scale features. The existing state-of-the-art online method ECONet [2] requires significantly more scribbled voxels as it only relies on user-scribbles for online learning. **Performance and Workload Validation by Expert User.** This experiment aims to compare the performance and _perceived_ subjective workload of the proposed MONet with the best performing comparison method MIDeepSegTuned based on [14]. We asked an expert, with 2 years of experience in lung lesion CT, from the Radiology Department, Oxford University Hospitals NHS Foundation Trust, to utilize each method for labelling the following pathologies as lung lesions in 10 CT volumes from the UESTC-COVID-19 expert set [25]: ground glass opacity, consolidation, crazy-paving, linear opacities. One CT volume is used by the expert to practice usage of our tool. The remaining 9 CT volumes were presented in a random order, where the perceived workload was evaluated by the expert halfway (after 5 segmentations) and at the end. We use the National Aeronautics and Space Administration Task Load Index (NASA-TLX) [9] as per previous interactive segmentation studies [15, 19, 28]. The NASA-TLX asks the expert to rate the task based on six factors: performance, frustration, effort, and mental, physical and temporal demand. The weighted NASA-TLX score is then recorded as the expert answers 15 pair-wise questions rating factors based on importance. 
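For reference, the weighted NASA-TLX score is obtained by rating each of the six factors and weighting the ratings by how often each factor is chosen in the 15 pairwise comparisons; a minimal sketch of this standard computation is given below, with made-up ratings and tallies purely for illustration (it is not part of our evaluation tooling).

```python
# Standard weighted NASA-TLX scoring: six sub-scale ratings in [0, 100] are
# combined using weights derived from 15 pairwise importance comparisons
# (each factor's weight is the number of times it was preferred, divided by 15).
# The ratings and tallies below are invented numbers for illustration only.
def weighted_nasa_tlx(ratings, tallies):
    assert sum(tallies.values()) == 15, "15 pairwise comparisons expected"
    return sum(ratings[f] * tallies[f] for f in ratings) / 15.0

ratings = {"mental": 40, "physical": 15, "temporal": 10,
           "performance": 30, "effort": 45, "frustration": 25}
tallies = {"mental": 3, "physical": 1, "temporal": 1,
           "performance": 4, "effort": 4, "frustration": 2}
print(weighted_nasa_tlx(ratings, tallies))  # overall perceived workload in [0, 100]
```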
In addition, we also recorded accuracy metrics (Dice and ASSD) against ground truth labels in [25], time taken to complete annotation, and whether the expert was able to successfully complete their task within the 10 minutes allocated for each volume. \begin{table} \begin{tabular}{c|c c} \hline **NASA-TLX** & **MONet** & **MIDeepSeg** \\ **weighted scores** & **(proposed)** & **Tuned[14]** \\ \hline \hline **Effort** & **14.67** & 21.33 \\ **Frustration** & **7.67** & 16.67 \\ **Mental Demand** & **13.33** & 15.00 \\ **Performance** & **10.00** & 17.00 \\ **Physical Demand** & **4.67** & 7.00 \\ **Temporal Demand** & 2.00 & **0.00** \\ \hline **Total workload** & **52.33** & 77.00 \\ \hline \end{tabular} \end{table} Table 4: NASA-TLX perceived workload by expert user, showing total workload and individual sub-scale scores. A method with a low score requires less effort, frustration, and mental, temporal and physical demand, with high perceived performance. Table 3 presents an overview for this experiment, where using the proposed MONet, the expert was able to complete 100% of the labelling task, whereas using MIDeepSegTuned they only completed 33.33% within the allocated time. In addition, MONet achieves better accuracy with lower time for complete annotation and less overall perceived workload, with a NASA-TLX score of 52.33% as compared to 77.00% for MIDeepSegTuned. Table 4 shows the individual scores that contribute to overall perceived workload. It shows that using the proposed MONet, the expert perceived reduced workload in all sub-scale scores except temporal demand. We believe this is due to the additional online training/inference overhead for MONet application. Fig. 4 visually compares these results, where MONet results in more accurate segmentation as compared to MIDeepSegTuned. We also note that MONet's ability to apply learned knowledge on the whole volume enables it to also infer small isolated lesions, which MIDeepSegTuned fails to identify. Figure 4: Visual comparison of interactive segmentation results from Section 3. Segmentations are shown with contours on axial plane slices from different cases. ## 4 Conclusion We proposed a multi-scale online likelihood network (MONet) for scribbles-based AI-assisted interactive segmentation of lung lesions in CT volumes from COVID-19 patients. MONet consisted of a multi-scale feature extractor that enabled extraction of relevant features at different scales for improved accuracy. We proposed an adaptive online loss that utilized adaptive weights based on user-provided scribbles that enabled adaptive learning from both an initial automated segmentation and user-provided label corrections. Additionally, we proposed a dynamic label-balanced cross-entropy loss that addressed dynamic class imbalance, an inherent challenge for online interactive segmentation methods. Experimental validation showed that the proposed MONet outperformed the existing state-of-the-art on the task of annotating lung lesions in COVID-19 patients. Validation by an expert showed that the proposed MONet achieved on average 5.86% higher Dice while achieving 24.67% less perceived NASA-TLX workload score than the MIDeepSegTuned method [14]. ## 5 Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016131 (icovid project). This work was also supported by core and project funding from Wellcome / EPSRC [WT203148/Z/16/Z; NS/A000049/1; WT101957; NS/A000027/1]. 
TV is supported by a Medtronic/Royal Academy of Engineering Research Chair [RCSRF1819/7/34]. This project utilized scribbles-based interactive segmentation tools from open source project MONAI Label [7].
2308.07585
Generalized Fourier quasicrystals and almost periodic sets
Let $\mu$ be a positive measure on the real line with locally finite support $\Lambda$ and integer masses such that its Fourier transform in the sense of distributions is a pure point measure. An explicit form is found for an entire almost periodic function with a set of zeros $\Lambda$, taking multiplicities into account. A necessary and sufficient condition for the exponential growth of this function is also found. Our constructions are based on the properties of almost periodic sets on the line. In particular, we find a simple representation of such sets.
Sergii Favorov
2023-08-15T06:16:48Z
http://arxiv.org/abs/2308.07585v1
# Generalized Fourier quasicrystals and almost periodic sets Sergii Yu. Favorov **Abstract.** Let \(\mu\) be a positive measure on the real line with locally finite support \(\Lambda\) and integer masses such that its Fourier transform in the sense of distributions is a pure point measure. An explicit form is found for an entire almost periodic function with a set of zeros \(\Lambda\), taking multiplicities into account. A necessary and sufficient condition for the exponential growth of this function is also found. Our constructions are based on the properties of almost periodic sets on the line. In particular, we find a simple representation of such sets. AMS Mathematics Subject Classification: 42A75, 42A38, 52C23 **Keywords: Fourier quasicrystal, Fourier transform of distribution, pure point measure, almost periodic function, almost periodic sets, entire function with a given zero set** ## 1. Introduction A crystalline measure on \(\mathbb{R}^{d}\) is a complex measure \(\mu\) with discrete locally finite support which is a temperate distribution and whose distributional Fourier transform \(\hat{\mu}\) is also a measure with locally finite support; if, in addition, the measures \(|\mu|\) and \(|\hat{\mu}|\) are temperate distributions, then \(\mu\) is called a Fourier quasicrystal. A Fourier quasicrystal may be considered as a mathematical model for an atomic arrangement having a discrete diffraction pattern. There are many papers devoted to the study of properties of Fourier quasicrystals or, more generally, crystalline measures. For example, one can mention the collections of papers [2], [16], and in particular the basic paper [11]. Measures of the form \[\mu=\sum_{\lambda\in\Lambda}c_{\lambda}\delta_{\lambda},\qquad c_{\lambda}\in \mathbb{N}, \tag{1}\] are the most important examples of Fourier quasicrystals. Recently A. Olevskii and A. Ulanovskii [14], [15] gave a complete description of these measures as zero sets of exponential polynomials with pure imaginary exponents, where \(c_{\lambda}\) is just the multiplicity of the zero \(\lambda\) of the polynomial. In the present paper we give an analog of the above result for measures (1) with the distributional Fourier transform \[\hat{\mu}=\sum_{\gamma\in\Gamma}b_{\gamma}\delta_{\gamma}, \tag{2}\] where \(\Gamma\) is an arbitrary countable set. In this case the corresponding Poisson formula \[\sum_{\lambda\in\Lambda}c_{\lambda}\hat{f}(\lambda)=\sum_{\gamma\in\Gamma}b_{\gamma}f(\gamma)\] also holds for every function \(f\) from the Schwartz class. In order to describe such measures, we use the concept of almost periodic sets, which was introduced by M. Krein and B. Levin ([9], Appendix VI). In modern notation (cf. [12], [17]), a locally finite set \(\Lambda\) with multiplicities \(c_{\lambda}\) at points \(\lambda\in\Lambda\) is almost periodic if the convolution of the measure (1) with every continuous compactly supported function is an almost periodic function. We will write an almost periodic set as a sequence \(A=\{a_{n}\}_{n\in\mathbb{Z}}\), where each point \(\lambda\in\Lambda\) occurs \(c_{\lambda}\) times. Therefore, almost periodic sets are in fact multisets. In Section 2 we give the original definition of almost periodic sets given by Krein and Levin, which is equivalent to the one given above. 
We also prove some properties of almost periodic sets; in particular, we show that such sets have the form \(\{\alpha n+\phi(n)\}_{n\in\mathbb{Z}}\) with \(\alpha>0\) and an almost periodic mapping \(\phi:\,\mathbb{Z}\to\mathbb{R}\). Note that the zero set of every entire almost periodic function is almost periodic (cf.[18]). On the other hand, it was proved in [7] that every almost periodic set \(A\subset\mathbb{R}\) is exactly the zero set of some entire almost periodic function. In Section 3 we consider a measure \(\mu\) of the form (1) under the additional conditions that \(\hat{\mu}\) is a measure and \(|\hat{\mu}|\) is a temperate distribution. If this is the case, the almost periodicity of the measure \(\mu\), in other words, the almost periodicity of the corresponding multiset \(A\), is equivalent to \(\hat{\mu}\) being pure point. Here we find an almost periodic entire function with zero set \(A\) in an explicit form depending only on the \(\gamma\) and \(b_{\gamma}\) from equality (2). We also find a criterion for \(A\) to be the zero set of an almost periodic entire function of exponential growth. Note that according to the Phragmén-Lindelöf principle any entire function bounded on \(\mathbb{R}\) cannot grow less than exponentially. ## 2. Almost periodic sets **Definition 1** (for example, see [1], [10]).: _A continuous function \(g\) on a strip_ \[S=\{z=x+iy:\,-\infty\leq a<y<b\leq+\infty\}\subset\mathbb{C}\] _is almost periodic if for any \(\alpha,\,\beta\) such that \([\alpha,\beta]\subset(a,b)\) and any \(\varepsilon>0\) the set of \(\varepsilon\)-almost periods_ \[E_{\alpha,\beta,\varepsilon}=\{\tau\in\mathbb{R}:\sup_{x\in\mathbb{R},\alpha\leq y\leq\beta}|g(x+\tau+iy)-g(x+iy)|<\varepsilon\}\] _is relatively dense, i.e., \(E_{\alpha,\beta,\varepsilon}\cap(x,x+L)\neq\emptyset\) for all \(x\in\mathbb{R}\) and some \(L\) depending on \(\varepsilon,\alpha,\beta\)._ Just as it was done in [7], we could define an almost periodic set in a strip. But we need it only for the case of sets on the real line, so we can give this definition here in a simplified form. **Definition 2** (M. Krein and B. Levin [9], Appendix VI).: _A discrete locally finite multiset \(A=\{a_{n}\}_{n\in\mathbb{Z}}\subset\mathbb{R}\) is almost periodic if for any \(\varepsilon>0\) the set of its \(\varepsilon\)-almost periods_ \[E_{\varepsilon}=\{\tau\in\mathbb{R}:\,\exists\ \text{ a bijection }\sigma:\mathbb{Z}\to\mathbb{Z}\quad\text{such that }\sup_{n}|a_{n}+\tau-a_{\sigma(n)}|<\varepsilon\} \tag{3}\] _has nonempty intersection with every interval \((x,x+L_{\varepsilon})\)._ Set \(\mu_{A}=\sum_{n}\delta_{a_{n}}\). Clearly, the mass of \(\mu_{A}\) at any point \(x\in\mathbb{R}\) is equal to the multiplicity of this point in the sequence \(\{a_{n}\}_{n\in\mathbb{Z}}\). It was proved in [7] that almost periodicity of \(A\) is equivalent to almost periodicity of the convolution \(\mu_{A}\star\varphi\) for every \(C^{\infty}\)-function \(\varphi(x)\), \(x\in\mathbb{R}\), with compact support. But it is easy to replace \(C^{\infty}\)-functions by continuous functions with compact support. Indeed, take a \(C^{\infty}\)-function \(\varphi\geq 0\) such that \(\varphi(x)\equiv 1\) for \(0<x<1\). If \(\mu_{A}\star\varphi\) is almost periodic, then it is uniformly bounded, hence \(\mu_{A}[x,x+1]<K\) for all \(x\in\mathbb{R}\) with some constant \(K\). For any continuous function \(\psi\) with support in \((0,1)\) one can take \(\varphi\in C^{\infty}\) such that \(\sup_{x\in\mathbb{R}}|\psi(x)-\varphi(x)|<\varepsilon/K\). 
We obtain that every \(\varepsilon\)-almost period of \(\mu_{A}\star\varphi\) is a \(2\varepsilon\)-almost period of \(\mu_{A}\star\psi\). In passing, we have proved the following proposition **Proposition 1** ([7]).: _For any almost periodic set there is \(k_{1}\in\mathbb{N}\) such that \(\#A\cap[x,x+1]\leq k_{1}\). Also, \(\#A\cap[x,x+h)\leq k_{1}(h+1)\)._ Here and below, \(\#H\) means the number of points in the set \(H\). Note that the above proposition implies that \(\mu_{A}(-r,r)=O(r)\) as \(r\to\infty\), therefore \(\mu_{A}\) is a temperate distribution. **Proposition 2**.: _For any almost periodic set there is \(k_{2}\in\mathbb{N}\) such that for every \(h\) and any two half-intervals \([x_{1},x_{1}+h),\,[x_{2},x_{2}+h)\) we have \(|\#A\cap[x_{1},x_{1}+h)-\#A\cap[x_{2},x_{2}+h)|\leq k_{2}\). Also, for every \(x\in\mathbb{R}\), \(h>0,\,M\in\mathbb{N}\)_ \[|\#A\cap[x,x+h)-(1/M)\#A\cap[x,x+Mh)|\leq k_{2}. \tag{4}\] **Proof**. Let \(L_{1},\,E_{1}\) be defined in (3), and \(\tau\in E_{1}\cap[x_{1}-x_{2},L_{1}+x_{1}-x_{2})\). Since \([x_{2},x_{2}+h)+\tau\subset[x_{1},x_{1}+L_{1}+h)\), we see that to each \(a_{n}\in[x_{2},x_{2}+h)\) there is assigned a point \(a_{\sigma(n)}\in[x_{1}-1,x_{1}+L_{1}+h+1)\). Therefore, \[\#A\cap[x_{2},x_{2}+h)\leq\#A\cap[x_{1},x_{1}+h)+\#A\cap[x_{1}-1,x_{1})+\#A\cap[x_{1}+h,x_{1}+h+L_{1}+1).\] By Proposition 1, the last two terms are bounded by \(k_{1}+(L_{1}+2)k_{1}\). The proof of the opposite inequality is the same. To prove the second assertion, we have to add together all the inequalities \[\#A\cap[x,x+h)-k_{2}\leq\#A\cap[x+(m-1)h,x+mh)\leq k_{2}+\#A\cap[x,x+h),\quad m=1,2,\ldots,M.\] **Proposition 3**.: _Let \(A\) be an almost periodic set. There is a strictly positive density \(d\) such that for any \(\eta>0\) and any half-interval \(I\) with length \(l(I)>N_{\eta}\) we have_ \[\left|\frac{\#A\cap I}{l(I)}-d\right|<\eta.\] This result was generalized to all Euclidean spaces in [6]. **Proof of Proposition 3**. Let \(I_{1}=[x_{1},x_{1}+h_{1}),I_{2}=[x_{2},x_{2}+h_{2})\) be two half-intervals such that \(h_{1}/h_{2}=p/q,\,p,q\in\mathbb{N}\). We have \[\frac{\#A\cap I_{1}}{h_{1}}-\frac{\#A\cap I_{2}}{h_{2}}=\frac{\#A\cap I_{1}}{h_{1}}-\frac{\#A\cap qI_{1}}{qh_{1}}+\frac{\#A\cap qI_{1}}{qh_{1}}-\frac{\#A\cap pI_{2}}{ph_{2}}+\frac{\#A\cap pI_{2}}{ph_{2}}-\frac{\#A\cap I_{2}}{h_{2}}.\] Applying Proposition 2, we get \[\left|\frac{\#A\cap I_{1}}{h_{1}}-\frac{\#A\cap I_{2}}{h_{2}}\right|\leq\frac{k_{2}}{h_{1}}+\frac{k_{2}}{qh_{1}}+\frac{k_{2}}{h_{2}}\leq k_{2}\left(\frac{2}{h_{1}}+\frac{1}{h_{2}}\right). \tag{5}\] Take a half-interval \(I^{\prime}=[x_{1},x_{1}+h^{\prime})\) such that \(h_{1}<h^{\prime}<h_{1}+1\) and \(h^{\prime}/h_{2}\) rational. We have \[\left|\frac{\#A\cap I_{1}}{h_{1}}-\frac{\#A\cap I^{\prime}}{h^{\prime}}\right|\leq\frac{\#A\cap[x_{1}+h_{1},x_{1}+h^{\prime})}{h_{1}}+\frac{\#A\cap[x_{1},x_{1}+h^{\prime})}{h_{1}h^{\prime}}.\] By Proposition 1, we obtain \[\left|\frac{\#A\cap I_{1}}{h_{1}}-\frac{\#A\cap I^{\prime}}{h^{\prime}}\right|\leq\frac{k_{1}}{h_{1}}+\frac{k_{1}(h^{\prime}+1)}{h_{1}h^{\prime}}.\] Applying (5) with \(I^{\prime}\) instead of \(I_{1}\), we obtain for all \(I_{1}\), \(I_{2}\) \[\left|\frac{\#A\cap I_{1}}{h_{1}}-\frac{\#A\cap I_{2}}{h_{2}}\right|\leq k_{2}\left(\frac{2}{h_{1}}+\frac{1}{h_{2}}\right)+k_{1}\left(\frac{2}{h_{1}}+\frac{1}{h_{1}h^{\prime}}\right).\] Therefore there is a limit \[d=\lim_{l(I)\to\infty}\frac{\#A\cap I}{l(I)}.\] Since the set \(A\) is relatively dense, this limit is strictly positive. 
**Theorem 1**.: _Let \(A=\{a_{n}\}\subset\mathbb{R}\) be an almost periodic set of density \(d\) such that \(a_{n}\leq a_{n+1}\) for all \(n\in\mathbb{Z}\). Then_ \[a_{n}=n/d+\phi(n)\quad\text{with an almost periodic mapping}\quad\phi:\,\mathbb{Z}\to\mathbb{R}. \tag{6}\] **Remark**. An incorrect proof of this theorem was given in [5]. **Proof of Theorem 1**. After replacing \(A\) with \(dA\) we may suppose \(d=1\). Also, we may suppose that \(a_{0}<a_{1}\). It follows from Proposition 1 that every interval of length \(1\) contains at least one subinterval of length \(1/(2k_{1})\) that does not intersect \(A\). Take \(\varepsilon<\min\{1/(6k_{1}),(a_{1}-a_{0})/3\}\). Divide \(\mathbb{R}\) into an infinite number of disjoint half-intervals \(I_{j}=(t_{j},t_{j+1}]\), \(j\in\mathbb{Z}\) such that \(t_{j+1}-t_{j}<2\) and \(A\cap(t_{j}-2\varepsilon,t_{j}+2\varepsilon)=\emptyset\) for all \(j\). Therefore, if \(\tau\in E_{\varepsilon}\) and \(\sigma\) is any bijection that satisfies (3), then to every \(j\) there corresponds \(\rho(j)\in\mathbb{Z}\) such that \(\sigma\) maps \(A\cap I_{j}\) bijectively onto \(A\cap I_{\rho(j)}\). Let \(\sigma_{j}\) be the monotone increasing bijection of \(A\cap I_{j}\) onto \(A\cap I_{\rho(j)}\). We check that \[|a_{n}+\tau-a_{\sigma_{j}(n)}|<\varepsilon\qquad\forall\,a_{n}\in I_{j}. \tag{7}\] Suppose the contrary. Let \(n_{0}\) be the minimal number such that (7) fails. If \(a_{n_{0}}+\tau+\varepsilon\leq a_{\sigma_{j}(n_{0})}\), then \(a_{n}+\tau+\varepsilon\leq a_{k}\) for all \(n\leq n_{0}\) and \(k\geq\sigma_{j}(n_{0})\), \(a_{n}\in I_{j}\), \(a_{k}\in I_{\rho(j)}\). Therefore, \(k\neq\sigma(n)\) for these numbers, and \(\sigma\) may give a correspondence between points from the set \(\{n\leq n_{0}:\,a_{n}\in I_{j}\}\) and points only from the set \(\{k<\sigma_{j}(n_{0}):\,a_{k}\in I_{\rho(j)}\}\). But by definition of \(\sigma_{j}\), we have \[\#\{n\leq n_{0}:\,a_{n}\in I_{j}\}=\#\{k\leq\sigma_{j}(n_{0}):\,a_{k}\in I_{\rho(j)}\}>\#\{k<\sigma_{j}(n_{0}):\,a_{k}\in I_{\rho(j)}\}.\] We get a contradiction. If \(a_{n_{0}}+\tau\geq a_{\sigma_{j}(n_{0})}+\varepsilon\), then \(a_{n}+\tau\geq a_{k}+\varepsilon\) for all \(n\geq n_{0}\) and \(k\leq\sigma_{j}(n_{0})\), \(a_{n}\in I_{j}\), \(a_{k}\in I_{\rho(j)}\). Therefore, \(k\neq\sigma(n)\) for these numbers, and \(\sigma\) may give a correspondence between points from the set \(\{n\geq n_{0}:\,a_{n}\in I_{j}\}\) and points only from the set \(\{k>\sigma_{j}(n_{0}):\,a_{k}\in I_{\rho(j)}\}\). But \(\#(A\cap I_{j})=\#(A\cap I_{\rho(j)})\), hence by definition of \(\sigma_{j}\), we have \[\#\{n\geq n_{0}:\,a_{n}\in I_{j}\}=\#\{k\geq\sigma_{j}(n_{0}):\,a_{k}\in I_{\rho(j)}\}>\#\{k>\sigma_{j}(n_{0}):\,a_{k}\in I_{\rho(j)}\}.\] We get a contradiction as well. Since the numbers of points in \(A\cap I_{j}\) and \(A\cap I_{\rho(j)}\) coincide, we see that the difference between the indices of the first elements in these sets is the same for all \(j\). Hence to every \(\tau\in E_{\varepsilon}\) there corresponds a number \(h\in\mathbb{Z}\) such that (3) is satisfied with \(\sigma(n)=n+h\). It follows from the definition of \(\tau\) that for all \(k\in\mathbb{N}\) \[\tau-\varepsilon<a_{kh}-a_{(k-1)h}<\tau+\varepsilon.\] Therefore the length of the interval that contains the points with indices from \((k-1)h+1\) to \(kh\) is between \(\tau-\varepsilon\) and \(\tau+\varepsilon\), and the length of the interval that contains the points with indices from \(1\) to \(Nh\) is between \(N(\tau-\varepsilon)\) and \(N(\tau+\varepsilon)\). Since \(d=1\), we get the inequality \(\tau-\varepsilon\leq h\leq\tau+\varepsilon\). 
Set \(\phi(n):=a_{n}-n\). We obtain for all \(n\in\mathbb{Z}\) \[\phi(n+h)-\phi(n)=a_{n+h}-a_{n}-h=a_{\sigma(n)}-(a_{n}+\tau)+(\tau-h).\] Using (3), we obtain \(|\phi(n+h)-\phi(n)|<2\varepsilon\). Therefore, \(h\) is a \(2\varepsilon\)-almost period of the function \(\phi\). The set of \(\varepsilon\)-almost periods \(\tau\) of \(A\) is relatively dense, therefore the set of such integers \(h\) is relatively dense as well. **Corollary 1**.: _For any almost periodic set \(A=\{a_{n}\}\) such that \(0\not\in A\) there is a finite limit_ \[\alpha_{0}=\lim_{N\to\infty}\sum_{|a_{n}|<N}1/a_{n}.\] _Moreover, for any \(z\in\mathbb{C}\setminus A\) the sum_ \[\alpha_{z}=\frac{1}{z-a_{0}}+\sum_{n\in\mathbb{N}\setminus\{0\}}\left[\frac{1}{z-a_{n}}+\frac{1}{z-a_{-n}}\right]\] _converges absolutely._ **Proof**. Let \(A=\{n/d+\phi(n)\}_{n\in\mathbb{Z}}\). Since the numbers \(\phi(n)\) are uniformly bounded, we see that the sums \[\sum_{n\in\mathbb{Z},|a_{n}|<N}\frac{1}{a_{n}}\quad\text{and}\quad\sum_{n\in\mathbb{Z},|n|<N}\frac{1}{n/d+\phi(n)}\] differ in a number of terms that is bounded uniformly in \(N\), and each of these terms tends to \(0\) as \(N\to\infty\). Then \[\sum_{n\in\mathbb{Z},0<|n|<N}\frac{1}{n/d+\phi(n)}=\sum_{n\in\mathbb{N},0<n<N}\frac{\phi(n)+\phi(-n)}{\phi(n)\phi(-n)+n\phi(-n)/d-n\phi(n)/d-(n/d)^{2}}.\] The first assertion follows from the Cauchy criterion. The second one follows from the absolute convergence of the series \[\sum_{n\in\mathbb{N}\setminus\{0\}}\left[\frac{1}{z-a_{n}}+\frac{1}{z-a_{-n}}\right]=\sum_{n\in\mathbb{N}\setminus\{0\}}\left[\frac{2z-\phi(-n)-\phi(n)}{(n/d+\phi(n)-z)(-n/d+\phi(-n)-z)}\right].\qed\] In [9], Appendix VI, M.Krein and B.Levin considered zero sets \(Z_{f}\) of entire almost periodic functions \(f\) of exponential growth. They proved that if \(Z_{f}\subset\mathbb{R}\), then its zeros \(a_{n}\) form an almost periodic set, which satisfies (6) and \[\sup_{\tau\in\mathbb{Z}}\sum_{n\in\mathbb{Z}\setminus\{0\}}n^{-1}[\phi(n+\tau)-\phi(n)]<\infty. \tag{8}\] On the other hand, they proved that any almost periodic set \(A\subset\mathbb{R}\) satisfying conditions (6) and (8) is the set of zeros of an entire almost periodic function of exponential growth. It follows from Theorem 1 that condition (6) can be omitted in the last result. Theorem 1 was generalized by W.Lawton [8] to almost periodic sets in \(\mathbb{R}^{d}\), \(d>1\), whose spectrum is contained in a finitely generated additive group. ## 3. Almost periodic zeros of entire functions In this section we suppose that \(\mu\) is a measure of form (1), which is a temperate distribution and whose Fourier transform \(\hat{\mu}\) is a measure of form (2) such that \(|\hat{\mu}|\) is also a temperate distribution. It follows from [4], Lemma 1, that the multiset \(A=\{a_{n}\}_{n\in\mathbb{Z}}\), where each point \(a_{n}=\lambda\) occurs \(c_{\lambda}\) times, is an almost periodic set. In what follows we will suppose that \(0\not\in A\). Set \[f(z)=(1-z/a_{0})\prod_{n\in\mathbb{N}}(1-z/a_{n})(1-z/a_{-n}). \tag{9}\] It follows from Corollary 1 that \(A\) satisfies Lindelöf's condition, hence, \(f\) is an entire function of exponential type. Remark that for any measure \(\mu\) of form (1) with almost periodic \(A\), the condition "\(\hat{\mu}\) is a measure" implies "\(\hat{\mu}\) is a pure point measure" (cf. [13], Theorem 5.5). 
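As a quick illustration of the symmetric grouping in (9), consider the almost periodic set \(A=\mathbb{Z}+1/2\) (so \(d=1\), \(0\notin A\), \(a_{0}=1/2\), \(a_{n}=n+1/2\), \(a_{-n}=-(n-1/2)\)); for this set the product (9) can be checked to converge to \(\cos\pi z\). The short script below (plain Python) is purely a numerical illustration and plays no role in the arguments.

```python
# Numerical check of the product (9) for the almost periodic set A = Z + 1/2,
# with a_0 = 1/2, a_n = n + 1/2 and a_{-n} = -(n - 1/2); the symmetric partial
# products should converge to cos(pi*z).  Illustration only.
import cmath

def f_partial(z, N):
    val = 1 - z / 0.5                        # factor (1 - z/a_0)
    for n in range(1, N + 1):
        val *= (1 - z / (n + 0.5)) * (1 - z / (-(n - 0.5)))
    return val

z = 0.3 + 0.2j
for N in (10**3, 10**4, 10**5):
    print(N, abs(f_partial(z, N) - cmath.cos(cmath.pi * z)))
# the error decreases roughly like 1/N
```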
Set \[\mathbb{R}_{+}:=\{x\in\mathbb{R}:x>0\},\,\mathbb{R}_{-}:=-\mathbb{R}_{+},\, \mathbb{C}_{+}:=\{z\in\mathbb{C}:\text{Im}\,\,z>0\},\,\mathbb{C}_{-}:=-\mathbb{ C}_{+},\] \[a_{z}(t)=\begin{cases}-2\pi ie^{2\pi itz}&\text{if }t>0,\\ 0&\text{if }t\leq 0,\end{cases}\quad z\in\mathbb{C}_{+},\quad\quad\quad a_{z}(t)= \begin{cases}2\pi ie^{2\pi itz}&\text{if }t<0,\\ 0&\text{if }t\geq 0,\end{cases}\quad z\in\mathbb{C}_{-}.\] It is not hard to check that in the sense of distributions \(\hat{a}_{z}(\lambda)=1/(z-\lambda)\) for \(z\in\mathbb{C}_{+}\cup\mathbb{C}_{-}\). **Proposition 4**.: _Let \(A,\,\mu,\,\hat{\mu}\) be as above. Then for all \(z=x+iy\in\mathbb{C}_{+}\)_ \[\frac{f^{\prime}(z)}{f(z)}=\frac{1}{z-a_{0}}+\sum_{n\in\mathbb{N}}\left[\frac {1}{z-a_{n}}+\frac{1}{z-a_{-n}}\right]=-2\pi i\sum_{\gamma\in\Gamma\cap \mathbb{R}_{+}}b_{\gamma}e^{2\pi i\gamma z}, \tag{10}\] _and for all \(z=x+iy\in\mathbb{C}_{-}\)_ \[\frac{f^{\prime}(z)}{f(z)}=\frac{1}{z-a_{0}}+\sum_{n\in\mathbb{N}}\left[\frac {1}{z-a_{n}}+\frac{1}{z-a_{-n}}\right]=2\pi i\sum_{\gamma\in\Gamma\cap \mathbb{R}_{-}}b_{\gamma}e^{2\pi i\gamma z}. \tag{11}\] _The function \(f^{\prime}(z)/f(z)\) is almost periodic on each line \(y=y_{0}\neq 0\)._ **Proof**. Let \(\varphi(t)\) be any even nonnegative \(C^{\infty}\)-function such that \(\operatorname{supp}\varphi\subset(-1,1)\) and \(\int\varphi(t)dt=1\). Set \(\varphi_{\varepsilon}(t)=\varepsilon^{-1}\varphi(t/\varepsilon)\) for \(\varepsilon>0\). Fix \(z=x+iy\in\mathbb{C}_{+}\). The functions \(a_{z}(t)\star\varphi_{\varepsilon}(t)\) and \(\hat{\alpha}_{z}(\lambda)\hat{\varphi}_{\varepsilon}(\lambda)\) belong to Schwartz space of \(C^{\infty}\)-functions. Therefore, \[(\hat{\mu},a_{z}(t)\star\varphi_{\varepsilon}(t))=(\mu,\hat{\alpha}_{z}( \lambda)\hat{\varphi}_{\varepsilon}(\lambda)). \tag{12}\] Then for any \(T_{0}<\infty\) \[(\hat{\mu},(a_{z}\star\varphi_{\varepsilon})(t))-(\hat{\mu},\alpha_{z}(t)) \tag{13}\] \[=-2\pi i\sum_{\gamma\geq T_{0}}b_{\gamma}e^{2\pi i\gamma z}\int_{-\varepsilon }^{\varepsilon}(e^{-2\pi isz}-1)\varphi_{\varepsilon}(s)ds-2\pi i\sum_{0< \gamma<T_{0}}b_{\gamma}e^{2\pi i\gamma z}\int_{-\varepsilon}^{\varepsilon}(e^ {-2\pi isz}-1)\varphi_{\varepsilon}(s)ds.\] The first sum is majorized by \[2\pi(e^{2\pi cy}+1)\sum_{\gamma\geq T_{0}}|b_{\gamma}|e^{-2\pi\gamma y}. \tag{14}\] Set \(M(t)=\sum_{\gamma\in\Gamma:\,0<\gamma\leq t}|b_{\gamma}|\). Then \[\sum_{\gamma\geq r}|b_{\gamma}|e^{-2\pi\gamma y}=\int_{r}^{\infty}e^{-2\pi ty }dM(t)\leq\lim_{T\to\infty}M(T)e^{-2\pi Ty}+2\pi y\int_{r}^{\infty}e^{-2\pi ty }M(t)dt. \tag{15}\] It is easy to check (cf.[3]) that if \(|\hat{\mu}|\) is a temperate distribution, then \(|\hat{\mu}|(-r,r)=O(r^{\kappa})\) as \(r\to\infty\) with some \(\kappa<\infty\). Hence, (14) is less than any \(\eta>0\) for \(T_{0}\) large enough. The last sum in (13) is less than \[2\pi(e^{2\pi cy}-1)\sum_{0<\gamma<T_{0}}|b_{\gamma}|. \tag{16}\] Since \(\hat{\mu}\) is a measure, we see that \(\sum_{0<\gamma<T_{0}}|b_{\gamma}|<\infty\), therefore (16) is less than \(\eta\) for small \(\varepsilon\). 
Hence we obtain from (12) \[\lim_{\varepsilon\to 0}(\mu,\hat{\alpha}_{z}(\lambda)\hat{\varphi}_{\varepsilon}( \lambda))=(\hat{\mu},\alpha_{z}(t))=-2\pi i\sum_{\gamma\in\Gamma\cap\mathbb{R} _{+}}b_{\gamma}e^{2\pi i\gamma z}.\] On the other hand, we have \[(\mu,\hat{\alpha}_{z}(\lambda)\hat{\varphi}_{\varepsilon}(\lambda))=\frac{ \hat{\varphi}(\varepsilon a_{0})}{z-a_{0}}+\sum_{n\in\mathbb{N}}\left[\frac{ \hat{\varphi}(\varepsilon a_{n})}{z-a_{n}}+\frac{\hat{\varphi}(\varepsilon a_ {-n})}{z-a_{-n}}\right]. \tag{17}\] The function \(\hat{\varphi}(t)\) tends to \(1\) as \(t\to 0\) and \(|\hat{\varphi}(t)|\leq 1\). We have \[\left[\frac{\hat{\varphi}(\varepsilon a_{n})}{z-a_{n}}+\frac{\hat{\varphi}( \varepsilon a_{-n})}{z-a_{-n}}\right]=\hat{\varphi}(\varepsilon a_{-n})\left[ \frac{1}{z-a_{n}}+\frac{1}{z-a_{-n}}\right]+\frac{1}{z-a_{n}}[\hat{\varphi}( \varepsilon a_{n})-\hat{\varphi}(\varepsilon a_{-n})].\] Since \(\hat{\varphi}\) is even, we get with bounded \(\theta(n)\) and \(\phi(n)\) \[\hat{\varphi}(\varepsilon a_{n})-\hat{\varphi}(\varepsilon a_{-n})=\hat{ \varphi}(\varepsilon n+\varepsilon\phi(n))-\hat{\varphi}(-\varepsilon n+ \varepsilon\phi(-n))=\hat{\varphi}^{\prime}(\varepsilon n+\varepsilon\theta( n))\varepsilon|\phi(n)-\phi(-n)|.\] Since \(\hat{\varphi}(t)\) belongs to Schwartz space, we see that \(\hat{\varphi}^{\prime}(t)=O(1/|t|)\) as \(t\to\infty\). Hence for \(\varepsilon\geq 1/|n+\theta(n)|\) \[|\varepsilon[\hat{\varphi}^{\prime}(\varepsilon n+\varepsilon\theta(n))]|\leq C |n|^{-1}\] with a constant \(C<\infty\). The same estimate (with another constant \(C\)) is valid for \(\varepsilon<1/|n+\theta(n)|\), that for all \(n\in\mathbb{N}\) and \(\varepsilon>0\) \[|\hat{\varphi}(\varepsilon a_{n})-\hat{\varphi}(\varepsilon a_{-n})|\leq(C/n )2\sup_{n}|\phi(n)|.\] Hence the right-hand side of (17) for all \(\varepsilon>0\) is majorized by the sum \[\frac{1}{|z-a_{0}|}+\sum_{n\in\mathbb{N}}\left|\frac{1}{z-a_{n}}+\frac{1}{z-a_ {-n}}\right|+\sum_{n\in\mathbb{N}}\frac{C^{\prime}}{n|z-a_{n}|}. \tag{18}\] By Theorem 1, we have \(1/(z-a_{n})=O(1/n)\). Taking into account also Corollary 1, we get the convergence of both sums in (18). Therefore we can go to the limit in (17) as \(\varepsilon\to 0\) and obtain (10). By (15), \(\sum_{\gamma\geq 1}|b_{\gamma}|e^{-2\pi\gamma y_{0}}<\infty\) for \(y_{0}>0\), and \(\sum_{0<\gamma<1}|b_{\gamma}|<\infty\). Therefore the series in right-hand part of (10) absolutely converges, and \(f^{\prime}(z)/f(z)\) is almost periodic on the line \(y=y_{0}\). Furthermore, the measure \(\mu\) is real-valued, hence, \(b_{-\gamma}=\bar{b}_{\gamma}\). Therefore, in the case \(y_{0}<0\) we can apply (10) to the function \(\overline{f(\bar{z})}\) and obtain (11). **Theorem 2**.: _For measures \(\mu\), \(\hat{\mu}\), and the almost periodic set \(A=\{a_{n}\}_{n\in\mathbb{Z}}\) as above there is the almost periodic entire function of the form_ \[F(z)=e^{g(z)}f(z), \tag{19}\] _where \(f(z)\) is defined in (9) and_ \[g(z):=\sum_{\gamma\in\Gamma,0<\gamma<1}b_{\gamma}\frac{e^{2\pi i\gamma z}-1}{ \gamma}. \tag{20}\] **Proof**. The sum \(\sum_{\gamma\in\Gamma\cap\mathbb{R}_{+}}b_{\gamma}e^{2\pi i\gamma z}\) is majorized by \[\sum_{\gamma\in\Gamma,o<\gamma<1}|b_{\gamma}|+\sum_{\gamma\geq 1}|b_{\gamma}|e^{ -2\pi\gamma y}.\] Since \(\hat{\mu}\) is a measure, the first sum is finite, the second one is also finite due to (15). 
In particular, the function \[g(z)=\sum_{k=1}^{\infty}\frac{(2\pi iz)^{k}}{k!}\sum_{\gamma\in\Gamma,0<\gamma <1}\gamma^{k-1}b_{\gamma}\] is well-defined and holomorphic for all \(z\in\mathbb{C}\). Using (10) for \(z=x+iy\in\mathbb{C}_{+}\) and changing the order of summation and integration, we get \[\log f(z)-\log f(iy)=\int_{0}^{x}\frac{f^{\prime}(t+iy)}{f(t+iy)}dt=-\left[\sum _{\gamma\in\Gamma\cap\mathbb{R}_{+}}b_{\gamma}e^{-2\pi\gamma y}\frac{e^{2\pi i \gamma x}-1}{\gamma}\right]. \tag{21}\] Also, both sums \[\sum_{\gamma\in\Gamma,\gamma\geq 1}\gamma^{-1}e^{-2\pi\gamma y}\,b_{\gamma}(1-e^{2 \pi i\gamma x})\quad\text{and}\quad\sum_{\gamma\in\Gamma,0<\gamma<1}\frac{e^{-2 \pi\gamma y}-1}{\gamma}b_{\gamma}(1-e^{2\pi i\gamma x})\] are uniformly bounded in every strip \(\{z:\,0<\alpha\leq\text{Im}\;z\leq\beta<\infty\}\). Therefore in this strip \[\log f(z)=-\sum_{\gamma\in\Gamma,0<\gamma<1}b_{\gamma}\gamma^{-1}[e^{2\pi i \gamma z}-1]+O(1). \tag{22}\] In particular, the function \(\log|f(z)|+\text{Re}\;g(z)\) is bounded in this strip, and the function \(F(z)\) is uniformly bounded in every closed horizontal substrip of finite width in \(\mathbb{C}_{+}\). Arguing in the same way, we get for \(z=x+iy\in\mathbb{C}_{-}\) \[\log f(z)=\sum_{\gamma\in\Gamma,-1<\gamma<0}b_{\gamma}\gamma^{-1}[e^{2\pi i \gamma z}-1]+O(1).\] Since \(b_{-\gamma}=\bar{b}_{\gamma}\), we see that the real part of the right-hand side of this equality equals \[-\text{Re}\;\sum_{\gamma\in\Gamma,0<\gamma<1}\bar{b}_{\gamma}\gamma^{-1}[e^{- 2\pi i\gamma z}-1]+O(1)=-\text{Re}\;\sum_{\gamma\in\Gamma,0<\gamma<1}b_{\gamma }\gamma^{-1}[e^{2\pi i\gamma\bar{z}}-1]+O(1).\] Taking into account that \[\left|\sum_{\gamma\in\Gamma,0<\gamma<1}b_{\gamma}\gamma^{-1}[e^{2\pi i\gamma \bar{z}}-1]-\sum_{\gamma\in\Gamma,0<\gamma<1}b_{\gamma}\gamma^{-1}[e^{2\pi i \gamma z}-1]\right|\leq\sum_{\gamma\in\Gamma,0<\gamma<1}|b_{\gamma}|\left| \frac{e^{2\pi\gamma y}-e^{-2\pi\gamma y}}{\gamma y}\right||y|,\] we obtain that the function \(\log|f(z)|+\text{Re}\;g(z)\) is bounded in every strip \(\{z:\,-\infty<\alpha\leq\text{Im}\;z\leq\beta<0\}\), and the function \(F(z)\) is uniformly bounded in every closed horizontal substrip of finite width in \(\mathbb{C}_{-}\) as well. Furthermore, in any strip \(\{z:\,|\text{Im}\;z|<M\}\) \[|g(z)-g(x)|\leq\sum_{\gamma\in\Gamma,0<\gamma<1}|b_{\gamma}|\left|\frac{e^{2 \pi i\gamma x}(e^{2\pi\gamma y}-1)}{\gamma y}\right||y|<C(M), \tag{23}\] and \[|g(x)|\leq\left[\sum_{\gamma\in\Gamma,0<\gamma<\varepsilon}+\sum_{\gamma\in \Gamma,\varepsilon\leq\gamma<1}\right]|b_{\gamma}|\left|\frac{1-e^{2\pi i \gamma x}}{\gamma}\right|\leq 2\pi|x|\sum_{0<\gamma<\varepsilon}|b_{\gamma}|+2 \varepsilon^{-1}\sum_{\varepsilon\leq\gamma<T_{0}}|b_{\gamma}|. \tag{24}\] Since \(\sum_{0<\gamma<\varepsilon}|b_{\gamma}|\) can be made arbitrarily small, we see that \(g(x)=o(|x|)\) as \(x\to\infty\). Therefore the function \(g(z)\) has the same growth in any horizontal strip of finite width. Also, \(\log|f(z)|=O(|z|)\) in this strip. Applying the Phragmén–Lindelöf principle, we obtain that \(F(z)\) is bounded in every such strip. It follows from (22) that the function \(\log F(z)=\log f(z)+g(z)\) is bounded on any line \(y=\text{const}\).
Then we have \[(\log F(x+iy))^{\prime}=f^{\prime}(x+iy)/f(x+iy)+2\pi i\sum_{\gamma\in\Gamma, 0<\gamma<1}b_{\gamma}e^{2\pi i\gamma x-2\pi\gamma y}.\] Since the function \(f^{\prime}(x+iy)/f(x+iy)\) is almost periodic in \(x\), we see that Bohr's theorem (cf. [10], Theorem 1.2.1) implies the almost periodicity of the function \(\log F(x+iy)\), and hence of \(F(x+iy)\). By ([10], Theorem 1.2.3), the function \(F(z)\) is almost periodic in every strip where it is bounded; hence it is almost periodic in \(\mathbb{C}\).

**Theorem 3**.: _Under the conditions of the previous theorem, \(A\) is the zero set of an almost periodic entire function of exponential type if and only if the function \(g(z)\) from (20) is uniformly bounded on \(\mathbb{R}\). If this is the case, the entire function (9) is almost periodic._

**Proof**. Let \(g(z)\) be uniformly bounded for \(z\in\mathbb{R}\). By (23), the function \(g(x+iy)\) is also bounded in \(x\) for any fixed \(y>0\). It follows from (22) that the function \(\log f(x+iy)\) is also bounded in \(x\in\mathbb{R}\). But \((\log f(x+iy))^{\prime}\) is almost periodic, hence, by Bohr's theorem, the functions \(\log f(x+iy)\) and \(f(x+iy)\) are almost periodic too. The function \(f\) is of exponential type, and by the Phragmén–Lindelöf principle it is bounded on every horizontal strip of finite width. Hence it is almost periodic on such strips; consequently, \(f\) is an almost periodic entire function with the given zero set \(A\). Now suppose that \(G(z)\) is an entire almost periodic function of exponential type with zero set \(A\). Clearly, \(G(z)=K_{1}e^{K_{2}z}f(z)\) with \(K_{1},\,K_{2}\in\mathbb{C}\). Since the zero set of \(G(z)\) coincides with \(A\), we get, using Lemma 1 from [9], Ch. 6, that for every \(\varepsilon>0\) and \(M<\infty\) there is \(m(\varepsilon)>0\) such that \[|G(z)|\geq m(\varepsilon)\quad\text{for}\quad z\in\{z:\,|\text{Im}\ z|\leq M,\,z \not\in A(\varepsilon)\},\quad\text{where}\quad A(\varepsilon):=\{z:\,\text{ dist}(z,A)<\varepsilon\}.\] By Proposition 1, for \(\varepsilon\) small enough each connected component of \(A(\varepsilon)\) contains no segment of length \(1\), hence its diameter is less than \(1\). Let \(F\) be the almost periodic function defined in (19). The holomorphic function \(F(z)/G(z)\) is uniformly bounded on the set \(\{z:\,|\text{Im}\ z|<M,\,z\not\in A(\varepsilon)\}\), therefore it is uniformly bounded on the whole strip \(|\text{Im}\ z|<M\). Then \(|G(x+iy)|\geq m(\varepsilon)\) for any \(y=y_{0}>\varepsilon\), therefore \(F(x+iy)/G(x+iy)\) is almost periodic in \(x\) for \(y=y_{0}\). Consequently, it is almost periodic for every \(|y|<M\), in particular on the real line. Moreover, \(F(z)/G(z)\) has no zeros, hence the same Lemma 1 from [9] implies that \(|F(x)/G(x)|\geq c>0\) for all \(x\). Now by Theorem 2.7.1 from [10], \[F(x)/G(x)=e^{h(x)+i\omega x},\qquad\omega\in\mathbb{R},\] with almost periodic \(h(x)\). Therefore the function \(g(x)-K_{2}x-i\omega x=h(x)\) is bounded on \(\mathbb{R}\). By (24), \(g(x)=o(|x|)\) as \(x\to\infty\); hence \(K_{2}=-i\omega\) and \(g(x)\) is bounded on \(\mathbb{R}\). \(\blacksquare\)

**Corollary 2**.: _If \(\sum_{\gamma\in\Gamma,0<\gamma<1}\gamma^{-1}|b_{\gamma}|<\infty\), then \(A\) is the zero set of some almost periodic entire function of exponential type._
2305.08892
Deep Photonic Reservoir Computer Based on Frequency Multiplexing with Fully Analog Connection Between Layers
Reservoir computers (RC) are randomized recurrent neural networks well adapted to process time series, performing tasks such as nonlinear distortion compensation or prediction of chaotic dynamics. Deep reservoir computers (deep-RC), in which the output of one reservoir is used as the input for another one, can lead to improved performance because, as in other deep artificial neural networks, the successive layers represent the data in more and more abstract ways. We present a fiber-based photonic implementation of a two-layer deep-RC based on frequency multiplexing. The two RC layers are encoded in two frequency combs propagating in the same experimental setup. The connection between the layers is fully analog and does not require any digital processing. We find that the deep-RC outperforms a traditional RC by up to two orders of magnitude on two benchmark tasks. This work paves the way towards using fully analog photonic neuromorphic computing for complex processing of time series, while avoiding costly analog-to-digital and digital-to-analog conversions.
Alessandro Lupo, Enrico Picco, Marina Zajnulina, Serge Massar
2023-05-15T14:55:28Z
http://arxiv.org/abs/2305.08892v2
# Fully analog photonic deep reservoir computer based on frequency multiplexing ###### Abstract Reservoir computers (RC) are randomized recurrent neural networks well adapted to process time series, performing tasks such as nonlinear distortion compensation or prediction of chaotic dynamics. Deep reservoir computers (deep-RC), in which the output of one reservoir is used as input of the next reservoir, can lead to improved performance because, as in other deep artificial neural networks, the successive layers represent the data in more and more abstract ways. We present a fiber-based photonic implementation of a two-layer deep-RC based on frequency multiplexing. The two RC layers are encoded in two frequency combs propagating in the same experimental setup. The connection between layers is fully analog and does not require any digital processing. We find that the deep-RC outperforms traditional RC by up to two orders of magnitude on two benchmark tasks. This work thus paves the way towards using fully analog photonic neuromorphic computing for complex processing of time series, while avoiding costly analog-to-digital and digital-to-analog conversions. ## 1 Introduction Artificial Intelligence is probably the most disruptive new technology to emerge during the first decades of the XXIst century. Its success is based on the use of deep neural networks in which multiple layers of artificial neurons are connected in a feed forward architecture [4, 19]. Recent advances include, for instance, image classification and analysis [28], game playing [31], protein structure prediction[3, 16], chat bots that simulate human conversation such as ChatGPT and Bing[26, 5], and more. Artificial neural networks are fundamentally analog systems simulated on a digital computer. Thus, it seems very attractive to replace the digital simulation by analog hardware, as this could result in considerable energy savings. Photonics is particularly attractive for such analog neural networks because of its potential for very high speed (see e.g. [39; 8]), parallelism (see e.g. [22; 29]), possibility of implementing spiking networks (see e.g. [7; 15]), and low energy consumption per operation (see e.g. [11]). The importance of deep neural networks for complex applications has lead to several demonstrations of deep photonic networks [30; 21; 40; 2]. It is also possible to use physical substrates that process information in ways very different from traditional artificial neural networks, such as extreme learning machines[13] and reservoir computers[14], see e.g. the reviews [33; 37; 23]. These physical substrates for computing are very attractive because of the augmented freedom in the design that they offer, for instance, in the combination of several systems to constitute deep networks. However, the training and implementation of deep physical networks provide new challenges which are only starting to be addressed [38; 25]. Reservoir computers (RC), which are the topic of this work, are randomized recurrent neural networks (RNN) in which the recurrency is provided by a (simulated or physical) high dimensional nonlinear dynamical system called "reservoir" [14]. RCs have been successfully implemented in analog systems including photonics, electronics, spintronics, mechanics, biology and more (see e.g. [33]). Many of the photonic implementations of RC used a delay loop and a single dynamical node, encoding multiple neurons using time multiplexing [1]. 
Even though high speed implementations have been realised using this approach [18], the time multiplexing represents an inherent slowdown. Alternative approaches include spatial multiplexing, e.g. using free space optics [27], multimode fibers [32], as well as hybrid temporal/spatial approaches [24]. As for other neural network architectures, assembling several RCs in a deep architecture enhances performance. Deep RCs were first used in [34] and studied in more depth in [10], where it is shown that the serial connection among different RC layers enhances the system performance by enriching its dynamics. Different ways of combining photonic reservoirs into networks are compared in [9]. Motivated by these works, a first experimental implementation of deep-RC is reported in [25], showing significant improvement in performance with increasing the number of layers. However, in this work each reservoir was implemented using the time multiplexing architecture, which is not optimal in terms of computing speed, and, more importantly, the connection between reservoirs was implemented digitally. The latter is also the case in the related work [38]. Ref. [20] proposes an architecture for a deep reservoir based on time delay architecture with analog connection between layers. Here we report a deep reservoir computer in which the connection between layers is fully analog and does not require storage or processing on a digital computer. Our experiment is based on a recently reported reservoir computer in which the neuron signals are encoded in the amplitudes of a frequency comb, and mixing between neurons is realised by electro-optic phase modulators [6]. This architecture allows for a relatively easy to realize optical output layer, as weights can be applied on comb lines by a programmable spectral filter, and the nonlinear summation of the weighted neuron signals can be executed by a photodiode, which measures the total intensity of the weighted frequency comb and introduces a quadratic nonlinearity. This technique, already employed in [6] to generate the output signals with optical weighting, in the present work allows us to use the output of a reservoir as input to a second one without leaving the analog domain. In the present work we also fully exploit the frequency degree of freedom of the light by using the same hardware for implementing multiple reservoirs simultaneously, each one working in a different frequency band. In particular, here we report two simultaneous RC computations and we demonstrate that combining the two computation in a deep fashion improves performance compared to using the two reservoirs in parallel without interconnections. We test two strategies for optimizing the interconnections between two layers in the deep configuration. In the first, simpler, approach we only adjust the strength of the connection; while in the second approach we optimise the connections using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES)[12]. Surprisingly we find that both approaches yield comparable results. In Sec. 2 we present the algorithms, the experimental setup and the benchmarking methods; in Sec. 3 we present and discuss results; finally, in Sec. 4 we present conclusions and outlooks for this work. ## 2 Methods ### Algorithms #### 2.1.1 Reservoir computing A reservoir computer (RC, see left panel of Fig.1) [14] is a recurrent neural network composed of three layers: the input layer, the reservoir layer and the output layer. 
Only the output weights are trained, while the input and internal weights are fixed and not trained. Figure 1: Left panel: standard reservoir computing scheme. Right panel: deep reservoir computing scheme. The weights in black are fixed, while the weights in red are trained. The experimental system is based on the frequency multiplexing RC scheme described in [6]. Neurons are encoded in the complex amplitudes of the lines of a frequency comb and neuronal interconnections are realized via frequency-domain interference that makes comb line exchange power. The electric field at any point in the reservoir can thus be expressed as \[E(t)=\sum_{k}x_{k}(t)\exp\left(i(\omega+k\Omega)t\right), \tag{1}\] where \(\omega\) is the center frequency of the comb, \(\Omega\) the frequency spacing between comb lines, and \(x_{k}(t)\) are the slowly varying amplitudes of the comb lines which encode neuron information. To describe more conveniently the RC application, we restrict to the \(N\) most central lines of the comb, which are the ones encoding information. Moreover, we group the amplitude of these lines in a \(N\) dimensional complex vector \(\mathbf{x}_{n}\) that evolves in slow, discrete, time \(n\). The discrete timescale corresponds to the discrete evolution of the RC states. The RC based on frequency multiplexing uses nonlinear input and output layers, and a linear reservoir (which is a powerful architecture, as demonstrated in [35]). It can be described by the evolution equations: \[\mathbf{x}_{n} =\mathbf{W}\cdot\mathbf{x}_{n-1}+\mathbf{W}_{\text{in}}\cdot f_{ \text{in}}\left(u_{n}\right), \tag{2}\] \[y_{n} =\left|\mathbf{W}_{\text{out}}^{+}\cdot\mathbf{x}_{n}\right|^{2} -\left|\mathbf{W}_{\text{out}}^{-}\cdot\mathbf{x}_{n}\right|^{2}, \tag{3}\] where \(u_{n}\) (a real scalar) is the input signal to the reservoir at timestep \(n\), and \(y_{n}\) (a real scalar) is the output signal of the reservoir at timestep \(n\), \(\mathbf{W}\) is a complex \(N\times N\) matrix representing the internal connections of the reservoir, \(\mathbf{W}_{\text{in}}\) is a complex \(N\) dimensional vector representing the input-to-reservoir connections, and \(\mathbf{W}_{\text{out}}^{+}\) and \(\mathbf{W}_{\text{out}}^{-}\) are \(N\times N\) diagonal matrices with positive real coefficients representing the output weights. In this notation, \(\mathbf{W}_{\text{out}}^{-}\cdot\mathbf{x}_{n}\) and \(\mathbf{W}_{\text{out}}^{+}\cdot\mathbf{x}_{n}\) are complex \(N\) dimensional vectors representing the the comb line amplitudes after the application of negative and positive weights respectively. The input signal is supplied through a Mach-Zehnder modulator operating in the negative quadrature point, hence the input nonlinearity \(f_{\text{in}}\) is given by the modulator transfer function: \[f_{\text{in}}(u)=E_{0}\cdot\sin\left(\gamma\cdot u\right), \tag{4}\] where \(E_{0}\) represents the input radiation amplitude and \(\gamma\) is the driving strength of the electrical signal to the modulator. The vector norm square \(|\cdot|^{2}\) in Eq. (3) represents the output nonlinearity given by the quadratic response of photodiodes. Note that the output \(y_{n}\) can also be computed by first measuring the intensity of each comb line independently, and then performing the multiplication by \((\mathbf{W}_{\text{out}}^{+})^{2}\) or \(-(\mathbf{W}_{\text{out}}^{-})^{2}\) according to whether the weight is positive or negative. 
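To make Eqs. (2)–(4) concrete, here is a small NumPy sketch of one frequency-multiplexed reservoir: a linear complex-valued state update followed by the quadratic intensity readout. It also checks the equivalence noted above between applying the output weights to the comb and weighting the measured line intensities. All sizes and coupling values are illustrative placeholders, not the parameters of the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                        # neurons = usable comb lines (illustrative)

# Placeholder couplings standing in for W and W_in of Eq. (2).
W = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # keep the linear dynamics stable
W_in = rng.normal(size=N) + 1j * rng.normal(size=N)
f_in = lambda u: 1.0 * np.sin(0.8 * u)                # MZM-like nonlinearity, Eq. (4)

# Iterate Eq. (2) for a short toy input sequence.
x = np.zeros(N, dtype=complex)
for u_n in rng.uniform(-1, 1, size=100):
    x = W @ x + W_in * f_in(u_n)

# Output of Eq. (3): positive and negative diagonal weight masks.
w_plus = rng.uniform(0, 1, size=N)
w_minus = rng.uniform(0, 1, size=N)
y_direct = np.sum(np.abs(w_plus * x) ** 2) - np.sum(np.abs(w_minus * x) ** 2)

# Equivalent route mentioned in the text: measure each line intensity first,
# then apply (W_out^+)^2 or -(W_out^-)^2 to the measured intensities.
intensities = np.abs(x) ** 2
y_from_intensities = intensities @ (w_plus ** 2 - w_minus ** 2)

assert np.isclose(y_direct, y_from_intensities)
```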
The weights are optimised using ridge regression so that the output \(y_{n}\) approximates the desired output as well as possible.

#### 2.1.2 Deep Reservoir Computing

A deep reservoir computer (deep-RC, see right panel of Fig. 1) is a collection of RC layers connected in series. The deep-RC output signal is a linear combination of the neuron values of each reservoir. The hierarchy introduced by the serial connection enhances the network performance because the different reservoirs can have independent dynamics, thus enriching the states of the full deep-RC. The deep-RC composed of \(N_{\text{layers}}\) layers, each one comprising \(N\) neurons, as implemented in our system, is described by the set of equations: \[\mathbf{x}_{n}^{(1)}=\mathbf{W}^{(1)}\cdot\mathbf{x}_{n-1}^{(1)}+\mathbf{W}_{\text{in}}^{(1)}\cdot f_{\text{in}}\left(u_{n}\right), \tag{5}\] \[\mathbf{x}_{n}^{(i)}=\mathbf{W}^{(i)}\cdot\mathbf{x}_{n-1}^{(i)}+\mathbf{W}_{\text{in}}^{(i)}\cdot f_{\text{in}}\left(u_{n}^{(i)}\right),\qquad i=2,\ldots,N_{\text{layers}}, \tag{6}\] \[u_{n}^{(i+1)}=\left|\mathbf{W}_{\text{out}}^{(i)}\cdot\mathbf{x}_{n}^{(i)}\right|^{2},\qquad i=1,\ldots,N_{\text{layers}}-1, \tag{7}\] \[y_{n}^{(\text{T})}=\left|\mathbf{W}_{\text{out}}^{(\text{T}+)}\cdot\mathbf{x}_{n}^{(\text{T})}\right|^{2}-\left|\mathbf{W}_{\text{out}}^{(\text{T}-)}\cdot\mathbf{x}_{n}^{(\text{T})}\right|^{2}, \tag{8}\] where the superscript \((i)\), \(1\leq i\leq N_{\text{layers}}\), identifies the reservoir layer. Here, as before, \(\mathbf{W}^{(i)}\) is a complex \(N\times N\) matrix representing the internal connections of the \(i\)-th reservoir layer, \(\mathbf{W}_{\text{in}}^{(i)}\) is a complex \(N\)-dimensional vector representing the input connections of the \(i\)-th reservoir layer, and \(\mathbf{W}_{\text{out}}^{(i)}\) is an \(N\times N\) diagonal matrix with positive real coefficients representing the output connections of the \(i\)-th layer. In our current photonic implementation, \(N_{\text{layers}}=2\), but the equations generalise easily to more layers. The first reservoir layer is driven by the input time series \(u_{n}\), while the next reservoir layers are driven by the output \(u_{n}^{(i)}\) of the previous layer, see Eq. (7). Note that in our implementation the connections among consecutive layers are mediated by positive weights only, contained in the diagonal of \(\mathbf{W}_{\text{out}}^{(i)}\), which is why there is only a single term on the right-hand side of Eq. (7). The output of the deep-RC, \(y_{n}^{(\text{T})}\), is obtained by combining the states from both layers, i.e. by multiplying the lines of both combs by output weights and measuring the sum with a photodiode. To this purpose, we have defined \(\mathbf{x}_{n}^{(\text{T})}=\left(\mathbf{x}_{n}^{(1)},\mathbf{x}_{n}^{(2)},\ldots,\mathbf{x}_{n}^{(N_{\text{layers}})}\right)\) as the complex vector of size \(N_{\text{layers}}\cdot N\) representing the full deep-RC state at timestep \(n\), and \(\mathbf{W}_{\text{out}}^{(\text{T}+)}\) and \(\mathbf{W}_{\text{out}}^{(\text{T}-)}\) as the diagonal \((N_{\text{layers}}\cdot N)\times(N_{\text{layers}}\cdot N)\) matrices with positive real coefficients representing the deep-RC output weights. The output weights are optimised using ridge regression.
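A minimal two-layer sketch of Eqs. (5)–(8) under the same placeholder assumptions: the first layer is driven by the task input, the second layer by a positively weighted intensity of the first comb (the analog inter-layer connection of Eq. (7)), and a joint readout over the line intensities of both combs is trained by ridge regression. Signed readout coefficients correspond to the split into \(\mathbf{W}_{\text{out}}^{(\text{T}+)}\) and \(\mathbf{W}_{\text{out}}^{(\text{T}-)}\).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 2000                           # neurons per layer, sequence length (illustrative)
f_in = lambda u: np.sin(0.8 * u)          # MZM-like input nonlinearity, cf. Eq. (4)

def random_layer():
    W = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))          # keep the linear reservoir stable
    W_in = rng.normal(size=N) + 1j * rng.normal(size=N)
    return W, W_in

def run_layer(u, W, W_in):
    x, states = np.zeros(N, dtype=complex), []
    for u_n in u:
        x = W @ x + W_in * f_in(u_n)                          # Eqs. (5)-(6)
        states.append(x.copy())
    return np.array(states)

u = rng.uniform(-1, 1, size=T)                                # toy input series
W1, W1_in = random_layer()
W2, W2_in = random_layer()
w1_out = rng.uniform(0, 1, size=N)                            # positive inter-layer weights

X1 = run_layer(u, W1, W1_in)
u2 = (np.abs(X1) ** 2) @ (w1_out ** 2)                        # analog connection, Eq. (7)
X2 = run_layer(u2, W2, W2_in)

# Joint ridge-regression readout on the measured line intensities of both combs, Eq. (8).
Z = np.hstack([np.abs(X1) ** 2, np.abs(X2) ** 2])             # (T, 2N) deep-RC features
y_target = np.roll(u, 1)                                      # toy task: recall u_{n-1}
lam = 1e-6
w_ro = np.linalg.solve(Z.T @ Z + lam * np.eye(2 * N), Z.T @ y_target)
nmse = np.mean((Z @ w_ro - y_target) ** 2) / np.var(y_target)
print(f"toy NMSE: {nmse:.3e}")
```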
Note that the interconnection between successive layers (say layers \(i\) and \(i+1\)) is determined by \(3N\) real parameters: the \(N\) positive real elements of the diagonal matrix \(\mathbf{W}_{\text{out}}^{(i)}\), and both the real and imaginary parts of the \(N\) elements of the vector \(\mathbf{W}_{\text{in}}^{(i+1)}\). Of these, only the \(N\) elements of the diagonal matrix \(\mathbf{W}_{\text{out}}^{(i)}\) can be tuned in our experimental setup. This is to be compared with the proposal of [10], in which the interconnection is given by an \(N\times N\) random matrix whose spectral radius is tuned.

### Experimental setup

The experimental system is based on the one described in [6], modified so as to support two RC computations at the same time. Fig. 2 reports the schematic of the experiment. All fiber connections and couplers are single-mode and polarization-maintaining. We employ two continuous-wave laser sources (CW source 1 and CW source 2) at wavelengths \(\lambda_{1}=1550.2\) nm and \(\lambda_{2}=1555.4\) nm. The two laser outputs are modulated by two Mach-Zehnder modulators (MZM 1 and MZM 2). Both MZMs are biased to operate in the negative quadrature point (bias controllers are not shown in Fig. 2). The transfer functions of MZM 1 and MZM 2 define the input nonlinearities of the two RC layers, \(f_{\mathrm{in}}\) in Eq. (4). MZM 1 is driven by an arbitrary waveform generator (AWG 1) which supplies the input signal \(u_{n}^{(1)}\); MZM 2 can be driven either by a second arbitrary waveform generator (AWG 2) or by the output of a photodiode (PD 2), as described below. The two modulated signals are merged together in a 50/50 fiber coupler (C1) and then injected into an Erbium-doped-fiber amplifier (EDFA 1). EDFA 1 raises the total power to 9 dBm, equally distributed between the two signals. After the amplification, the two signals are injected in a phase modulator (PM 1). PM 1 is driven by a sinusoidal radio-frequency signal (frequency \(\Omega\approx 17\) GHz, power P1\(\approx 30\) dBm). The radio-frequency signal is generated by an RF clock (RF source) and amplified by an RF amplifier (RF AMP 1). The phase modulation provided by PM 1 generates two frequency combs centered at \(\lambda_{1}\) and \(\lambda_{2}\) (Fig. 3). The spacing of the comb lines is equal to \(\Omega\) and the number of lines depends on P1. In our implementation, PM 1 provides approximately 20 usable comb lines per comb, i.e., 20 neurons.

Figure 2: Experimental setup. Optical connections are in blue, electrical connections in red. MZM: Lithium Niobate Mach-Zehnder modulator; AWG: arbitrary waveform generator; C: fiber couplers; EDFA: Erbium-doped-fiber amplifier; PM: phase modulator; RF source: radio frequency source at frequency \(\Omega\); RF AMP: radio frequency amplifiers; PSF: programmable spectral filter; PD: photodiode; ES: electric switch.

The two combs constitute the input stimuli for the two reservoir networks. The amplitude of each line determines how strongly the input signal is coupled to the particular neuron encoded in that line. Hence, the distribution of (complex) amplitudes among the comb lines defines the two vectors of input-to-reservoir weights, \(\mathbf{W}_{\mathrm{in}}^{(1)}\) and \(\mathbf{W}_{\mathrm{in}}^{(2)}\). The two frequency combs are injected in a fiber loop through a 30/70 coupler (C2). The fiber loop is 15 meters long, corresponding to a roundtrip frequency of approximately 20 MHz.
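As a side note on how the sinusoidal phase modulation applied by PM 1 produces a frequency comb: by the Jacobi–Anger expansion, the line amplitudes of a phase-modulated carrier are Bessel functions \(J_{k}(m)\) of the modulation index \(m\), which is why the number of usable lines grows with the RF power. The sketch below uses SciPy and an arbitrary modulation index; the numbers are not those of the experiment.

```python
import numpy as np
from scipy.special import jv   # Bessel functions of the first kind

# Sinusoidal phase modulation of a carrier: E0*exp(i*(w*t + m*sin(Omega*t))).
# By the Jacobi-Anger expansion this field is a comb with line amplitudes J_k(m),
# so the number of significant lines grows with the modulation index m.
m = 4.0                                   # illustrative modulation index, not the experimental value
k = np.arange(-10, 11)                    # comb line index relative to the carrier
amps = jv(k, m)                           # line amplitudes
powers = amps ** 2                        # relative line powers

for ki, pk in zip(k, powers):
    bar = "#" * int(50 * pk / powers.max())
    print(f"line {ki:+3d}: {pk:.3f} {bar}")
```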
All the input signals are synchronized with the roundtrip time of the loop, in such a way that each timestep of the input signals entirely fills the loop. Hence, the processing frequency of our system is fixed by the cavity length and is approximately 20 MHz. The fiber loop contains a second phase modulator (PM 2) and an optical amplifier (EDFA 2). PM 2 is driven by a signal generated by the same RF source but undergoes a different amplification (RF AMP 2), hence it has the same frequency but a different power P2\(\approx\)20 dBm as the RF signal supplied to PM 1. The phase modulation provided by PM 2 creates frequency interference among the lines of the same comb, thus implementing the (complex-weighted) connectivity among the neurons of the same reservoir. EDFA 2 compensates for the losses in the loop. The transformation of the combs over a roundtrip, including the effects of phase modulation, amplification and dispersion (which acts differently on each comb line/neuron) define the matrices \(\mathbf{W}^{(1)}\) and \(\mathbf{W}^{(2)}\). The amplitudes of the two combs at each roundtrip \(n\) provide the states of the two reservoirs \(\mathbf{x}_{n}^{(1)}\) and \(\mathbf{x}_{n}^{(2)}\). Part of the circulating radiation is extracted by a 20/80 fiber coupler (C3), amplified by EDFA 3 and directed to the readout circuit. The readout consists of a multi-channel programmable spectral filter (PSF, _Coherent II-VI Waveshaper_) and two photodiodes (PD 1 and PD 2), measuring each of the two PSF outputs. Figure 3: Normalized spectral power of the radiation as measured at the output of the fiber loop, after coupler C3. Red markers indicate the input wavelengths \(\lambda_{1}=1550.2\) nm and \(\lambda_{2}=1555.4\) nm. The first PSF channel, connected to PD 1, is employed to measure the evolution of both reservoirs. The measurement procedure consists of selecting a single comb line using a band-pass filter and recording via PD 1 the intensity of that comb line. At the end of the procedure, the intensities of all the comb lines, i.e. the norm square of the components of vectors \(\mathbf{x}_{n}^{(1)}\) and \(\mathbf{x}_{n}^{(2)}\), are recorded on a computer. The output of the reservoir is then obtained by multiplying these intensities by the output weights \(|\mathbf{W}_{\mathrm{out}}^{(\mathrm{T}+)}|^{2}\) and \(-|\mathbf{W}_{\mathrm{out}}^{(\mathrm{T}\cdot)}|^{2}\). Note that this operation could be realised by using two channels of the PSF to which one applies weights \(\mathbf{W}_{\mathrm{out}}^{(\mathrm{T}+)}\) and \(\mathbf{W}_{\mathrm{out}}^{(\mathrm{T}\cdot)}\) respectively, sending the two outputs to two photodiodes, and taking the difference of the resulting currents, as described in [6]. Operation modes.We use two operation modes: "deep" and "independent". In deep-RC mode, the second channel of the programmable spectral filter is configured to select and transmit only the comb centered on \(\lambda_{1}\) after having applied an attenuation mask \(\mathbf{W}_{\mathrm{out}}^{(1)}\). Consequently, PD 2 measures the signal \(u_{n}^{(2)}=\left|\mathbf{W}_{\mathrm{out}}^{(1)}\cdot\mathbf{x}_{n}^{(1)} \right|^{2}\). The output of PD 2 drives MZM 2, and thus constitutes the input of the second RC. In this configuration the system is a two layer deep-RC, as described in subsection 2.1.2. In independent mode, the two RC computations are decoupled by driving MZM 2 through a second, independent, arbitrary waveform generator AWG 2 (the second channel of PSF and PD 2 are deactivated). 
The two RC computations do not interact with each other and are carried out independently. The selection of the computation mode, deep or independent, is made by flipping an electric switch that selects whether MZM 2 is driven by PD 2 or by AWG 2, as illustrated in Fig. 2. Stabilization.The experimental setup is sensitive to acoustical noise and thermal drift. To limit these effects, the optical loop, including PM 2 and EDFA 2, is mounted inside an insulated box on an optical table. Furthermore, two PID controllers piezo-tune the emission wavelengths of the two laser sources in order to fix the operating condition to a certain point in the loop transfer function. The two PID controllers are fed by the intensity of the reflection of each of the two combs at the entrance of the loop, on the coupler C2. This requires two auxiliary photodiodes and spectral filters (not represented in Fig. 2). ### Benchmark tasks We selected two benchmark tasks, the first consisting of the prediction of the evolution of a chaotic time series, and the second one consisting of the compensation of the distortion produced in a nonlinear communication channel. The time series prediction task is based on the infrared laser dataset of the Santa Fe Time Series Competition [36]. The time series \(u_{t}\) is supplied as input, and the task consists of producing \(u_{t-\tau}\), with \(-5\leq\tau\leq+5\). Note that when the timeshift \(\tau\) is negative, the task consists of remembering the past, while when the timeshift \(\tau\) is positive, the task consists of predicting the future. The accuracy is expressed in terms of Normalized Mean Square Error (NMSE) between the target signal and the produced output. When running this benchmark, the training set is composed of 6000 timesteps, and the testing set is composed of 2500 timesteps (this is a standard 70%-30% repartition). We discard the first 500 timesteps of reservoir output to avoid operating in a transient regime. The nonlinear compensation task was first used in the RC community in [14]. A random signal composed of four different symbols is propagated along a simulated channel exhibiting nonlinearity, noise, and memory about past inputs. The task consists in reconstructing the original input given the channel output. Performance is evaluated for different Signal-to-Noise Ratios (SNR) in the range [8dB, 32dB]. The results are expressed in terms of Symbol Error Rate (SER), i.e., the ratio of wrongly reconstructed output symbols over the total number of transmitted symbols. When running this benchmark, the training set is composed of 14000 timesteps, and the testing set is composed of 30000 timesteps. We discard the first 1000 timesteps of reservoir output to avoid operating in a transient regime. Note that, differently from the chaotic time series benchmark which is based on a limited dataset, in this case the training dataset can be easily generated at the moment. This is why we employed a larger amount of datapoints for the initial wash-out and the testing. Every benchmark result has been validated through a 100-steps cross-validation, meaning that the points belonging to the train and test datasets have been selected at random for 100 times and results have been averaged. ### Tested configurations Our photonic system supports two RCs that operate simultaneously, either independently from each other, or connected in series. We evaluated the performance of three different configurations, described in Fig. 1. First, a "shallow-RC" configuration (Fig. 
4a) in which only one of the two independent RCs executes the benchmark task, constituting a "traditional" RC as described in Sec. 2.1.1. In this configuration, the second RC processes a different, not evaluated, computation, with the purpose of simulating a parallel computation scenario where two different tasks are performed at the same time. Second, a "parallel-RC" configuration (Fig. 4b) where both independent RCs execute the same task in an uncorrelated way, and a single output layer is connected to both reservoirs. This constitutes a "non-deep" way of using the full computational capabilities of the system on a single task. Third, a "deep-RC" configuration (Fig. 4c) where the two independent RCs are connected in series as described in Sec. 2.1.2. In addition, in the deep-RC configuration, we used two different methods to tune the weights \(\mathbf{W}_{\mathrm{out}}^{(1)}\), i.e. the attenuations applied by the PSF, that determine the connection from the first RC layer to the second one. In the first, simplest, approach, we apply the same attenuation to all comb lines, corresponding to \(\mathbf{W}_{\text{out}}^{(1)}=\text{diag}(\alpha)\), and we optimize the overall attenuation \(\alpha^{2}\) by sweeping it in the range \([-20\text{dB},\ 0\text{dB}]\). In the second approach, we optimise all the coefficients of \(\mathbf{W}_{\text{out}}^{(1)}\) by using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimisation algorithm [12]. Finally, in order to improve the reservoir computing performance, we tune the comb line spacing \(\Omega\) to the best-performing value for each task. The fiber loop constitutes a spectral interferometer and exhibits, due to dispersion in the fiber, a complex behavior strongly dependent on \(\Omega\). This is illustrated in Fig. 5, where the performance in shallow-RC configuration on the two tasks is plotted as a function of \(\Omega\) for both reservoirs. ## 3 Results Results of the benchmark tests are reported in Fig. 6 for the three operation modes: shallow-RC, parallel-RC, and deep-RC. In this figure, the deep-RC results are shown, for both the optimization techniques described in Sec. 2.4. Nonlinear channel equalization results (Fig. 6a) show the expected decrease of Symbol Error Rate (SER) when the Signal-to-Noise Ratio (SNR) increases, since the addition of noise increases the difficulty of the task, eventually making it impossible to revert the channel distortion. For high-SNR values, both shallow-RC and parallel-RC SER scores saturate, while deep-RC SER score maintains an exponential decay for increasing SNR values. For every SNR value, deep-RC always performed better, followed by the parallel-RC and finally by shallow-RC. Figure 4: The three tested configurations for the two independent RCs. “ Reservoir \(\lambda_{1}\)” is encoded in the frequency comb centered around \(\lambda_{1}\), while “reservoir \(\lambda_{2}\)” is encoded in the frequency comb centered around \(\lambda_{2}\). Both reservoirs are executed on the same photonic substrate. (a) Shallow-RC: one of the two reservoirs performs the benchmark task as a traditional RC, while the other reservoir processes a different time series in parallel. (b) Parallel-RC: both reservoirs process the same input time series, but their dynamics are decoupled from each other. A single output layer is trained, which combines signals from both reservoirs. (c) Deep-RC: the two reservoirs constitute the two layers of a deep-RC. 
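For reference, the figures of merit quoted in this section and the 100-split cross-validation of the previous section can be computed along the following lines. This is an illustrative sketch: the nearest-symbol decision rule, the NMSE normalisation by the target variance, and the split mechanics are our assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def symbol_error_rate(y_pred, d_true, symbols=(-3.0, -1.0, 1.0, 3.0)):
    """Nearest-symbol decision, then fraction of wrongly reconstructed symbols."""
    symbols = np.asarray(symbols)
    decided = symbols[np.argmin(np.abs(y_pred[:, None] - symbols[None, :]), axis=1)]
    return np.mean(decided != d_true)

def nmse(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2) / np.var(y_true)

def cross_validate(Z, y, n_splits=100, train_fraction=0.7, ridge=1e-6):
    """Random train/test repartitions of reservoir features Z and targets y,
    a ridge-regression readout on each split, and mean/std of the test NMSE."""
    T = len(y)
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(T)
        tr, te = idx[: int(train_fraction * T)], idx[int(train_fraction * T):]
        w = np.linalg.solve(Z[tr].T @ Z[tr] + ridge * np.eye(Z.shape[1]), Z[tr].T @ y[tr])
        scores.append(nmse(Z[te] @ w, y[te]))
    return np.mean(scores), np.std(scores)
```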
A similar behavior is found in the results of the chaotic time series prediction task (Fig. 6b). Two trends are clearly visible from Fig. 6. First, parallel-RC systematically outperforms shallow-RC. Indeed since the two parallel RCs perform different computations (as is evident from Fig. 5), using both reservoirs in parallel should perform at least as well as using a single reservoir. Second, deep-RC outperforms parallel-RC in every test we conducted. Both configurations exploit the same number of neurons and are differentiated only by the topology, thus we conclude that the serial configuration in deep-RC really boosts RC performance. We observe that the two optimization techniques for the inter-layer connection perform comparably, with the simpler algorithm sometimes outperforming the CMA-ES algorithm. We identified two reasons for this behavior. First, the CMA-ES algorithm could get stuck in local minima. Second, the search of the optimal set of weights could be affected negatively by slow drifts in the operating conditions of the deep-RC.

Figure 5: Performance of the reservoir computers in shallow-RC configuration as a function of \(\Omega\) on the channel equalization task (top) and the Santa-Fe time series prediction task for prediction 1 timestep ahead (bottom). The complex dependence on \(\Omega\) is due to the dispersion in the optical fiber. The dispersion is also the reason why the dependence on \(\Omega\) is different for RC-1 and RC-2, as they use frequency combs centered on different wavelengths. (As these plots are time-consuming to obtain, a reduced number of comb lines \(N=14\) was used).

Figure 6: Experimental results for the three operation modes (shallow-RC, parallel-RC, and deep-RC) on the two selected benchmark tasks: nonlinear channel equalization (a) and chaotic time series prediction (b). Deep-RC results are shown for both optimization methods presented in the text (uniform optimized attenuation \(\alpha\) and CMA-ES). Error-bars represent the score standard deviation measured in cross-validation phase. Results in (a) are expressed as symbol error rate (SER) vs. signal to noise ratio (SNR). Results in (b) are expressed as normalized mean square error (NMSE) vs. shift of the target time series with respect to the input one. When the shift is positive, the task consists in predicting the future; when the shift is zero, the task consists in reproducing the present input; when the shift is negative, the task consists in reproducing the past.

## 4 Conclusion

We presented a fully analog photonic implementation of a deep reservoir computer. The connection between the two layers is performed in the analog domain with no processing or storing on a digital computer. The presented implementation also allows for two independent RC computations to be executed at the same time. We found that the deep-RC configuration, obtained by connecting in series the two RCs, performs better than a parallel-RC configuration, where the two RCs are connected to the same output, and process the same input data, but do not interact during computation. The reported experiment has only two layers, but deeper schemes are in principle possible. New layers can be added to the deep-RC by using more than two lasers, provided that the generated combs do not overlap each other. The C band could host 10 parallel computations (considering combs 3 nm wide, see Fig. 3).
These 10 parallel computations could be employed to constitute a single 10-layer deep-RC, or even multiple deep-RCs running in parallel, each one composed of fewer layers. On the other hand, broader combs would be able to encode more neurons in each reservoir. Thus, a balance between the number of layers and the number of neurons per layer has to be found. In any case, integrating (partially or entirely) the experiment, as proposed in [17], could be a route to scaling up the system while simplifying its stabilization. Although we already explored two strategies for optimizing the interconnection between the two deep-RC layers, many ideas are still to be tested (see e.g., [10, 38, 25]) and could be the object of further investigation. In summary, developing deep architectures for neuromorphic photonic computing is a highly promising avenue for increasing both the complexity of the tasks that can be solved and the system performance. However, the presence of analog-to-digital or digital-to-analog converters strongly affects power consumption and footprint, hence it is to be avoided. We have demonstrated that this is possible for photonic deep reservoir computing.

## Funding

The authors acknowledge financial support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement 860830 (POST DIGITAL), and from the FWO and F.R.S.-FNRS Excellence of Science (EOS) programme grant 40007536.

## Disclosures

The authors declare no conflicts of interest.

## Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2302.03960
Computing aberration coefficients for plane-symmetric reflective systems: A Lie algebraic approach
We apply the Lie algebraic method to reflecting optical systems with plane-symmetric freeform mirrors. Using analytical ray-tracing equations we construct an optical map. The expansion of this map gives us the aberration coefficients in terms of initial ray coordinates. The Lie algebraic method is applied to treat aberrations up to arbitrary order. The presented method provides a systematic and rigorous approach to the derivation, treatment and composition of aberrations in plane-symmetric systems. We give the results for second- and third-order aberrations and apply them to three single-mirror examples.
A. Barion, M. J. H. Anthonissen, J. H. M. ten Thije Boonkkamp, W. L. IJzerman
2023-02-08T09:30:50Z
http://arxiv.org/abs/2302.03960v1
# Computing aberration coefficients for plane-symmetric reflective systems: A Lie algebraic approach ###### Abstract We apply the Lie algebraic method to reflecting optical systems with plane-symmetric freeform mirrors. Using analytical ray-tracing equations we construct an optical map. The expansion of this map gives us the aberration coefficients in terms of initial ray coordinates. The Lie algebraic method is applied to treat aberrations up to arbitrary order. The presented method provides a systematic and rigorous approach to the derivation, treatment and composition of aberrations in plane-symmetric systems. We give the results for second- and third-order aberrations and apply them to three single-mirror examples. osajournal ## 1 Introduction Off-axis mirror systems provide additional degrees of freedom for the design of more compact and accurate imaging systems compared to rotationally symmetric ones. The study of their aberration behaviour is of great interest to the optics community. The mathematical characterization of aberrations has been investigated in [1, 2]. Possible approaches to derive explicit aberration expansions are given in [3], where only confocal arrangements are considered, or in [4] where the starting design is rotationally symmetric. Recently, explicit expressions for plane-symmetric reflective optical systems have been determined using a matrix formalism [5, 6, 7]. Here, the matrix method for paraxial ray-tracing is extended to accommodate for higher degree polynomial terms and aberrations are composed by manipulating the respective matrix coefficients. In this work we will describe the Lie algebraic method needed to obtain analytical expressions for the aforementioned aberration terms of arbitrary order. Starting from a chosen ray, which we will define as the optical axis ray (OAR), we will follow its path through the system from object to image plane. At the image plane the aberration terms are given as polynomials in \((\mathbf{q},\mathbf{p})\), which are the phase-space variables of our optical system [8, 9]. In this Hamiltonian formulation, the propagation and reflection maps are symplectic, i.e., volume preserving in phase-space. Our goal is to approximate these maps while preserving symplecticity. Applying these approximating maps to the initial coordinates will deliver the desired aberration expansion terms. The Lie approach provides the tools to systematically determine the approximating map for one single plane-symmetric mirror. The description of a complete optical system is then reduced to a concatenation of maps. This process is described and handled by the Lie theory. Compared to the matrix formalism in [5, 6, 7], the mathematical framework of the Lie method reduces the number of coefficients necessary to be stored. Additionally, the phenomenon of low-order aberrations composing into higher order contributions follows directly from the mathematical framework. This is also known as the distinction between intrinsic and extrinsic aberrations [10], where low-order aberrations of individual surfaces (intrinsic) combine into higher order contributions to the complete system (extrinsic). In Section 2 we will describe the explicit maps that govern ray propagation and reflection in a plane-symmetric reflective optical system. A brief summary of the essential Lie algebraic notions is given in Section 3, even though we refer to [9, 11] for a more in depth description. 
Section 4 contains the steps needed to construct the approximation maps and the calculations up to third-order aberrations. Three examples to validate the presented method are given in Section 5, where both existing theoretical and computational results are reproduced. ## 2 Analytic Ray-Tracing In this section we discuss the mappings needed to ray-trace light rays through a reflective system composed of plane-symmetric, i.e., symmetric with respect to the \(yz\)-plane, optical surfaces; see Figure 1. In order to follow a ray path from object to image plane, we describe three transformations. First, the incoming ray is propagated from the object plane to the reflecting surface and the reflected ray from the surface to the image plane. Second, we describe the reflection of the ray at a plane-symmetric mirror. Finally, rotation of the coordinate system is shown, such that the \(z\)-axis remains aligned with the optical axis ray (OAR) before and after reflection. This implies that the considered \(z\)-axis will be broken into line segments. Once these three mappings have been described, they are concatenated to describe a single mirror, which we call the _fundamental element_, according to the following five steps: \(i)\) propagation from object plane to mirror; \(ii)\) rotation of the optical axis and corresponding coordinate system by an angle \(\theta\), equal to the incidence angle of the OAR; \(iii)\) reflection of the rays; \(iv)\) second rotation of the coordinate system by the angle \(\theta\); and \(v)\) propagation from mirror to image plane. Position coordinates of an arbitrary ray before and after reflection are projected along the ray onto the two planes passing through the point of impact of the OAR and orthogonal to it; see Figure 1. The incoming (outgoing) plane, which is orthogonal to the incoming (outgoing) OAR, will be called the incoming (outgoing) standard screen and the incoming (outgoing) position and direction coordinates will be evaluated with respect to it. The incoming standard screen is the \(xy\)-plane and the outgoing one is the \(x^{\prime}y^{\prime}\)-plane, where the \(x\) and \(x^{\prime}\)-axis are the same; see Figure 1.

Figure 1: Point \(A\) of the incoming ray in \(xy\)-coordinates is mapped to point \(B\) of the outgoing ray in the rotated \(x^{\prime}y^{\prime}\)-coordinate system. The axes \(x\) and \(x^{\prime}\) are perpendicular to the \(yz\)-plane (not shown) and the incidence angle of the OAR is equal to \(\theta\).

In the remaining part of this section the three elementary maps are described independently from each other. Eventually, we concatenate them to describe a complete mirror element as previously described. Each ray is characterized by its position \(\mathbf{q}=(q_{x},q_{y})\) and its direction \(\mathbf{p}=(p_{x},p_{y})\) at a standard screen. As such, we use the phase-space coordinates \((\mathbf{q},\mathbf{p})\) as our ray coordinates, cf. [12]. Note that the coordinates of the OAR are at the origin of phase-space both before and after reflection, i.e., the OAR will have coordinates \(\mathbf{q}=\mathbf{0}=\mathbf{q}^{\prime}\) and \(\mathbf{p}=\mathbf{0}=\mathbf{p}^{\prime}\). In the descriptions to follow, phase-space coordinates \((\mathbf{q},\mathbf{p})\) are mapped to primed coordinates \((\mathbf{q}^{\prime},\mathbf{p}^{\prime})\) by the respective mappings. ### Propagation We introduce the Hamiltonian \(H(\mathbf{p})\) governing free propagation of light in a medium of constant refractive index \(n\) [9, 12, 13], i.e., \[H(\mathbf{p})=-\sigma\sqrt{n^{2}-|\mathbf{p}|^{2}}=-\sigma p_{z}, \tag{2.1}\] where \(\mathbf{p}=(p_{x},p_{y})\) and \(p_{z}\) are the direction momenta - direction cosines times the refractive index \(n\) - along the respective axes. The variable \(\sigma=\pm 1\) is positive for forward travelling rays and negative for backward propagating rays. Since we are only considering reflections, the refractive index of our medium (air/vacuum) is \(n=1\). The distance measured along the optical axis, which coincides with the \(z\)-axis, serves as evolution parameter of the Hamiltonian system related to Eq. (2.1): \[\dot{\mathbf{q}}=\frac{\partial H}{\partial\mathbf{p}}=-\frac{\mathbf{p}}{H},\qquad\dot{\mathbf{p}}=-\frac{\partial H}{\partial\mathbf{q}}=\mathbf{0}. \tag{2.2}\] The solution to the Hamiltonian system Eq. (2.2) with initial conditions \((\mathbf{q},\mathbf{p})\), after propagating a distance \(d\) along the optical axis ray, reads: \[\mathbf{q}^{\prime}=\mathbf{q}-d\frac{\mathbf{p}}{H(\mathbf{p})},\quad\mathbf{p}^{\prime}=\mathbf{p}. \tag{2.3}\] ### Reflection Next, we consider the law of reflection in vector form regardless of the coordinate system [14] \[\hat{\mathbf{k}}_{\mathrm{r}}=\hat{\mathbf{k}}_{\mathrm{i}}-2(\hat{\mathbf{k}}_{\mathrm{i}}\cdot\hat{\mathbf{n}})\hat{\mathbf{n}}, \tag{2.4}\] where \(\hat{\mathbf{k}}_{\mathrm{r}}\) is the unit direction vector of the reflected ray, \(\hat{\mathbf{k}}_{\mathrm{i}}\) the unit direction vector of the incoming ray and \(\hat{\mathbf{n}}\) the unit outer normal of the reflector at the impact point. Here, the hat indicates that the vector has length one, and with the term 'outer' we mean opposite to the incoming ray direction, i.e., \(\hat{\mathbf{k}}_{\mathrm{i}}\cdot\hat{\mathbf{n}}<0\). Let the reflector be described by \(z=\zeta(\mathbf{q})\), then the outer normal \(\hat{\mathbf{n}}\) of the surface at point \((\mathbf{q},\zeta(\mathbf{q}))\) reads \[\hat{\mathbf{n}}=\frac{(\nabla\zeta(\mathbf{q}),-1)}{\sqrt{1+|\nabla\zeta(\mathbf{q})|^{2}}}. \tag{2.5}\] The incoming and outgoing ray directions are \(\hat{\mathbf{k}}_{\mathrm{i}}=(\mathbf{p},p_{z})/n\) and \(\hat{\mathbf{k}}_{\mathrm{r}}=(\mathbf{p}^{\prime},p_{z}^{\prime})/n\), respectively. The vector \(\hat{\mathbf{k}}_{\mathrm{r}}\) is calculated by inserting Eq. (2.5) in Eq. (2.4). This way we get for the reflected momenta \((\mathbf{p}^{\prime},p_{z}^{\prime})\): \[\mathbf{p}^{\prime}=\mathbf{p}-2\frac{\nabla\zeta(\bar{\mathbf{q}})}{1+|\nabla\zeta(\bar{\mathbf{q}})|^{2}}(\mathbf{p}\cdot\nabla\zeta(\bar{\mathbf{q}})-p_{z}), \tag{2.6a}\] \[p_{z}^{\prime}=p_{z}+\frac{2}{1+|\nabla\zeta(\bar{\mathbf{q}})|^{2}}(\mathbf{p}\cdot\nabla\zeta(\bar{\mathbf{q}})-p_{z}), \tag{2.6b}\] where \((\bar{\mathbf{q}},\zeta(\bar{\mathbf{q}}))\) is the intersection point of the incoming ray and the reflector. The intersection point \(\bar{\mathbf{q}}\) is related to the screen coordinate before reflection \(\mathbf{q}\) and the one after reflection \(\mathbf{q}^{\prime}\) by [12, 13, 9] \[\bar{\mathbf{q}}=\mathbf{q}+\zeta(\bar{\mathbf{q}})\frac{\mathbf{p}}{p_{z}},\quad\mathbf{q}^{\prime}=\bar{\mathbf{q}}-\zeta(\bar{\mathbf{q}})\frac{\mathbf{p}^{\prime}}{p_{z}^{\prime}}. \tag{2.7}\] Eq. (2.7) gives an implicit relation for \(\bar{\mathbf{q}}\) which needs to be solved iteratively; see [12, 13, 15]. The Eqs. (2.7) are again a solution to the Hamiltonian system Eq. (2.2), but now propagating a distance \(d=\zeta(\bar{\mathbf{q}})\). ### Rotation of the Standard Screen After propagation and reflection, we discuss the necessary steps to rotate our coordinates according to the OAR. We describe an arbitrary rotation by an angle \(\theta\) that rotates our standard screen, see Figure 2. For a single reflector two rotations of angle \(\theta\) are used, where eventually \(\theta\) is the incidence angle of the OAR. The first rotation brings the \(z\)-axis of the incoming coordinate system from being aligned with the OAR to being aligned with the surface normal at the point of intersection of the OAR. The surface equation \(z=\zeta(\mathbf{q})\) is defined in this coordinate system aligned with its normal and therefore has zero gradient at the origin, i.e., \(\nabla\zeta(\mathbf{0})=\mathbf{0}\), which is the point of impact of the OAR. We then apply the reflection mapping and subsequently rotate to the outgoing coordinate system aligned with the reflected OAR, see Figure 3. Let us define positive rotations when the \(y\)-axis is rotated towards the \(z\)-axis (clockwise). In the starting coordinate system the \(z\)-axis is aligned with the incoming OAR and as such we can call it the incoming coordinate system. We consider surfaces with plane-symmetry with respect to the \(yz\)-plane and as such the rotations are around the \(x\)-axis. The rotation mapping of the screen around the \(x\)-axis for the phase-space coordinates can be found as a Lie transformation [9]. Here we present an equivalent derivation. The momentum coordinates \(p_{x},p_{y},p_{z}\) are rotated into the coordinates \(p_{x}^{\prime},p_{y}^{\prime},p_{z}^{\prime}\) according to the well-known rotation matrix \[\begin{pmatrix}p_{x}^{\prime}\\ p_{y}^{\prime}\\ p_{z}^{\prime}\end{pmatrix}=\begin{pmatrix}1&0&0\\ 0&\cos\theta&\sin\theta\\ 0&-\sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}p_{x}\\ p_{y}\\ p_{z}\end{pmatrix}. \tag{2.8}\] The expressions for the position coordinates are more complex. In fact, recall that we map the intersection points of the light rays with the (rotated) standard screens; see Figure 2.

Figure 2: Upon rotation of axes we map the position coordinate \(q_{y}\) to the position coordinate \(q_{y}^{\prime}\) relating to the same ray (red).

As such, let us first fix the parametrization of the ray and the normal equation of the rotated screen. A point of the ray can be parametrized by \[\begin{pmatrix}\mathbf{q}\\ 0\end{pmatrix}+\lambda\begin{pmatrix}\mathbf{p}\\ p_{z}\end{pmatrix},\quad\lambda\in\mathbb{R}. \tag{2.9}\] After the first rotation, the equation of the rotated standard screen, with normal \((0,\sin\theta,-\cos\theta)\) and passing through \((0,0,0)\), reads \[y\sin\theta-z\cos\theta=0. \tag{2.10}\] By substituting the parametrization (2.9) into the rotated standard screen equation (2.10) we can solve for \(\lambda\) to get the point of intersection. We get \[\lambda=\frac{q_{y}\,\sin\theta}{p_{z}\cos\theta-p_{y}\,\sin\theta}. \tag{2.11}\] Substituting the value in Eq. (2.11) in the parametrization (2.9) gives us the coordinates of the point of intersection of the considered ray and the tilted screen. The last step is to derive the position coordinates with respect to the rotated coordinate system, which corresponds to dividing the \(y\)-coordinate by \(\cos\theta\).
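To make the elementary maps concrete, here is a minimal NumPy sketch of free propagation (2.3) and of the reflection step (2.6)-(2.7); the screen rotation for the position coordinates is completed in Eq. (2.12) just below and is added to this sketch afterwards. The function names, the fixed-point iteration for \(\bar{\mathbf{q}}\) and the iteration count are our own choices and are not part of the paper.

```python
import numpy as np

N = 1.0  # refractive index of the medium (air/vacuum), n = 1

def pz_of(p, sigma=1.0, n=N):
    """z-momentum p_z = sigma*sqrt(n^2 - |p|^2), so that H(p) = -p_z for forward rays."""
    return sigma * np.sqrt(n**2 - p[0]**2 - p[1]**2)

def propagate(q, p, d, n=N):
    """Free propagation over a distance d along the optical axis, Eq. (2.3)."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    return q + d * p / pz_of(p, n=n), p

def reflect(q, p, p_z, zeta, grad_zeta, n_iter=25):
    """Reflection at the surface z = zeta(q), Eqs. (2.6)-(2.7).

    The implicit intersection point q_bar of Eq. (2.7) is found by fixed-point iteration."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    q_bar = q.copy()
    for _ in range(n_iter):
        q_bar = q + zeta(q_bar) * p / p_z
    g = np.asarray(grad_zeta(q_bar), float)
    fac = 2.0 * (p @ g - p_z) / (1.0 + g @ g)
    p_new = p - fac * g                               # Eq. (2.6a)
    p_z_new = p_z + fac                               # Eq. (2.6b)
    q_new = q_bar - zeta(q_bar) * p_new / p_z_new     # Eq. (2.7)
    return q_new, p_new, p_z_new
```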
The map for positive rotation around the \(x\)-axis of the position coordinates \(\mathbf{q}\) by an angle \(\theta\) reads: \[q^{\prime}_{x}=\frac{q_{x}\,p_{z}\cos\theta-(q_{x}\,p_{y}-q_{y}\,p_{x})\sin \theta}{p_{z}\cos\theta-p_{y}\sin\theta}, \tag{2.12a}\] \[q^{\prime}_{y}=\frac{q_{y}\,p_{z}}{p_{z}\cos\theta-p_{y}\sin\theta}. \tag{2.12b}\] In Eq. (2.12) the condition \(p_{z}\cos\theta-p_{y}\sin\theta=0\) implies that the considered ray is parallel to the rotated plane and as such will not intersect with the plane. ### The Fundamental Map With the transformations described in Eqs. (2.6)-(2.8) and (2.12) we can rotate the coordinate system for the \(z\)-axis to be aligned with the surface normal, reflect the incoming rays and rotate the system again to align the \(z\)-axis with the outgoing OAR. If propagation before and after the surface are added to this map, we will call it the _fundamental map_. This composition of transformations can be expanded up to the desired order in terms of the phase-space coordinates \((\mathbf{q},\mathbf{p})\). After reflection, \(\sigma\) in the Hamiltonian described in Eq. (2.1) changes sign. An intuitive way to understand this is to recall Eq. (2.1) with \(\sigma=1\) where \(H=-p_{z}\). By the condition \(\hat{\mathbf{k}}_{1}\cdot\hat{\mathbf{n}}<0\) that we imposed at reflection, the reflected OAR travels in the same \(z\)-direction as the surface normal, which is opposite to the one of the incoming OAR. This would lead to negative propagation distances. Since we prefer to consider forward moving rays, we opt for dealing with a left-handed coordinate system and align the \(z^{\prime}\)-axis in Figure 3 with the direction of the reflected OAR after the second rotation. It can be verified that this change does not influence the form of our rotation map and the reflection map remains also unchanged. The only important caveats are that the reflective surface must always be described in the coordinate system of the incoming OAR and that angles are positive when the \(y\)-axis rotates towards the \(z\)-axis. With the mappings described in Eqs. (2.6)-(2.8) and (2.12) we can concatenate them into a reflection plus rotation mapping. We define this composition of transformations by \(\mathcal{S}(\theta)\). Let \(\mathcal{R}(\theta)\) denote the rotation mapping by an angle of \(\theta\) and \(\mathcal{T}\) the reflection mapping. Then, we can concisely describe the map \(\mathcal{S}(\theta)\) as \[\mathcal{S}(\theta)=\mathcal{R}(\theta)\,\mathcal{T}\,\mathcal{R}(\theta). \tag{2.13}\] In Figure 1, we have that \(\mathcal{S}(\theta)\) maps \(A\) to \(B\). This definition of \(\mathcal{S}(\theta)\) is necessary to apply the Lie algebraic method. Note that the surface equation is given with respect to the coordinate system denoted by the index \(s\) in Figure 3. To conclude, concatenating \(\mathcal{S}(\theta)\) with propagation in object and image-space, \(\mathcal{P}_{\text{ob}}\) and \(\mathcal{P}_{\text{im}}\) respectively, constitutes the fundamental map \(\mathcal{M}\) necessary for our description of the optical system \[\mathcal{M}=\mathcal{P}_{\text{im}}\mathcal{S}(\theta)\mathcal{P}_{\text{ob}}. \tag{2.14}\] ## 3 Lie Algebraic Tools With the help of the Lie algebraic method it is possible to construct operators that reproduce the actions of propagation, reflection and rotation. These operators enable us to derive closed form expressions for the aberration components of an arbitrary optical system. 
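Before turning to the Lie algebraic machinery, the screen rotation (2.8), (2.12) and the composition into \(\mathcal{S}(\theta)\) and the fundamental map (2.14) can be added to the sketch above (the helpers `propagate`, `reflect` and `pz_of` are assumed from it). The final sign flip of \(p_z\) is our reading of the left-handed outgoing frame discussed in Section 2.4 and is the only interpretive choice in this transcription.

```python
import numpy as np
# continues the sketch above (propagate, reflect, pz_of)

def rotate(q, p, p_z, theta):
    """Screen rotation about the x-axis, Eqs. (2.8) and (2.12)."""
    qx, qy = q
    px, py = p
    c, s = np.cos(theta), np.sin(theta)
    den = p_z * c - py * s          # vanishes only for rays parallel to the rotated screen
    q_new = np.array([(qx * p_z * c - (qx * py - qy * px) * s) / den, qy * p_z / den])
    return q_new, np.array([px, c * py + s * p_z]), -s * py + c * p_z

def surface_map(q, p, theta, zeta, grad_zeta):
    """S(theta) = R(theta) T R(theta), Eq. (2.13)."""
    p_z = pz_of(p)                                    # incoming ray, sigma = +1
    q, p, p_z = rotate(q, p, p_z, theta)              # z-axis onto the surface normal
    q, p, p_z = reflect(q, p, p_z, zeta, grad_zeta)
    q, p, p_z = rotate(q, p, p_z, theta)              # z-axis onto the reflected OAR
    # left-handed outgoing frame: the reflected OAR travels in the +z' direction
    return q, p, -p_z

def fundamental_map(q, p, theta, zeta, grad_zeta, s_ob, s_im):
    """M = P_im S(theta) P_ob, Eq. (2.14)."""
    q, p = propagate(q, p, s_ob)
    q, p, _ = surface_map(q, p, theta, zeta, grad_zeta)
    return propagate(q, p, s_im)

# sanity check: the OAR (the origin of phase space) is mapped onto itself
zeta0 = lambda q: -0.005 * q[0]**2 - 0.004 * q[1]**2            # an example surface (our choice)
grad0 = lambda q: np.array([-0.010 * q[0], -0.008 * q[1]])
print(fundamental_map([0.0, 0.0], [0.0, 0.0], np.pi / 6, zeta0, grad0, 100.0, 80.0))
# -> (array([0., 0.]), array([0., 0.]))
```

The printed result confirms the property stated above that the OAR sits at the origin of phase-space before and after the fundamental map.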
A more detailed description of the Lie algebraic tools used in this work can be found in [9, 12, 13]. Here, we briefly introduce the main concepts. The space of functions on phase-space becomes a Lie algebra when endowed with the Poisson bracket \([\cdot,\cdot]\). The Poisson bracket of two functions \(f(\boldsymbol{q},\boldsymbol{p}),g(\boldsymbol{q},\boldsymbol{p})\) is defined as \[[f,g]=\frac{\partial f}{\partial\boldsymbol{q}}\boldsymbol{\cdot}\,\frac{\partial g}{\partial\boldsymbol{p}}-\frac{\partial f}{\partial\boldsymbol{p}}\boldsymbol{\cdot}\,\frac{\partial g}{\partial\boldsymbol{q}}. \tag{3.1}\] Accordingly, we can associate with each \(f\) a Lie operator \([f,\cdot]\) that acts on a second function \(g\) by taking the Poisson bracket of the two. For example, \([q_{1},\cdot]=\partial\cdot/\partial p_{1}\) and for vectors we have \([\boldsymbol{q},\cdot]=\partial\cdot/\partial\boldsymbol{p}\). Using the Poisson bracket, we can associate to each function \(f\) on phase-space a mapping \(\exp([f,\cdot])\), called a Lie transformation, defined as \[\exp([f,\cdot])=\sum_{k=0}^{\infty}\frac{[f,\cdot]^{k}}{k!}, \tag{3.2}\] where \([f,\cdot]^{0}=I\) and \([f,\cdot]^{k}=[f,[f,\cdot]^{k-1}]\) for \(k>1\). Suppose \(f\) is only dependent on \(\boldsymbol{q}\), i.e., \(f=f(\boldsymbol{q})\), then \[\exp(\left[f(\mathbf{q}),\cdot\,\right])\mathbf{q}=\mathbf{q}\quad\text{and}\quad\exp(\left[f(\mathbf{q}),\cdot\,\right])\mathbf{p}=\mathbf{p}+\frac{\partial f}{\partial\mathbf{q}}. \tag{3.3}\] In Eq. (3.3) the infinite series is truncated after the first two terms as any subsequent one is equal to zero. Note that Lie transformations are applied component-wise to vectors.

Figure 3: Definition of incoming \(xyz\) and outgoing \(x^{\prime}y^{\prime}z^{\prime}\)-coordinate systems. The auxiliary system denoted by the index \(s\) is where the surface equation is defined.

A map \((\mathbf{q},\mathbf{p})\mapsto(\mathbf{q}^{\prime}(\mathbf{q},\mathbf{p}),\ \mathbf{p}^{\prime}(\mathbf{q},\mathbf{p}))\) is said to be a symplectic transformation, if it satisfies [9, 11]: \[\left[q_{i}^{\prime},q_{j}^{\prime}\right]=\left[q_{i},q_{j}\right]=0, \tag{3.4}\] \[\left[p_{i}^{\prime},p_{j}^{\prime}\right]=\left[p_{i},p_{j}\right]=0,\] \[\left[q_{i}^{\prime},p_{j}^{\prime}\right]=\left[q_{i},p_{j}\right]=\delta_{ij},\] where \(\delta_{ij}\) is the Kronecker delta. Symplectic transformations preserve volumes in phase-space. In fact, light propagation, reflection and rotation are all symplectic maps. It can be proven that a mapping defined as in Eq. (3.2) is symplectic [11]. Conversely, symplectic mappings \(\mathcal{M}\) that map the origin to itself, i.e., \(\mathcal{M}(\mathbf{0})=\mathbf{0}\), can be represented as an infinite concatenation of Lie transformations of the form \[\mathcal{M}=\exp(\left[g_{2},\cdot\,\right])\exp(\left[g_{3},\cdot\,\right])\cdots, \tag{3.5}\] where the generators \(g_{2},g_{3}\), etc. are homogeneous polynomials in the variables \((\mathbf{q},\mathbf{p})\) of degree \(2,3\), etc. [11]. Here, we omit the concatenation symbol \(\circ\), as it is clear from the context that we are concatenating operators. Recall that a homogeneous polynomial \(g\) of degree \(m\), as in Eq. (3.5), has the following property \[g(\lambda\mathbf{q},\lambda\mathbf{p})=\lambda^{m}g(\mathbf{q},\mathbf{p})\quad\forall\lambda\in\mathbb{R}. \tag{3.6}\] The maps for ray propagation and reflection plus rotation are symplectic and map the origin onto itself [9, 12, 13].
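A short SymPy sketch may help fix these definitions. It checks Eq. (3.3) and the symplecticity conditions (3.4) for the map generated by a polynomial depending only on \(\boldsymbol{q}\); the particular polynomial is an arbitrary example of ours.

```python
import sympy as sp

qx, qy, px, py = sp.symbols('q_x q_y p_x p_y')
Q, P = (qx, qy), (px, py)

def poisson(f, g):
    """Poisson bracket [f, g], Eq. (3.1)."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(Q, P))

def lie_transform(f, g, order=6):
    """Truncated Lie transformation exp([f, .]) g, Eq. (3.2)."""
    term, total = g, g
    for k in range(1, order + 1):
        term = poisson(f, term) / k
        total += term
    return sp.expand(total)

# Eq. (3.3): for f = f(q) the series terminates after the first two terms
a, b = sp.symbols('a b')
f = a * qx**2 + b * qx * qy                 # an arbitrary polynomial in q only (our example)
print([lie_transform(f, v) - v for v in (qx, qy)])                   # -> [0, 0]
print(sp.simplify(lie_transform(f, px) - (px + sp.diff(f, qx))))     # -> 0

# Eq. (3.4): the generated map (q, p) -> (q', p') is symplectic
qx1, qy1, px1, py1 = [lie_transform(f, v) for v in (qx, qy, px, py)]
checks = [poisson(qx1, qy1), poisson(px1, py1),
          poisson(qx1, px1) - 1, poisson(qy1, py1) - 1, poisson(qx1, py1)]
print([sp.simplify(c) for c in checks])                              # -> [0, 0, 0, 0, 0]
```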
It is therefore possible to represent them as an infinite, or approximate them by a truncated, concatenation of Lie transformations according to the result in Eq. (3.5) and then rearrange the Lie transformations using additional Lie tools given in Appendix A, cf. Eq. (A.1) and Eq. (A.2). Our aim is to approximate the fundamental map \(\mathcal{M}\) in Eq. (2.14) of a reflector by means of a truncated concatenation of Lie transformations in ascending order, similarly to the structure of Eq. (3.5). This enables us to clearly distinguish which parts of the map influence which order of aberrations. In fact, generators of order \(k\) are directly related to the transverse ray aberrations of order \(k-1\)[12]. Concatenating multiple fundamental maps representing the different mirrors in our system and disregarding terms that lead to higher order aberrations leads to a map describing the complete optical system - up to the desired order of accuracy in terms of initial phase-space coordinates. ## 4 The Fundamental Element Free propagation, reflection and rotation are symplectic maps and their combined actions map the origin of phase-space, i.e., the OAR, to itself. As such, it is possible to represent the combined actions of reflection and rotation \(\mathcal{S}(\theta)\), see Eq. (2.13), in the form of Eq. (3.5). The polynomials necessary for this representation in terms of Lie transformations are called the _generators_ of the map. It is important to consider the complete reflection with rotation map \(\mathcal{S}(\theta)\) because this ensures that the origin of phase-space, i.e., our OAR, is mapped onto itself. Hence, we have that \((\mathcal{S}(\theta))(\mathbf{0})=\mathbf{0}\) and we can apply the results in Eq. (3.5). Rotation alone does not map the origin of phase-space onto itself. We subsequently concatenate \(\mathcal{S}(\theta)\) with the maps of object and image-space propagation to derive the description of the _fundamental element_ of the optical system. The fundamental element represents the physical counterpart of the fundamental map described at the end of Section 2. This fundamental element is the building block of any arbitrary reflecting optical system with plane-symmetry with respect to the \(yz\)-plane. We restrict our analysis to aberrations of order three and therefore only polynomials up to degree four in Eq. (3.5) are of relevance; see [9, 12, 13]. The generators of free propagation for light rays in a medium of refractive index \(n=1\) are, up to degree four [12], \[h_{2}(\boldsymbol{p})=\frac{1}{2}|\boldsymbol{p}|^{2},\quad h_{4}(\boldsymbol {p})=\frac{1}{8}|\boldsymbol{p}|^{4}. \tag{4.1}\] This means, that if we want to propagate our physical system with initial condition \((\boldsymbol{q},\boldsymbol{p})\) over a distance \(d\) along the optical axis, then the expression \[\begin{pmatrix}\boldsymbol{q}^{\prime}\\ \boldsymbol{p}^{\prime}\end{pmatrix}=\exp(-d\left[h_{2},\cdot\,\right])\exp(- d\left[h_{4},\cdot\,\right])\begin{pmatrix}\boldsymbol{q}\\ \boldsymbol{p}\end{pmatrix}, \tag{4.2}\] is equal, up to third-order terms, to the solution given in Eq. (2.3) [9, 12, 13]. The result of Eq. (4.2) is therefore sufficiently accurate to investigate third-order aberrations. Note that the polynomials in Eq. (4.1) are simply the first two terms in the Taylor expansion of the Hamiltonian defined in Eq. (2.1). 
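Reusing the `poisson`/`lie_transform` helpers and the phase-space symbols from the previous sketch, one can verify symbolically that the truncated Lie concatenation (4.2) with the generators (4.1) reproduces the exact propagation map (2.3) up to third-order terms in the phase-space coordinates.

```python
import sympy as sp
# assumes qx, qy, px, py and poisson / lie_transform from the previous sketch

d, eps = sp.symbols('d epsilon')
h2 = (px**2 + py**2) / 2                    # Eq. (4.1)
h4 = (px**2 + py**2)**2 / 8

# Lie-generated propagation, Eq. (4.2), acting on q_x; the two propagation
# generators commute, so the ordering of the two Lie transformations is immaterial here
lie_qx = lie_transform(-d * h2, lie_transform(-d * h4, qx))

# exact free propagation, Eq. (2.3) with n = 1 and sigma = +1 (so H = -p_z)
exact_qx = qx + d * px / sp.sqrt(1 - px**2 - py**2)

# grade the momenta with a small parameter eps to compare order by order
diff = sp.expand(exact_qx - lie_qx).subs({px: eps * px, py: eps * py})
print(sp.series(diff, eps, 0, 4).removeO())   # -> 0, i.e. agreement up to third order
```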
The mirror equation is given in the coordinate system with its \(z\)-axis aligned with the surface's normal and is of the form \[z=\zeta(\boldsymbol{q})=\sum_{\begin{subarray}{c}2\leq m+n\leq 4\\ m\text{ even}\end{subarray}}c_{mn}q_{x}^{m}q_{y}^{n}. \tag{4.3}\] We consider surface terms up to fourth order, since higher order terms do not influence third-order aberrations. The reflection and rotation mapping \(\mathcal{S}(\theta)\) maps \(\boldsymbol{q}\), \(\boldsymbol{p}\) to \(\boldsymbol{q}^{\prime}\), \(\boldsymbol{p}^{\prime}\). First, the rotation by the angle \(\theta\) is applied to the incoming ray coordinates, cf. Eqs. (2.8),(2.12). Secondly, reflection acts on these already rotated coordinates, cf. Eqs. (2.6),(2.7). Lastly, a second rotation by \(\theta\) maps these coordinates into the final reflected coordinate system. All these transformations - and their concatenation - can be expanded in terms of \((\boldsymbol{q},\boldsymbol{p})\) with the aid of computer algebra software, e.g., Mathematica. The first order expansion of \(\mathcal{S}(\theta)\) reads: \[\begin{split} q_{x}^{\prime}&=q_{x},\\ q_{y}^{\prime}&=q_{y},\\ p_{x}^{\prime}&=p_{x}+4\,c_{20}\cos(\theta)\,q_{x},\\ p_{y}^{\prime}&=p_{y}+4\,c_{02}\sec(\theta)\,q_{y}. \end{split} \tag{4.4}\] Here, the coefficients \(c_{20},c_{02}\), cf. Eq. (4.3), can be related to the radii of curvature of the mirror surface. The polynomial \(g_{2}\) associated with the Lie transformation that generates the linear map in Eq. (4.4) is only dependent on \(\boldsymbol{q}\): \[g_{2}(\boldsymbol{q})=2\,c_{20}\cos(\theta)\,q_{x}^{2}+2\,c_{02}\sec(\theta)\, q_{y}^{2}. \tag{4.5}\] The Lie transformation generated by Eq. (4.5) reads \[\begin{pmatrix}\boldsymbol{q}^{\prime}\\ \boldsymbol{p}^{\prime}\end{pmatrix}=\exp(\left[g_{2}(\boldsymbol{q}),\cdot \,\right])\begin{pmatrix}\boldsymbol{q}\\ \boldsymbol{p}\end{pmatrix}=\begin{pmatrix}\boldsymbol{q}\\ \boldsymbol{p}+\frac{\partial g_{2}(\boldsymbol{q})}{\partial\boldsymbol{q}} \end{pmatrix}. \tag{4.6}\] One can verify that the expression in Eq. (4.6) is the same as the one in Eq. (4.4). Note that, if \(g_{2}\) would also depend on \(\mathbf{p}\), it would generate contributions to the \(\mathbf{q}\)-coordinates, which is undesired; cf. Eq. (4.4). To initiate a more systematic approach, we define the generators \(g_{m}\) in a more general way: \[g_{m}(\mathbf{q},\mathbf{p})=\sum_{i+j+k+l=m}a_{ijkl}\,q_{x}^{i}\,q_{y}^{j}\,p_{x}^{k} \,p_{y}^{l},\quad i,j,k,l\in\mathbb{N},\quad i+k\text{ even} \tag{4.7}\] where the condition \(i+k\) even stems from the symmetry of the optical system itself. In this notation the functions \(g_{2},g_{3},g_{4}\) are defined by their coefficients. It is our goal to determine these coefficients such that \[\mathcal{S}(\theta)\overset{(\ref{eq:2})}{=}\exp([g_{2},\cdot ])\exp([g_{3},\cdot])\exp([g_{4},\cdot]). \tag{4.8}\] The notation \(\overset{(\ref{eq:2})}{=}\) symbolizes that the truncated concatenation of Lie transformations on the right-hand side (RHS) of Eq. (4.8) produces the same expressions as the map \(\mathcal{S}(\theta)\) up to third-order terms in phase-space coordinates. To derive these coefficients, one has to expand the mapping \(\mathcal{S}(\theta)\) up to the order of interest, i.e., 3 in our case. Subsequently, we consider the general form of the generators as given in Eq. (4.7) and compute the action of the concatenation of Lie transformations in Eq. (4.8) on the phase-space variables. 
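As an independent cross-check of our own, the linear expansion (4.4) can be compared against a numerical linearization of the exact map \(\mathcal{S}(\theta)\) implemented in the earlier NumPy sketch; the surface coefficients and angle below are arbitrary test values.

```python
import numpy as np
# assumes surface_map (and the helpers it uses) from the NumPy sketches above

theta, c20, c02 = np.pi / 7, -0.012, -0.008          # arbitrary test values (our choice)
zeta  = lambda q: c20 * q[0]**2 + c02 * q[1]**2
gradz = lambda q: np.array([2 * c20 * q[0], 2 * c02 * q[1]])

def S(v):
    q, p, _ = surface_map(v[:2], v[2:], theta, zeta, gradz)
    return np.concatenate([q, p])

# central finite differences of S at the origin give the linear part of S(theta)
h, J = 1e-6, np.zeros((4, 4))
for j in range(4):
    e = np.zeros(4)
    e[j] = h
    J[:, j] = (S(e) - S(-e)) / (2 * h)

# Eq. (4.4): q' = q and p' = p + diag(4 c20 cos(theta), 4 c02 sec(theta)) q
J_expected = np.eye(4)
J_expected[2, 0] = 4 * c20 * np.cos(theta)
J_expected[3, 1] = 4 * c02 / np.cos(theta)
print(np.max(np.abs(J - J_expected)))   # expected to be small (finite-difference accuracy, ~1e-9 or below)
```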
Since the coefficients of the generator \(g_{k}\) are fully determined by their contributions to the aberrations of order \(k-1\), we can determine the coefficients of the generators in increasing order; see the method described in [12]. The non-zero coefficients of the generators of the reflection plus rotation mapping \(\mathcal{S}(\theta)\) are listed in Table 1 in the form described by Eq. (4.7). We proceed to combine reflection and rotation with propagation before and after the surface. The mapping \(\mathcal{M}\) will describe the action of a fundamental element on the rays from object plane coordinates \((\mathbf{q},\mathbf{p})\) to image plane coordinates \((\mathbf{q}^{\prime},\mathbf{p}^{\prime})\). The map \(\mathcal{M}\) is, up to fourth degree generators, composed as follows: \[\mathcal{M}\overset{(\ref{eq:2})}{=}\underbrace{\exp\left(-s_{ \text{ob}}[h_{2},\cdot]\right)\exp\left(-s_{\text{ob}}[h_{4},\cdot]\right)}_{ \overset{(\ref{eq:2})}{=}\text{propagation from object plane}} \underbrace{\exp([g_{2},\cdot])\exp([g_{3},\cdot])\exp([g_{4}, \cdot])}_{\overset{(\ref{eq:2})}{=}\mathcal{S}(\theta)}\\ \underbrace{\exp\left(-s_{\text{im}}[h_{2},\cdot]\right)\exp \left(-s_{\text{im}}[h_{4},\cdot]\right)}_{\overset{(\ref{eq:2})}{=}\text{ propagation to image plane}}. \tag{4.9}\] Here, \(s_{\text{ob}},s_{\text{im}}\) are the object and image distances measured along the OAR in the sagittal plane. Although it might appear counter-intuitive, due to their unique properties, the order of Lie transformations is left-to-right like the order of transformations undergone by the ray [9]. The object and image distances for the sagittal and tangential planes satisfy the Coddington equations [16]: \[\text{sagittal plane:} \frac{1}{s_{\text{ob}}}+\frac{1}{s_{\text{im}}}=-4\,c_{20}\cos( \theta), \tag{4.10a}\] \[\text{tangential plane:} \frac{1}{t_{\text{ob}}}+\frac{1}{t_{\text{im}}}=-4\,c_{02}\sec( \theta). \tag{4.10b}\] We want to reorder and combine the Lie transformations of Eq. (4.9) into three Lie transformations generated by the functions \(\tau_{2},\tau_{3},\tau_{4}\) such that \[\mathcal{M}\overset{(\ref{eq:2})}{=}\exp([\tau_{2},\cdot])\exp([\tau_{3}, \cdot])\exp([\tau_{4},\cdot]). \tag{4.11}\] This allows us to separate the linear part of the mapping, generated by \(\tau_{2}\), from the higher order parts generated by \(\tau_{3},\tau_{4}\) that induce aberrations. Again, equality up to third-order expansions is sufficient for our current work since we are investigating aberrations up to this same order. The functions \(\tau_{2},\tau_{3},\tau_{4}\) describe the action of a fundamental element up to the expansion order 3. To derive the functions \(\tau_{2},\tau_{3},\tau_{4}\) it is necessary to manipulate the mapping in Eq. (4.9) such that the generators are combined and reordered in ascending order. The procedure has been shown in [12] and a short example can be found in Appendix A. The main tools necessary for these calculations are the Baker-Campbell-Hausdorff (BCH) formula (A.1) and the identity given in Eq. (A.2). The Lie transformation generated by the second degree polynomial \(\tau_{2}\) is more conveniently represented in its matrix form \(M_{G}\) as it is the matrix multiplication of its three components, i.e., object-space propagation, reflection with rotation and image-space propagation. 
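A small numerical illustration of the Coddington equations (4.10): for given \(c_{20}\), \(c_{02}\), incidence angle and object distances, the sagittal and tangential image distances follow directly. The input numbers below are ours and purely illustrative; in general the two image distances differ, which is the astigmatism of oblique incidence.

```python
import numpy as np

def coddington_image_distances(c20, c02, theta, s_ob, t_ob):
    """Solve Eqs. (4.10a)-(4.10b) for the sagittal and tangential image distances."""
    s_im = 1.0 / (-4.0 * c20 * np.cos(theta) - 1.0 / s_ob)   # sagittal plane
    t_im = 1.0 / (-4.0 * c02 / np.cos(theta) - 1.0 / t_ob)   # tangential plane
    return s_im, t_im

# illustrative numbers (our choice): c20 = c02 = -1/200, 30 degree incidence, object at 150
print(coddington_image_distances(-1 / 200, -1 / 200, np.pi / 6, 150.0, 150.0))
```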
We call this the Gaussian part of the mapping \(\mathcal{M}_{G}=\exp(\{\tau_{2},\cdot\,\cdot\,\})\) and the associated \(M_{G}\) reads \[M_{G}=\begin{pmatrix}1&0&s_{\text{im}}&0\\ 0&1&0&s_{\text{im}}\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 4\,c_{20}\cos(\theta)&0&1&0\\ 0&4\,c_{02}\sec(\theta)&0&1\end{pmatrix}\begin{pmatrix}1&0&s_{\text{ob}}&0 \\ 0&1&0&s_{\text{ob}}\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}. \tag{4.12}\] The coefficients of the polynomial \(\tau_{3}\) are given in Table 2 analogously to Eq. (4.7) with coefficients denoted by \(b_{ijkl}\). The expressions for the coefficients of \(\tau_{4}\) are rather lengthy and not useful for the current discussion, but can be found in Appendix B for completeness. We thus have a mapping that describes the fundamental element up to third-order. ### From Optical Element to Optical System To treat optical systems it suffices to concatenate multiple fundamental elements, keeping in mind the sign conventions described at the end of Section 2.4. Each intermediate image plane corresponds to the intermediate object plane of the subsequent mirror. Thus, if one fundamental element is described by Eq. (4.11), then multiple elements are a concatenation of Lie transformations of this form. For example, suppose a two-mirror system where one mirror is described by the generators \(\tau_{k}\) and the other mirror by the generators \(\sigma_{k}\). Then, the map \(\mathcal{M}\) of the complete system, up to third-order contributions, reads, \[\mathcal{M}\stackrel{{(3)}}{{=}}\exp(\,[\tau_{2},\cdot\,])\exp([ \tau_{3},\cdot\,])\exp(\,[\tau_{4},\cdot\,])\exp(\,[\sigma_{2},\cdot\,])\exp( \,[\sigma_{3},\cdot\,])\exp(\,[\sigma_{4},\cdot\,]). \tag{4.13}\] The coefficients of \(\tau_{k},\sigma_{k}\) are completely described by the geometry of the system according to the expressions \(b_{ijkl}\). Previously, we have stressed the importance of having the Lie transformations in ascending order. This allows us to separate the contributions to the different (ascending) orders of aberrations. \begin{table} \begin{tabular}{l l} \hline Coefficients & Values \\ \hline \(b_{0003}\) & \(s_{\text{im}}^{2}\,(a_{0201}-s_{\text{im}}a_{0300})\) \\ \(b_{0021}\) & \(s_{\text{im}}^{2}\,(a_{1110}+a_{2001}-s_{\text{im}}a_{2100})\) \\ \(b_{0102}\) & \(s_{\text{im}}\,(3s_{\text{im}}a_{0300}-2a_{0201})\) \\ \(b_{0120}\) & \(s_{\text{im}}\,(s_{\text{im}}a_{2100}-a_{1110})\) \\ \(b_{0201}\) & \(a_{0201}-3s_{\text{im}}a_{0300}\) \\ \(b_{0300}\) & \(a_{0300}\) \\ \(b_{1011}\) & \(s_{\text{im}}\,(2s_{\text{im}}a_{2100}-a_{1110}+2a_{2001})\) \\ \(b_{1110}\) & \(a_{1110}-2s_{\text{im}}a_{2100}\) \\ \(b_{2001}\) & \(a_{2001}-s_{\text{im}}a_{2100}\) \\ \(b_{2100}\) & \(a_{2100}\) \\ \hline \end{tabular} \end{table} Table 2: Coefficients of \(\tau_{3}\). The necessary computations to reorder Eq. (4.13) rely on the procedure for reordering shown in [12] and make use of the BCH formula (A.1) and the results of Eq. (A.2). During these steps, the composition of low-order aberrations into high-order ones follows directly from the application of the BCH formula. In more complex optical systems the intermediate image planes for the sagittal and tangential rays need not to be located at the same point along the OAR. As such, the choice of the propagation distances for each fundamental element is seemingly unclear. 
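The Gaussian part (4.12) is easy to assemble and test numerically. The sketch below (our own, with arbitrary input values) checks that \(M_G\) is symplectic and that, when \(s_{\text{im}}\) is chosen from the sagittal Coddington equation (4.10a), the image-plane coordinate \(q_x\) no longer depends on \(p_x\).

```python
import numpy as np

def gaussian_matrix(c20, c02, theta, s_ob, s_im):
    """Paraxial part M_G of Eq. (4.12); phase-space ordering (q_x, q_y, p_x, p_y)."""
    def prop(d):
        M = np.eye(4)
        M[0, 2] = M[1, 3] = d
        return M
    mirror = np.eye(4)
    mirror[2, 0] = 4 * c20 * np.cos(theta)
    mirror[3, 1] = 4 * c02 / np.cos(theta)
    return prop(s_im) @ mirror @ prop(s_ob)

c20, c02, theta, s_ob = -1 / 200, -1 / 160, np.pi / 6, 150.0
s_im = 1.0 / (-4 * c20 * np.cos(theta) - 1 / s_ob)       # sagittal Coddington, Eq. (4.10a)
M = gaussian_matrix(c20, c02, theta, s_ob, s_im)

# M_G is symplectic: M^T J M = J, cf. Eq. (3.4)
J = np.block([[np.zeros((2, 2)), np.eye(2)], [-np.eye(2), np.zeros((2, 2))]])
print(np.allclose(M.T @ J @ M, J))          # -> True

# with s_im from Eq. (4.10a), the sagittal imaging condition holds: q_x' has no p_x term
print(abs(M[0, 2]) < 1e-12)                 # -> True
```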
However, whatever propagation distances are chosen, the sum of the intermediate image distance of surface \(j\) and the object distance of surface \(j+1\) must always be equal to the total distance between the two surfaces. Since the propagation mappings commute, see [9, 12, 13], it does not matter what distance is chosen for the image propagation of surface \(j\) or the object distance for surface \(j+1\) as long as their sum remains equal to the distance between the two surfaces. ## 5 Applications We verify the presented methodology using three examples. We recover the surface expansion coefficients of a spherical ellipsoid for a point-to-point imager and the surface expansion coefficients for a focusing mirror as recently presented in [5]. Lastly, we use our proposed method to ray-trace a beam of rays reflected by a biconic mirror and compare with the spot diagram generated using OpticStudio. The first example will be the problem of perfect point-to-point imaging; see Figure 4. Suppose we have an object point on the OAR which is then reflected off a surface onto an image point. A spherical ellipsoid with these two points at its foci will result in perfect imaging [17], i.e., no aberrations will be present.

Figure 4: A spherical ellipsoid as perfect imager between its foci. The OAR is in red and the red dashed lines are other rays originating from the object. The object and image planes are represented. The black dashed line is the major axis of the ellipsoid.

Therefore, if we choose arbitrarily an object and an image point and impose zero aberrations up to third-order for all rays with initial position \(\mathbf{q}^{\rm ob}=\mathbf{0}\), then the solution for the surface coefficients should be the surface expansion terms up to fourth order of the corresponding spherical ellipsoid. We fix the object distance, image distance and the surface coefficients \(c_{20},c_{02}\) to have the desired paraxial properties. Subsequently, the corresponding map given in Eq. (4.11) is applied to the initial coordinates \((\mathbf{0},\boldsymbol{p}^{\mathrm{ob}})\). The expression for the final position coordinates at the image plane \(\boldsymbol{q}^{\mathrm{im}}\) is of the form: \[\boldsymbol{q}^{\mathrm{im}}=\boldsymbol{q}^{\mathrm{im}}(\mathbf{0},\boldsymbol{p}^{\mathrm{ob}}). \tag{5.1}\] Eq. (5.1) is a polynomial dependent only on the initial direction \(\boldsymbol{p}^{\mathrm{ob}}\), where each monomial coefficient will depend on the chosen parameters and the - yet undetermined - higher order coefficients \(c_{mn}\) of the reflecting surface; see Eq. (4.3). The requirement of zero aberration, i.e., \(\boldsymbol{q}^{\mathrm{im}}=\mathbf{0}\), simply translates into setting all monomial coefficients in Eq. (5.1) equal to zero. The resulting system of equations will determine the value of the surface expansion coefficients.
For example, if we choose a spherical ellipsoid with major axis \(a=20\) and minor axis \(b=10\), then with the corresponding initial parameters for the system \(s_{\mathrm{ob}}=s_{\mathrm{im}}=20\) and \(c_{20}=-1/20\), \(c_{02}=-1/80\), we get the following system of equations for the unknown coefficients \(c_{mn}\) with \(m=0,2,4\) and \(2\leq m+n\leq 4\): \[\begin{cases}80\,p_{x}^{3}(8000c_{40}+1)=0,\\ -80\,p_{x}p_{y}^{2}\left(2400\sqrt{3}c_{03}+200\sqrt{3}c_{21}-16000c_{22}-1 \right)=0,\\ 32000\,p_{x}p_{y}c_{21}=0,\\ 80\,p_{x}^{2}p_{y}\left(2400\sqrt{3}c_{03}+200\sqrt{3}c_{21}+16000c_{22}+1 \right)=0,\\ 16000\,p_{x}^{2}c_{21}=0,\\ 80\,p_{y}^{3}(128000c_{04}+1)=0,\\ 192000\,p_{x}^{2}c_{03}=0.\end{cases} \tag{5.2}\] The solution to Eqs. (5.2), computed with exact arithmetic, gives the surface expansion coefficients shown in Table 3. The coefficients in Table 3 are the same we would get by directly expanding the ellipsoid's equation \[z=\zeta(\boldsymbol{q})=\frac{b}{a}\sqrt{a^{2}-q_{y}^{2}-\frac{a^{2}}{b^{2}}q_ {x}^{2}}-b, \tag{5.3}\] in terms of \(q_{x},q_{y}\) around the origin. In fact, the surface equation (5.3) represents the ellipsoid at the point of impact of the OAR with respect to the coordinate system aligned with its normal at that point; see Figure 4. The next example reproduces some of the results given in [5]. Here, the authors calculate the surface expansion coefficients for a single mirror where again zero third-order aberrations are \begin{table} \begin{tabular}{c c} \hline \(c_{mn}\) & Values \\ \hline \(c_{21}\) & \(0\) \\ \(c_{03}\) & \(0\) \\ \(c_{40}\) & \(-1/8000\) \\ \(c_{22}\) & \(-1/16000\) \\ \(c_{04}\) & \(-1/128000\) \\ \hline \end{tabular} \end{table} Table 3: Surface expansion coefficients for the spherical ellipsoid defined according to Eq. (4.3). imposed with the additional condition that the initial momenta are equal to zero, i.e., \(\mathbf{p}^{\rm ob}=\mathbf{0}\); see Figure 5. The initial surface parameters are the effective radius of curvature \(R=-200\), cf. [5], for both the sagittal and tangential planes and the incidence angle of \(\theta=-0.2\). The expression for the final position coordinates \(\mathbf{q}^{\rm im}\) is now a polynomial in \(\mathbf{q}^{\rm ob}\), i.e., \[\mathbf{q}^{\rm im}=\mathbf{q}^{\rm im}(\mathbf{q}^{\rm ob},\mathbf{0}). \tag{5.4}\] Setting all monomial coefficients of Eq. (5.4) to zero, results in the following system of equations for the \(c_{mn}\) with \(m=0,2,4\) and \(2\leq m+n\leq 4\): \[\begin{cases}\dfrac{q_{x}^{3}\left(-8R^{3}\cos(\theta)c_{40}+ \sec^{2}(\theta)-1\right)}{2R^{2}}=0,\\ \dfrac{q_{x}q_{y}^{2}\sec(\theta)\left(-3\sec(\theta)\left(2R^{2}( \sin(2\theta)c_{21}-2\tan(\theta)c_{03})+\cos(2\theta)-1\right)-8R^{3}c_{22} \right)}{4R^{2}}=0,\\ -\dfrac{q_{x}q_{y}\left(2R^{2}c_{21}+\tan(\theta)\right)}{R}=0,\\ -\dfrac{q_{x}^{2}q_{y}\sec(\theta)\left(5\sin(\theta)\left(2R^{2}c_{21}+\tan( \theta)\right)+2R^{2}(3\tan(\theta)\sec(\theta)c_{03}+2Rc_{22})\right)}{2R^{2} }=0,\\ -\dfrac{q_{x}^{2}\left(2R^{2}c_{21}+\tan(\theta)\right)}{2R}=0,\\ \dfrac{q_{y}^{3}\sec^{3}(\theta)\left(-32R^{2}(2\sin(\theta)c_{03}+Rc_{04})-3 \cos(\theta)+3\cos(3\theta)\right)}{8R^{2}}=0,\\ -\dfrac{3q_{y}^{2}\left(2R^{2}\sec^{2}(\theta)c_{03}+\tan(\theta)\right)}{2R} =0.\end{cases} \tag{5.5}\] The Eqs. (5.5) are solved using exact arithmetic and give results for the surface expansion coefficients shown in Table 4, which agree exactly with those given in [5]. 
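Returning to the first example, the coefficients of Table 3 can be checked independently by Taylor-expanding the ellipsoid equation (5.3) around the impact point of the OAR; the short SymPy sketch below does this in exact arithmetic.

```python
import sympy as sp

qx, qy, eps = sp.symbols('q_x q_y epsilon')
a, b = sp.Integer(20), sp.Integer(10)       # semi-axes of the spherical ellipsoid

# surface equation (5.3), expanded around the impact point of the OAR
zeta = (b / a) * sp.sqrt(a**2 - qy**2 - (a**2 / b**2) * qx**2) - b
poly = sp.series(zeta.subs({qx: eps * qx, qy: eps * qy}), eps, 0, 5).removeO()
poly = sp.expand(poly.subs(eps, 1))

coeffs = {(m, n): poly.coeff(qx, m).coeff(qy, n)
          for (m, n) in [(2, 0), (0, 2), (4, 0), (2, 2), (0, 4)]}
print(coeffs)
# -> {(2, 0): -1/20, (0, 2): -1/80, (4, 0): -1/8000, (2, 2): -1/16000, (0, 4): -1/128000}
```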
The last example we present in this paper is a comparison between spot diagrams of a biconic mirror when computed using OpticStudio and our Lie method; see Figure 6. The mapping in Eq. (4.11) generates a third-degree polynomial in phase-space variables which can be used as a ray-tracer between object and image plane coordinates. In Figure 7, we can see the difference in the ray-tracing for an off-axis beam of rays originating from the object point at position Figure 5: A focusing reflector for an object at infinity as used in [5]. \(\mathbf{q}=(-0.5,0.5)\) and direction domain \(\mathbf{p}\in[-0.0075,0.0125]\times[-0.0125,0.0075]\) and for an on-axis beam of rays originating from \(\mathbf{q}=(0,0)\) with direction domain \(\mathbf{p}\in[-0.01,0.01]^{2}\). For both cases the object and image distances are \(s_{\rm ob}=200,\ s_{\rm im}=100\) and the OAR has an incidence angle equal to \(\theta=\pi/6\). The surface equation of the biconic is: \[z=\zeta(\mathbf{q})=\frac{c_{x}q_{x}^{2}+c_{y}q_{y}^{2}}{1+\sqrt{1-(1+\kappa_{x})c _{x}^{2}q_{x}^{2}-(1+\kappa_{y})c_{y}^{2}q_{y}^{2}}}, \tag{5.6}\] with \(\kappa_{x}=\kappa_{y}=0\) and \(c_{x}=-\frac{\sqrt{3}}{200}\), \(c_{y}=-\frac{3\sqrt{3}}{800}\). Using the surface expansion coefficients of Eq. (5.6) we can determine the necessary coefficients \(b_{ijkl}\) for the Lie operators and the resulting spot diagram coincides almost perfectly with the OpticStudio ray tracing. The maximum distance between the coordinates given by the two methods in the examples is \(\Delta_{\rm max}=9\times 10^{-5}\). ## 6 Conclusions In this paper we extend the procedure presented for rotationally symmetric systems in [12, 13, 9] to mirror systems with only planar symmetry. Starting from a set of analytical ray-tracing \begin{table} \begin{tabular}{c c} \hline \(c_{mn}\) & Values \\ \hline \(c_{21}\) & \(2.53388\times 10^{-6}\) \\ \(c_{03}\) & \(2.43386\times 10^{-6}\) \\ \(c_{40}\) & \(-6.55111\times 10^{-10}\) \\ \(c_{22}\) & \(-3.77553\times 10^{-9}\) \\ \(c_{04}\) & \(-3.02209\times 10^{-9}\) \\ \hline \end{tabular} \end{table} Table 4: Surface expansion coefficients for the second example defined according to Eq. (4.3). Figure 6: Sketch of the biconic reflector in Eq. (5.6). The point objects 1 and 2 are imaged paraxially onto their primed counterparts. equations, we expand them up to third-order. The information about these expansions is then encoded into the associated Lie transformations. We derive the generator polynomials for the Lie transformations up to fourth degree. Thus, the method produces third-order analytical expressions for the transverse ray aberrations for an arbitrary mirror with planar symmetry. We calculate coefficients of the generators for a single mirror. These coefficients depend only on the geometrical information of the mirror itself. It is therefore possible to describe an arbitrary optical system as the concatenation of single mirrors since for each mirror the associated generated polynomials are known. Complex phenomena like lower order aberrations combining into higher order ones are captured by the method. We verified our results with three applications. In the first two, we show how it is possible to use the analytic expressions of the aberrations to determine the freeform coefficients of the mirror surface that eliminate aberrations up to third-order in the case of a point object and an object at infinity. 
The last example shows that the aberration expressions can also be used for ray tracing (up to the order of accuracy that has been used in the Lie transformations). Here, we see excellent agreement between the Lie-generated spot diagrams and the ones generated by OpticStudio. The authors now aim to explore the application of the shown method to the limiting case of grazing incidence and to investigate possible applications for the determination of mirror systems free of, or with reduced, third-order aberrations. The latter can serve as advantageous starting designs for complex mirror systems. Additionally, we intend to work out the relation between the Lie aberration coefficients and the wavefront aberration coefficients described in [1, 2]. ## Appendix A Additional Lie Tools The Baker-Campbell-Hausdorff (BCH) formula describes how two Lie transformations generated by \(f\) and \(g\) can be combined into a single one generated by \(k\): \[\exp(\left[k,\cdot\,\right])=\exp(\left[f,\cdot\,\right])\exp(\left[g,\cdot\, \right]),\] (A.1a) where \[k=f+g+[f,g]/2+(\left[f,[f,g]\right]+[g,[g,f]])/12+\cdots\quad.\] (A.1b) In the current discussion, through the BCH formula Eq. (A.1) the lower degree generators combine into higher degree contributions and therefore the aberrations do so as well [9, 12, 13]. Furthermore, it can be proven that the following identity for a triplet of Lie transformations Figure 7: Spot diagrams at image plane. Left: Off-axis object point. Right: On-axis object point. holds [11]: \[\exp([g,\cdot\,])\exp([f,\cdot\,])\exp(-[g,\cdot\,])=\exp([k,\cdot\,]),\] (A.2) \[k=\exp([g,\cdot\,])f.\] These tools are important when composing Lie transformations. Consider the example of two mirrors described by the generators \(f_{2},f_{3}\) and \(g_{2},g_{3}\). To derive the ordered composition map \(\mathcal{M}\) of the combined system we proceed as follows: \[\mathcal{M} =\exp([f_{2},\cdot\,])\exp([f_{3},\cdot\,])\exp([g_{2},\cdot\,]) \exp([g_{3},\cdot\,])\] \[=\exp([f_{2},\cdot\,])\underbrace{\exp([g_{2},\cdot\,])\exp(-[g_ {2},\cdot\,])}_{=\mathcal{I}}\exp([f_{3},\cdot\,])\exp([g_{2},\cdot\,])\exp([g_ {3},\cdot\,])\] \[=\underbrace{\exp([f_{2},\cdot\,])\exp([g_{2},\cdot\,])}_{=\exp( \{[k_{2},\cdot\,])}}\underbrace{\exp(-[g_{2},\cdot\,])\exp([f_{3},\cdot\,]) \exp([g_{2},\cdot\,])}_{=\exp([f_{3}^{\mathrm{tr}},\cdot\,])}\exp([g_{3}, \cdot\,])\] \[=\exp([k_{2},\cdot\,])\exp([f_{3}^{\mathrm{tr}},\cdot\,])\exp([g_ {3},\cdot\,])\] \[\overset{(\ref{eq:2})}{=}\exp([k_{2},\cdot\,])\exp([k_{3},\cdot\,]),\] (A.3) where \(f_{3}^{\mathrm{tr}}=\exp(-[g_{2},\cdot\,])f_{3}\) according to Eq. (A.2) and \(k_{3}=f_{3}^{\mathrm{tr}}+g_{3}\) according to Eq. (A.1). The Lie transformations generated by \(f_{2},g_{2}\) have related matrices and the product of these matrices determines the generator \(k_{2}\). The insertion of the identity map \(\mathcal{I}\) in Eq (A.3) allows us to simultaneously reorder the Lie transformations and apply Eq. (A.2). Note that, by using the BCH formula there would be additional generators of degree 4 and higher, which can be neglected if we consider ordered compositions of generators up to degree 3; see [12, 13]. ## Appendix B Generator Coefficients Here the reader will find the non-zero coefficients of \(\tau_{4}\) in Eq. (4.11). Funding.TKI program "Photolitho M&CS" (TKI-HTSM 19.0162) Acknowledgements.The authors thank Teus Tukker (ASML) for his fruitful remarks. Disclosures.The authors declare no conflicts of interest. 
Data availability.Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2304.07888
Gibbons' conjecture for entire solutions of master equations
In this paper, we establish a generalized version of Gibbons' conjecture in the context of the master equation \begin{equation*} (\partial_t-\Delta)^s u(x,t)=f(t,u(x,t)) \,\, \mbox{in}\,\, \mathbb{R}^n\times\mathbb{R}. \end{equation*} We show that, for each $t\in\mathbb{R}$, the bounded entire solution $u(x,t)$ must be monotone increasing in one direction, and furthermore it is one-dimensionally symmetric under a certain uniform convergence assumption on $u$ and an appropriate decreasing condition on $f$. These conditions are slightly weaker than their counterparts proposed in the original Gibbons' conjecture. To overcome the difficulties in proving the Gibbons' conjecture and the impediments caused by the strong correlation between space and time of the fully fractional heat operator $(\partial_t-\Delta)^s$, we introduce some new ideas and provide several new insights. More precisely, we first derive a weighted average inequality, which not only provides a straightforward proof for the maximum principle in bounded domains, but also plays a crucial role in further deducing the maximum principle in unbounded domains. Such average inequality and maximum principles are essential ingredients to carry out the sliding method, and then we apply this direct method to prove the Gibbons' conjecture in the setting of the master equation. It is important to note that the holistic approach developed in this paper is highly versatile, and will provide useful tools in investigating various qualitative properties of solutions as well as in establishing the Gibbons' conjecture for a broad range of fractional elliptic and parabolic equations and systems.
Wenxiong Chen, Lingwei Ma
2023-04-16T21:04:16Z
http://arxiv.org/abs/2304.07888v1
# Gibbons' conjecture for entire solutions of master equations ###### Abstract. In this paper, we establish a generalized version of Gibbons' conjecture in the context of the master equation \[(\partial_{t}-\Delta)^{s}u(x,t)=f(t,u(x,t))\text{ in }\mathbb{R}^{n}\times \mathbb{R}.\] We show that, for each \(t\in\mathbb{R}\), the bounded entire solution \(u(x,t)\) must be monotone increasing in one direction, and furthermore it is one-dimensional symmetric under certain uniform convergence assumption on \(u\) and an appropriate decreasing condition on \(f\). These conditions are slightly weaker than their counter parts proposed in the original Gibbons' conjecture. To overcome the difficulties in proving the Gibbons' conjecture and the impediments caused by the strong correlation between space and time of fully fractional heat operator \((\partial_{t}-\Delta)^{s}\), we introduce some new ideas and provide several new insights. More precisely, we first derive a weighted average inequality, which not only provides a straightforward proof for the maximum principle in bounded domains, but also plays a crucial role in further deducing the maximum principle in unbounded domains. Such average inequality and maximum principles are essential ingredients to carry out the sliding method, and then we apply this direct method to prove the Gibbons' conjecture in the setting of the master equation. It is important to note that the holistic approach developed in this paper is highly versatile, and will become useful tools in investigating various qualitative properties of solutions as well as in establishing the Gibbons' conjecture for a broad range of fractional elliptic and parabolic equations and systems. _Mathematics Subject classification_ (2020): 35R11; 35K58; 35B50; 26A33. _Keywords:_ Gibbons' conjecture; master equation; average inequality, maximum principles in unbounded domains, sliding method; monotonicity; one-dimensional symmetry. ## 1. Introduction Gibbons' conjecture is associated with a striking question put forth by Italian mathematician Ennio De Giorgi [21] during the 1970s, known as the following **De Giorgi's conjecture.** Suppose that \(u(x)\) is an entire solution of the equation \[-\Delta u=u-u^{3},\text{ }x=(x^{\prime},x_{n})\in\mathbb{R}^{n}, \tag{1.1}\] satisfying \[|u(x)|\leq 1\text{ and }\frac{\partial u}{\partial x_{n}}>0\text{ in }\mathbb{R}^{n}.\] Then the level sets of \(u(x)\) must be hyperplanes, at least for \(n\leq 8\). The dimensional restrictions in De Giorgi's conjecture is related to the Bernstein problem (cf. [41]), which asserts that the minimal graph in Euclidean space must be a hyperplane, as long as the dimension of the ambient space does not exceed \(8\). Indeed, Bombieri, De Giorgi and Giusti [4] found that the minimal graph is not a hyperplane in \(\mathbb{R}^{n}\) with \(n\geq 9\). Despite attracting a significant amount of attention and study from numerous mathematicians for a long time, this challenging conjecture remains unproven in its full generality even today. It has only been completely resolved in \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\) by Ghoussoub and Gui [26] and Amerosio and Cabre [1], respectively. 
When the dimension \(4\leq n\leq 8\), Ghoussoub and Gui [27], and Savin [40] confirmed this conjecture under an additional convergence assumption that \[\lim_{x_{n}\to\pm\infty}u(x^{\prime},x_{n})=\pm 1\text{ for any }x^{\prime}\in\mathbb{R}^{n-1}.\] Instead, the answer is definitely negative as soon as the dimension of \(\mathbb{R}^{n}\) becomes greater than \(8\), since Pino, Kowalczy and Wei [22] presented counterexamples to De Giorgi's conjecture in high-dimensional spaces. Motivated by the problem of detecting the shape of interfaces in cosmology, Gary W. Gibbons [8] proposed a weaker version of De Giorgi's conjecture that replaces the one-direction monotonic condition \(\frac{\partial u}{\partial x_{n}}>0\) by a uniform convergence assumption \[\lim_{x_{n}\to\pm\infty}u(x^{\prime},x_{n})=\pm 1\text{ uniformly with respect to }x^{\prime}\in\mathbb{R}^{n-1}. \tag{1.2}\] This is referred to as the Gibbons' conjecture. **Gibbons' conjecture.** Let \(u(x)\) be an entire solution of (1.1) satisfying \(|u(x)|\leq 1\) and (1.2), then \(u(x)\) is monotonically increasing with respect to \(x_{n}\) and depends exclusively on \(x_{n}\). Fortunately, Gibbons' conjecture has been successfully established in any dimensions by various methods, including the moving plane method by Farina [24], the probabilistic arguments by Barlow, Bass, and Gui [3], and the sliding method by Berestycki, Hamel, and Monneau [5]. It is worth mentioning that their results applied to more general nonhomogeneous terms \(f\) than the De Giorgi-type nonlinearities \(f(u)=u-u^{3}\). Among them, the celebrated sliding method was first introduced by Berestycki and Nirenberg [6] to establish some qualitative properties of positive solutions to the local elliptic equations. Afterwards, Wu and Chen [44] developed a direct sliding method, which is valuable in many applications, such as in deriving monotonicity, one-dimensional symmetry, uniqueness, and nonexistence of solutions to elliptic equations and systems involving fractional Laplacians as well as fractional \(p\)-Laplacians. Please refer to [10, 30, 35, 36] and an exhaustive survey [11] for details. Such direct method avoids the heavy use of classical extension method established in [15] to overcome the difficulties caused by the non-locality of fractional operators. More importantly, this direct sliding method can be used to extend and prove Gibbons' conjecture in the settings of other fractional elliptic equations involving various nonlocal operators (cf. [9, 23, 31, 42, 37]). In contrast, there are few papers on the Gibbons' conjecture for entire solutions of parabolic equations besides a recent article by Chen and Wu [17]. They developed an appropriate sliding method to prove the Gibbons' conjecture for entire solutions of the following fractional reaction-diffusion equation \[u_{t}(x,t)+(-\Delta)^{s}u(x,t)=f(t,u(x,t)),\ \text{in}\ \mathbb{R}^{n}\times \mathbb{R}. \tag{1.3}\] Inspired by the previous literature, in this paper, we focus on the Gibbons' conjecture for the following master equation \[(\partial_{t}-\Delta)^{s}u(x,t)=f(t,u(x,t)),\ \text{in}\ \mathbb{R}^{n}\times \mathbb{R}. \tag{1.4}\] We show that the entire solution of (1.4) is strictly monotonic in one direction and depends only on one Euclidean variable. Here the fully fractional heat operator \((\partial_{t}-\Delta)^{s}\) was first proposed by M. Riesz in [38]. 
It is a nonlocal operator of order \(2s\) in space and of order \(s\) in time, and can be defined as the following integral form \[(\partial_{t}-\Delta)^{s}u(x,t):=C_{n,s}\int_{-\infty}^{t}\int_{\mathbb{R}^{n }}\frac{u(x,t)-u(y,\tau)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x-y|^{2}}{4(t- \tau)}}\,\mathrm{d}y\,\mathrm{d}\tau, \tag{1.5}\] where the integral in \(y\) is taken in the Cauchy principle value sense, the normalization positive constant \[C_{n,s}=\frac{1}{(4\pi)^{\frac{n}{2}}|\Gamma(-s)|},\] with \(\Gamma(\cdot)\) denoting the Gamma function and \(0<s<1\). Note that this operator is nonlocal both in space and time, since the value of \((\partial_{t}-\Delta)^{s}u\) at a given point \((x,t)\) depends on the values of \(u\) in the whole \(\mathbb{R}^{n}\) and on all the past time before \(t\). The singular integral in (1.5) is well defined provided \[u(x,t)\in C_{x,\,t,\,\mathrm{loc}}^{2s+\epsilon,s+\epsilon}(\mathbb{R}^{n} \times\mathbb{R})\cap\mathcal{L}(\mathbb{R}^{n}\times\mathbb{R})\] for some \(\varepsilon\in(0,1)\), where the slowly increasing function space \(\mathcal{L}(\mathbb{R}^{n}\times\mathbb{R})\) is defined by \[\mathcal{L}(\mathbb{R}^{n}\times\mathbb{R}):=\left\{u(x,t)\in L_{\mathrm{loc }}^{1}(\mathbb{R}^{n}\times\mathbb{R})\mid\int_{-\infty}^{t}\int_{\mathbb{R}^ {n}}\frac{|u(x,\tau)|e^{-\frac{|x|^{2}}{4(t-\tau)}}}{1+(t-\tau)^{\frac{n}{2}+1 +s}}\,\mathrm{d}x\,\mathrm{d}\tau<\infty,\ \forall\,t\in\mathbb{R}\right\},\] and the definition of the local parabolic Holder space \(C_{x,\,t,\,\mathrm{loc}}^{2s+\epsilon,s+\epsilon}(\mathbb{R}^{n}\times \mathbb{R})\) will be specified in Section 2. Particularly, if the solution \(u\) is bounded, we only need to assume that \(u\) is parabolic Holder continuous to compensate the singularity of the kernel at point \((x,t)\). What makes this problem interesting is that, when the space-time nonlocal operator \((\partial_{t}-\Delta)^{s}\) is applied to a function that only depends on either space or time, it reduces to a familiar fractional order operator (cf. [43]). More precisely, if \(u\) is only a function of \(x\), then \[(\partial_{t}-\Delta)^{s}u(x)=(-\Delta)^{s}u(x),\] where \((-\Delta)^{s}\) is the well-known fractional Laplacian. In recent decades, the well-posedness of solutions to elliptic equations involving the fractional Laplace operator has been extensively investigated, interested readers can refer to [12, 13, 14, 18, 19, 20, 32, 33] and references therein. While if \(u=u(t)\), then \[(\partial_{t}-\Delta)^{s}u(t)=\partial_{t}^{s}u(t),\] where \(\partial_{t}^{s}\) is the Marchaud fractional derivative of order \(s\). Note that the fractional powers of heat operator \((\partial_{t}-\Delta)^{s}\) can be reduced to the local heat operator \(\partial_{t}-\Delta\) as \(s\to 1\) (cf. [25]). Our main result can thus be regarded as a nonlocal generalization of Gibbons' conjecture for the local parabolic equation in the sense that \(s\to 1\). The space-time nonlocal equation represented by (1.4) arises in various physical and biological phenomena, such as anomalous diffusion [29], chaotic dynamics [45], biological invasions [7] and so on. In applications within financial field, it can also be used to model the waiting time between transactions is correlated with the ensuring price jump (cf. [39]). 
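As a sanity check of our own (not taken from the paper), the definition (1.5) can be evaluated in dimension \(n=1\) on the separable profile \(u(x,t)=e^{\lambda t}\cos(kx)\) with \(\lambda\geq 0\) and \(\lambda+k^{2}>0\): carrying out the Gaussian integral in \(y\) in closed form reduces (1.5) to a one-dimensional time integral, and the result should equal \((\lambda+k^{2})^{s}u(x,t)\), consistent with the two reductions to \((-\Delta)^{s}\) and \(\partial_{t}^{s}\) mentioned above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def master_op_on_exp_cos(lam, k, s):
    """(d_t - Delta)^s applied to u(x,t) = exp(lam*t) cos(k*x) in dimension n = 1,
    computed from definition (1.5).  Using the standard Gaussian integral
        int cos(k y) exp(-(x-y)^2/(4 r)) dy = sqrt(4 pi r) exp(-k^2 r) cos(k x),
    (1.5) reduces to
        C_{1,s} * sqrt(4 pi) * u(x,t) * int_0^oo (1 - exp(-(lam + k^2) r)) r^(-1-s) dr.
    The function returns the factor multiplying u(x,t)."""
    a = lam + k**2                                            # must be positive
    C = 1.0 / (np.sqrt(4 * np.pi) * abs(gamma(-s)))           # C_{1,s}
    # on (0, 1): substitute r = w^(1/(1-s)) to remove the integrable endpoint singularity
    def integrand(w):
        r = w ** (1.0 / (1.0 - s))
        if r < 1e-300:
            return a / (1.0 - s)
        return -np.expm1(-a * r) / ((1.0 - s) * r)
    head, _ = quad(integrand, 0.0, 1.0)
    tail, _ = quad(lambda r: (1.0 - np.exp(-a * r)) * r ** (-1.0 - s), 1.0, np.inf)
    return np.sqrt(4 * np.pi) * C * (head + tail)

lam, k, s = 0.7, 1.3, 0.6
print(master_op_on_exp_cos(lam, k, s), (lam + k**2) ** s)   # the two numbers should agree up to quadrature error
```

In particular, setting \(\lambda=0\) (so that \(u\) depends only on \(x\)) reproduces the factor \(k^{2s}\) of the fractional Laplacian acting on \(\cos(kx)\), in line with the reduction stated above.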
From a probabilistic point of view, the master equation is fundamental in the theory of continuous time random walk, where \(u\) represents the distribution of particles whose random jumps occur simultaneously with random time lag (cf. [34]). It is in contrast to the nonlocal parabolic equations like (1.3) or dual fractional parabolic equation \[\partial_{t}^{s}u+(-\Delta)^{s}u=f, \tag{1.6}\] where jumps are independent of the waiting times. Such strong correlation can also be reflected mathematically by observing the initial conditions for classical maximum principles of the master equation in bounded domains, as described below. _Let \(\Omega\) be a bounded domain in \(\mathbb{R}^{n}\) and \([t_{1},t_{2}]\) be an interval in \(\mathbb{R}\). Assume that \(u(x,t)\) is a solution of initial exterior value problem_ \[\left\{\begin{array}{ll}(\partial_{t}-\Delta)^{s}u(x,t)\geq 0,&(x,t)\in \Omega\times(t_{1},t_{2}],\\ u(x,t)\geq 0,&(x,t)\in(\mathbb{R}^{n}\setminus\Omega)\times(t_{1},t_{2}),\\ u(x,t)\geq 0,&(x,t)\in\mathbb{R}^{n}\times(-\infty,t_{1}].\end{array}\right. \tag{1.7}\] _Then \(u(x,t)\geq 0\) in \(\Omega\times(t_{1},t_{2}].\)_ Due to the nonlocal and strongly correlated nature of the fully fractional heat operator \((\partial_{t}-\Delta)^{s}\), in order to ensure the validity of the classical maximum principle, besides the exterior condition on \((\mathbb{R}^{n}\setminus\Omega)\times(t_{1},t_{2}),\) we must also require the initial condition \(u(x,t)\geq 0\) to hold on \(\mathbb{R}^{n}\times(-\infty,t_{1}]\), rather than just on \(\Omega\times\{t_{1}\}\) or on \(\Omega\times(-\infty,t_{1}]\) as required by the maximum principle for fractional reaction-diffusion equations (1.3) and dual fractional equation (1.6), respectively. These differences can be illuminated by the following counterexample. Let \(\Omega:=(-1,1)\). For simplicity, we consider functions in separated variables form, that is, \[u(x,t):=X(x)T(t)\] on the parabolic cylinder \(\Omega\times(0,1]\). Here \(X\in C^{1,1}([-1,1])\) is a function of \(x\) that satisfies \[X(x)\in\left\{\begin{array}{ll}[-\varepsilon,0],&\mbox{in }[-1,1],\\ (0,1),&\mbox{in }(-2,-1)\cup(1,2),\end{array}\right.\mbox{ and }X(x)\equiv 1\mbox{ in }(-\infty,-2]\cup[2,+\infty), \tag{1.8}\] as illustrated in Figure 1 below. And \(T\in C^{1}([0,1])\) is a function of \(t\) that fulfills \[T(t)\in\left\{\begin{array}{ll}(0,\varepsilon],&\mbox{in }(\frac{1}{8},\frac{7}{ 8}),\\ (-1,0),&\mbox{in }(-2,-1),\end{array}\right.\mbox{ and }T(t)\equiv\left\{ \begin{array}{ll}0,&\mbox{in }[-1,\frac{1}{8}]\cup[\frac{7}{8},1],\\ -1,&\mbox{in }(-\infty,-2],\end{array}\right. \tag{1.9}\] as shown in the following Figure 2. Let \(s=\frac{1}{2}\) and \(\varepsilon>0\) be a sufficiently small positive constant such that \[(\partial_{t}-\Delta)^{\frac{1}{2}}u(x,t)\geq 0\mbox{ in }(-1,1)\times(0,1].\] Please refer to Section 2 for detailed calculations. The values taken by the function \(u(x,t)\) imply that \[\left\{\begin{array}{ll}(\partial_{t}-\Delta)^{\frac{1}{2}}u(x,t)\geq 0,&(x,t) \in\Omega\times(0,1],\\ u(x,t)\geq 0,&(x,t)\in\Omega^{c}\times(0,1),\\ u(x,t)\geq 0,&(x,t)\in\Omega\times(-\infty,0],\end{array}\right. \tag{1.10}\] however \[u(x,t)\leq 0\mbox{ for }(x,t)\in\Omega^{c}\times(-\infty,0], \tag{1.11}\] Figure 1: The shape of function \(X(x)\). Figure 2: The shape of function \(T(t)\). as represented in Figure 3 below. 
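The conditions (1.8)-(1.9) only prescribe ranges for \(X\) and \(T\). The sketch below builds one concrete, entirely hypothetical realization with simple smoothstep and cosine profiles, without tuning the regularity at the matching points to the stated \(C^{1,1}\)/\(C^{1}\) requirements, and spot-checks the signs of \(u=XT\) on \(\Omega\times(-\infty,0]\) and \(\Omega^{c}\times(-\infty,0]\) that enter (1.10) and (1.11).

```python
import numpy as np

eps = 0.05                                    # the small constant from (1.8)-(1.9)
smooth = lambda t: 3 * t**2 - 2 * t**3        # increases from 0 to 1 on [0, 1]

def X(x):
    """One concrete profile with the ranges prescribed in (1.8) (our construction)."""
    x = np.abs(np.asarray(x, float))
    out = np.ones_like(x)                                     # |x| >= 2
    mid = x <= 1.0
    out[mid] = -eps * np.cos(np.pi * x[mid] / 2)              # in [-eps, 0] on [-1, 1]
    ramp = (x > 1.0) & (x < 2.0)
    out[ramp] = smooth(x[ramp] - 1.0)                         # in (0, 1) on (1, 2)
    return out

def T(t):
    """One concrete profile with the ranges prescribed in (1.9) (our construction)."""
    t = np.asarray(t, float)
    out = np.zeros_like(t)                                    # 0 on [-1, 1/8] and [7/8, 1]
    out[t <= -2.0] = -1.0
    ramp = (t > -2.0) & (t < -1.0)
    out[ramp] = smooth(t[ramp] + 2.0) - 1.0                   # in (-1, 0) on (-2, -1)
    bump = (t > 0.125) & (t < 0.875)
    out[bump] = eps * np.sin(np.pi * (t[bump] - 0.125) / 0.75) ** 2   # in (0, eps]
    return out

# spot-check the sign pattern of u(x,t) = X(x) T(t) used in (1.10)-(1.11)
xs, ts = np.linspace(-3, 3, 601), np.linspace(-3, 1, 401)
u = X(xs)[:, None] * T(ts)[None, :]
inside  = (np.abs(xs)[:, None] < 1) & (ts[None, :] <= 0)      # Omega x (-inf, 0]
outside = (np.abs(xs)[:, None] > 1) & (ts[None, :] <= 0)      # Omega^c x (-inf, 0]
print(np.all(u[inside] >= 0), np.all(u[outside] <= 0))        # -> True True
```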
If the master operator \((\partial_{t}-\Delta)^{\frac{1}{2}}\) in (1.10) is replaced by a local parabolic operator \(\frac{\partial}{\partial t}-\triangle\), or a nonlocal parabolic operator \(\frac{\partial}{\partial t}+(-\triangle)^{s}\), or even a dual fractional operator \(\partial_{t}^{\alpha}+(-\triangle)^{s}\), then by the maximum principles, we must have \[u(x,t)\geq 0,\ \ \ (x,t)\in\Omega\times(0,1].\] Nonetheless, for problem (1.10) involving the master operator, it is evident that the initial condition \(u(x,t)\geq 0\) just in \(\Omega\times(-\infty,0]\) does not guarantee \(u(x,t)\) to be nonnegative in \(\Omega\times(0,1]\). One can easily see that the function \(u(x,t)=X(x)T(t)\) so constructed is negative somewhere in \(\Omega\times(0,1]\). The main problem lies in (1.11), the initial condition fail to satisfied in \[\Omega^{c}\times(-\infty,0].\] The aforementioned counterexample shows that the initial condition on the whole \(\mathbb{R}^{n}\times(-\infty,t_{1}]\) is necessary to ensure the validity of the maximum principle for parabolic equations involving the fully fractional heat operator \((\partial_{t}-\Delta)^{s}\). Therefore, the strong correlation of master equations makes it more complicated to study compared to the parabolic equations (1.3) and (1.6) that only possess nonlocal feature. For instance, please see the remark after Theorem 1.2. The extensive practical applications highlight the significance of studying this kind of nonlocal equations in order to gain a deeper understanding of the underlying mechanisms behind various phenomena. Substantial progress in the investigation of the existence, uniqueness and regularity of solutions to master equations has been achieved in a series of remarkable papers [2, 16, 43]. To the best of our knowledge, very little is known on the geometric behavior of solutions to master equation (1.4). In the existing literature, the most common approach to studying master equation is the extension method, which extends such nonlocal equation to a local degenerate parabolic equation in a higher dimensional space. However, this method always requires cumbersome calculations and obscures the essence of the problem, and therefore may not necessarily yield the desired results. We overcome these difficulties by incorporating some new insights (which will be explained in detail later) into the sliding method to directly investigate the fully nonlocal operator \((\partial_{t}-\Delta)^{s}\). 
This direct method not only allows us to focus on the essential features of the nonlocal problem and avoid the complications that arise from the extension process, but also enables us to demonstrate the validity of the generalized version of Gibbons' conjecture in the context of master equation (1.4), as pointed out in the following **Theorem 1.1**.: _Let_ \[u(x,t)\in C^{2s+\epsilon,s+\epsilon}_{x,\,t,\mathrm{loc}}(\mathbb{R}^{n} \times\mathbb{R})\] _be a bounded solution of master equation_ \[(\partial_{t}-\Delta)^{s}u(x,t)=f(t,u(x,t)),\text{ in }\mathbb{R}^{n}\times \mathbb{R},\] _satisfying_ \[\left\{\begin{array}{l}|u(x,t)|\leq 1\text{ for }(x,t)\in\mathbb{R}^{n} \times\mathbb{R},\\ \lim_{x_{n}\to\pm\infty}u(x^{\prime},x_{n},t)=\pm 1,\text{ uniformly for }x^{\prime}=(x_{1},...,x_{n-1})\in \mathbb{R}^{n-1}\text{ and for }t\in\mathbb{R}.\end{array}\right.\] _Assume that \(f(t,u)\) is continuous in \(\mathbb{R}\times[-1,1]\), and for any fixed \(t\in\mathbb{R}\),_ \[f(t,u)\text{ is non-increasing for }u\in[-1,-1+\delta]\cup[1-\delta,1]\text{ with some }\delta>0.\] _Then the entire solution \(u(x,t)\) is strictly increasing with respect to \(x_{n}\), and furthermore it depends only on \(x_{n}\), that is,_ \[u(x^{\prime},x_{n},t)=u(x_{n},t)\] _for any \(t\in\mathbb{R}\)._ _Remark 1.1_.: Our result applies to a wide range of more general nonlinear functions \(f\), which always contains the De Giorgi-type nonlinearities \(f=u-u^{3}\) as a special example. We would like to mention that Theorem 1.1 is the first result that establishes the Gibbons' conjecture for master equations. This work will project new insights and perspectives into the proof of such important conjecture. Specifically, in order to effectively perform the direct sliding method, we first establish a generalized weighted average inequality for the fully fractional heat operator \((\partial_{t}-\Delta)^{s}\) as follows. **Theorem 1.2**.: _Let_ \[u(x,t)\in C^{2s+\epsilon,s+\epsilon}_{x,\,t,\mathrm{loc}}(\mathbb{R}^{n} \times\mathbb{R})\cap\mathcal{L}(\mathbb{R}^{n}\times\mathbb{R})\,.\] _If \(u(x,t)\) attains its maximum at a point \((x^{0},t_{0})\in\mathbb{R}^{n}\times(-\infty,t_{0}]\), then there holds that_ \[u(x^{0},t_{0})\leq\frac{C_{0}}{C_{n,s}}r^{2s}(\partial_{t}-\Delta)^{s}u(x^{0},t_{0})+C_{0}r^{2s}\int_{-\infty}^{t_{0}-r^{2}}\int_{B^{c}_{r}(x^{0})}\frac{u( y,\tau)e^{-\frac{|x^{0}-y|^{2}}{4(t_{0}-\tau)}}}{(t_{0}-\tau)^{\frac{n}{2}+1+s}}\, \mathrm{d}y\,\mathrm{d}\tau \tag{1.12}\] _for any radius \(r>0\), where the positive constant_ \[C_{0}:=\frac{1}{\int_{-\infty}^{-1}\int_{B_{1}^{c}(0)}\frac{e^{\frac{|y|^{2}}{4 \tau}}}{(-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau},\] _and_ \[C_{0}r^{2s}\int_{-\infty}^{t_{0}-r^{2}}\int_{B_{r}^{c}(x^{0})}\frac{e^{-\frac{ |x^{0}-y|^{2}}{4(t_{0}-\tau)}}}{(t_{0}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\, \mathrm{d}\tau=1. \tag{1.13}\] _Remark 1.2_.: Due to the strong correlation of the operator \((\partial_{t}-\Delta)^{s}\), there are three major differences between this weighted average inequality and that in [17]: (i) The kernels. (ii) Our inequality here can only be established at the points where \(x\) and \(t\) simultaneously reach the maximum, as compared to the one for fractional parabolic equations (1.3), where it is sufficient to obtain the inequality at the maximum point with respect to \(x\) for each fixed \(t\) (cf. [17]). 
(iii) On the right hand side of the inequality, besides the integral with respect to \(x\) on \(B_{r}^{c}(x^{0})\), there is another layer of integral with respect to \(t\) from \(-\infty\) to \(t_{0}-r^{2}\). _Remark 1.3_.: If \(u\) satisfies \[(\partial_{t}-\Delta)^{s}u(x,t)\leq 0\] then at the maximum point \((x^{0},t_{0})\), the key estimate established in Theorem 1.2 can be simplified as follows \[u(x^{0},t_{0})\leq\int_{-\infty}^{t_{0}-r^{2}}\int_{B_{r}^{c}(x^{0})}u(y,\tau )\,\mathrm{d}\mu_{r}(y,\tau), \tag{1.14}\] where we denote \[\int_{-\infty}^{t_{0}-r^{2}}\int_{B_{r}^{c}(x^{0})}1\,\mathrm{d}\mu_{r}(y, \tau):=C_{0}r^{2s}\int_{-\infty}^{t_{0}-r^{2}}\int_{B_{r}^{c}(x^{0})}\frac{e^ {-\frac{|x^{0}-y|^{2}}{4(t_{0}-\tau)}}}{(t_{0}-\tau)^{\frac{n}{2}+1+s}}\, \mathrm{d}y\,\mathrm{d}\tau=1.\] Obviously, inequality (1.14) implies that the maximum value \(u(x^{0},t_{0})\) can be controlled by the weighted average value of \(u(x,t)\) over \(B_{r}^{c}(x^{0})\times(-\infty,t_{0}-r^{2})\). As a consequence of this weighted average inequality (1.14), we can immediately derive the maximum principle for master problems in bounded domains. **Corollary 1.3**.: _Let \(\Omega\) be a bounded domain in \(\mathbb{R}^{n}\) and \(t_{1}<t_{2}\) be two real numbers. Suppose that_ \[u(x,t)\in C_{x,t,\,\mathrm{loc}}^{2s+\epsilon,s+\epsilon}(\Omega\times(t_{1}, t_{2}])\cap\mathcal{L}(\mathbb{R}^{n}\times\mathbb{R})\] _is an upper semi-continuous function on \(\overline{\Omega}\times[t_{1},t_{2}]\), satisfying_ \[\left\{\begin{array}{ll}(\partial_{t}-\Delta)^{s}u(x,t)\leq 0,&(x,t)\in \Omega\times(t_{1},t_{2}],\\ u(x,t)\leq 0,&(x,t)\in\mathbb{R}^{n}\times(-\infty,t_{1}],\\ u(x,t)\leq 0,&(x,t)\in(\mathbb{R}^{n}\setminus\Omega)\times(t_{1},t_{2}),\end{array}\right. \tag{1.15}\] _then \(u(x,t)\leq 0\) in \(\Omega\times(t_{1},t_{2}]\)._ To prove this maximum principle, we argue by contradiction. Suppose \(u\) is positive somewhere in \(\Omega\times(t_{1},t_{2}]\), then it attains its positive maximum at some point \((x^{0},t_{0})\) in this parabolic cylinder. Let \(r\) be sufficiently large so that \(\Omega\subset B_{r}(x^{0})\), then applying weighted average inequality (1.14) and combining with the interior and exterior conditions, we arrive at \(u(x^{0},t_{0})\leq 0\), an obvious contradiction. We emphasize that inequality (1.12) also plays an important role in establishing the following maximum principle in unbounded domains for master equations, which is a crucial ingredient to carry out the direct sliding method. **Theorem 1.4**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be an open set, possibly unbounded and disconnected, and there exists a uniformly positive constant \(\bar{C}\) independent of the point \(x\) such that the limit \(\lim_{R\to+\infty}\frac{|B_{R}(x)\cap\Omega^{c}|}{|B_{R}(x)|}\) exists and satisfies_ \[\lim_{R\to+\infty}\frac{|B_{R}(x)\cap\Omega^{c}|}{|B_{R}(x)|}\geq\bar{C}>0\ \text{for any}\ x\in\Omega. \tag{1.16}\] _Suppose that the upper semi-continuous function (up to the boundary \(\partial\Omega\))_ \[u(x,t)\in C_{x,\,t,\,\text{\rm loc}}^{2s+\epsilon,s+\epsilon}(\Omega\times \mathbb{R})\cap\mathcal{L}(\mathbb{R}^{n}\times\mathbb{R})\] _is bounded from above in \(\Omega\times\mathbb{R}\), and satisfies_ \[\left\{\begin{array}{ll}(\partial_{t}-\Delta)^{s}u(x,t)\leq 0,&\text{at the points in }\Omega\times\mathbb{R}\ \text{where}\ u(x,t)>0,\\ u(x,t)\leq 0,&\text{in }\Omega^{c}\times\mathbb{R}.\end{array}\right. 
\tag{1.17}\] _Then there holds that_ \[u(x,t)\leq 0\ \text{in}\ \Omega\times\mathbb{R}. \tag{1.18}\] _Remark 1.4_.: Roughly speaking, condition (1.16) indicates that the "size" of the complement of \(\Omega\) is "not too small" as compared to the "size" of \(\Omega\) as measured by limit of the ratio. For instance, this condition is satisfied when \(\Omega\) is a half space, a stripe, an Archimedean spiral and so on. However, condition (1.16) is not fulfilled if, for example \[\Omega:=\{x=(x^{\prime},x_{n})\mid x^{\prime}\in\mathbb{R}^{n-1},\,x_{n}>1\, \text{or}\,x_{n}<-1\}.\] Although its complement \(\Omega^{c}\) is an infinite slab with infinite measure, it is still "much too small" as compared with the "size" of \(\Omega\) in the sense of limit defined in condition (1.16). More precisely, in this case, we have \[\lim_{R\to+\infty}\frac{|B_{R}(x)\cap\Omega^{c}|}{|B_{R}(x)|}=0.\] _Remark 1.5_.: Note that the condition "\(u\) is bounded from above" is necessary to guarantee the validity of the above maximum principle, as shown in the following counterexample. For simplicity, we consider functions of \(x\) only. Let \[u(x,t)=u(x):=(x_{n})_{+}^{s}\] for \(x\in\mathbb{R}^{n}\), then it is well known that \(u(x)\) is a solution of the problem \[\left\{\begin{array}{ll}&\left(\partial_{t}-\Delta\right)^{s}u(x)=\left(- \Delta\right)^{s}u(x)=0,&x\in\mathbb{R}_{+}^{n}\,,\\ &u(x)=0,&x\in\mathbb{R}^{n}\setminus\mathbb{R}_{+}^{n}\,.\end{array}\right.\] However, it is evident that \(u(x)>0\) in \(\mathbb{R}_{+}^{n}\), which violates the conclusion of Theorem 1.4 with \(\Omega=\mathbb{R}_{+}^{n}\). In addition to performing the sliding method to prove the Gibbons' conjecture stated in Theorem 1.1, the aforementioned maximum principle can also be directly applied to establish the monotonicity of solutions to master equations on an upper half space. **Corollary 1.5**.: _Let \(\mathbb{R}_{+}^{n}:=\{x\in\mathbb{R}^{n}\mid x_{n}>0\}\) be an upper half space, and_ \[u(x,t)\in C_{x,\,t,\,\mathrm{loc}}^{2s+\epsilon,s+\epsilon}(\mathbb{R}_{+}^{n} \times\mathbb{R})\] _be a bounded solution of_ \[\left\{\begin{array}{ll}(\partial_{t}-\Delta)^{s}u(x,t)=f(t,u(x,t)),&\text{ in }\mathbb{R}_{+}^{n}\times\mathbb{R},\\ u(x,t)>0,&\text{ in }\mathbb{R}_{+}^{n}\times\mathbb{R},\\ u(x,t)=0,&\text{ in }\left(\mathbb{R}^{n}\setminus\mathbb{R}_{+}^{n}\right) \times\mathbb{R},\end{array}\right. \tag{1.19}\] _where \(u\) is continuous up to the boundary \(\partial\mathbb{R}_{+}^{n}\), and the nonhomogeneous term \(f(t,u)\) is monotonically decreasing with respect to \(u\). Then \(u(x,t)\) is strictly increasing with respect to \(x_{n}\) in \(\mathbb{R}_{+}^{n}\) for any \(t\in\mathbb{R}\)._ It is significant to mention that the holistic approach developed in this paper is very general and can be applied to investigate various qualitative properties of solutions for a wide range of fractional elliptic and parabolic equations and systems. The remaining part of this paper will proceed as follows. Section 2 consists of the definition of parabolic Holder space, the detailed calculation of the above counterexample and some frequently used estimates in what follows. In Section 3, we derive a weighted average inequality applicable to the fully fractional heat operator \((\partial_{t}-\Delta)^{s}\), which is a key estimate for establishing the subsequent results. On this basis, Section 4 is devoted to obtaining the maximum principle in unbounded domains and its straightforward applications. 
Such a maximum principle plays an essential role in implementing the sliding method. Incorporating the aforementioned average inequality and the maximum principle into the sliding method, we complete the proof of Gibbons' conjecture for master equation in the last section. ## 2. Preliminaries In this section, we collect definitions and derive auxiliary results that are needed in establishing our main theorems. Throughout this paper, \(C\) will denote a positive constant whose value may be different from line to line. We start by providing the definition of parabolic Holder space \[C^{2\alpha,\alpha}_{x,\,t}(\mathbb{R}^{n}\times\mathbb{R}),\] which plays an essential role in ensuring that the fully fractional heat operator \((\partial_{t}-\Delta)^{s}\) is well-defined (cf. [28]). More precisely, 1. When \(0<\alpha\leq\frac{1}{2}\), if \(u(x,t)\in C^{2\alpha,\alpha}_{x,\,t}(\mathbb{R}^{n}\times\mathbb{R})\), then there exists a constant \(C>0\) such that \[|u(x,t)-u(y,\tau)|\leq C\left(|x-y|+|t-\tau|^{\frac{1}{2}}\right)^{2\alpha}\] for any \(x,\,y\in\mathbb{R}^{n}\) and \(t,\,\tau\in\mathbb{R}\). 2. When \(\frac{1}{2}<\alpha\leq 1\), we say that \[u(x,t)\in C^{2\alpha,\alpha}_{x,\,t}(\mathbb{R}^{n}\times\mathbb{R}):=C^{1+(2 \alpha-1),\alpha}_{x,\,t}(\mathbb{R}^{n}\times\mathbb{R}),\] if \(u\) is \(\alpha\)-Holder continuous in \(t\) uniformly with respect to \(x\) and its gradient \(\nabla_{x}u\) is \((2\alpha-1)\)-Holder continuous in \(x\) uniformly with respect to \(t\) and \((\alpha-\frac{1}{2})\)-Holder continuous in \(t\) uniformly with respect to \(x\). 3. While for \(\alpha>1\), if \(u(x,t)\in C^{2\alpha,\alpha}_{x,\,t}(\mathbb{R}^{n}\times\mathbb{R}),\) then it means that \[\partial_{t}u,\,D^{2}_{x}u\in C^{2\alpha-2,\alpha-1}_{x,\,t}(\mathbb{R}^{n} \times\mathbb{R}).\] In addition, we can analogously define the local parabolic Holder space \(C^{2\alpha,\alpha}_{x,\,t,\,\mathrm{loc}}(\mathbb{R}^{n}\times\mathbb{R})\). Next, we present a detailed calculation of the counterexample mentioned in the introduction regarding the maximum principle of master equation not being valid when the initial condition does not satisfy nonnegativity on the whole \(\mathbb{R}^{n}\times(-\infty,t_{1}]\). _Counterexample 1_.: Let \(u(x,t)=X(x)T(t)\), where \(X(x)\in C^{1,1}([-1,1])\) and \(T(t)\in C^{1}([0,1])\) are bounded functions defined in (1.8) and (1.9), respectively. Then there exists a sufficiently small constant \(\varepsilon\in(0,1)\) such that \[(\partial_{t}-\Delta)^{\frac{1}{2}}u(x,t)\geq 0\text{ in }(-1,1)\times(0,1].\] Proof.: For \((x,t)\in(-1,1)\times(0,1]\), applying the definitions of \((\partial_{t}-\Delta)^{\frac{1}{2}}\), we divide the integral domain into three parts \[(\partial_{t}-\Delta)^{\frac{1}{2}}u(x,t)\] \[= C_{1,\frac{1}{2}}\int_{-\infty}^{t}\int_{-\infty}^{\infty}\frac {X(x)T(t)-X(y)T(\tau)}{(t-\tau)^{2}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{ d}y\,\mathrm{d}\tau\] \[= C_{1,\frac{1}{2}}\left(\int_{-\infty}^{0}\int_{|y|>1}+\int_{- \infty}^{0}\int_{-1}^{1}+\int_{0}^{t}\int_{-\infty}^{\infty}\frac{X(x)T(t)-X(y)T (\tau)}{(t-\tau)^{2}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{d}y\,\mathrm{d} \tau\right)\] \[=: I+II+III. \tag{2.1}\] Now we are going to estimate each of these three integrals separately. According to the definition of function \(T\), there is no need to worry about the singularity when \(\tau\in(-\infty,0)\) is close to \(t\in(0,1)\), since \(X(x)T(t)-X(y)T(\tau)=0\) as \(\tau\to 0^{-}\) and \(t\to 0^{+}\). 
Then in terms of (1.8), (1.9) and the small constant \(\varepsilon\in(0,1)\), we first estimate \(I\) and \(II\) as follows \[I \geq C\int_{-\infty}^{2}\int_{|y|>2}\frac{-\varepsilon^{2}+1}{(t- \tau)^{2}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{d}y\,\mathrm{d}\tau-C \varepsilon^{2} \tag{2.2}\] \[\geq C(1-\varepsilon^{2})\geq C_{0}>0,\] and \[|II|\leq C(\varepsilon+\varepsilon^{2})\leq C\varepsilon. \tag{2.3}\] Due to the presence of singular point \((x,t)\), the estimate of \(III\) is somewhat complicated, and we need to divide it into the following two parts \[III = C_{1,\frac{1}{2}}\left(\int_{0}^{t}\int_{|y-x|\geq\sqrt{ \varepsilon}}+\int_{0}^{t}\int_{|y-x|<\sqrt{\varepsilon}}\frac{X(x)T(t)-X(y)T (\tau)}{(t-\tau)^{2}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{d}y\,\mathrm{d} \tau\right)\] \[=: III_{1}+III_{2}.\] With respect to the estimate of \(III_{1}\), we directly compute \[|III_{1}| \leq C(\varepsilon^{2}+\varepsilon)\int_{0}^{t}\int_{|y-x|\geq\sqrt{ \varepsilon}}\frac{1}{(t-\tau)^{2}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{ d}y\,\mathrm{d}\tau\] \[= -C(\varepsilon^{2}+\varepsilon)\int_{|y-x|\geq\sqrt{\varepsilon} }\int_{0}^{t}\frac{\,\mathrm{d}}{\,\mathrm{d}\tau}\left(e^{-\frac{|x-y|^{2}} {4(t-\tau)}}\right)\frac{4}{|x-y|^{2}}\,\mathrm{d}y\] \[\leq C(\varepsilon^{2}+\varepsilon)\int_{\sqrt{\varepsilon}}^{+ \infty}\frac{\,\mathrm{d}r}{r^{2}}\leq C\sqrt{\varepsilon}\] The estimate of \(III_{2}\) proceeds via a change of variables, Taylor expansion, and the definition of the Cauchy principal value, which yields \[|III_{2}| = C_{1,\frac{1}{2}}\left|\int_{0}^{t}\int_{|y-x|<\sqrt{\varepsilon }}\frac{X(x)(T(t)-T(\tau))+(X(x)-X(y))T(\tau)}{(t-\tau)^{2}}e^{-\frac{|x-y|^{2} }{4(t-\tau)}}\,\mathrm{d}y\,\mathrm{d}\tau\right|\] \[\leq C\varepsilon\int_{0}^{t}\frac{1}{(t-\tau)^{\frac{1}{2}}}\, \mathrm{d}\tau+C_{1,\frac{1}{2}}\left|\int_{0}^{t}\int_{|y-x|<\sqrt{ \varepsilon}}\frac{O(|x-y|^{2})T(\tau)}{(t-\tau)^{2}}e^{-\frac{|x-y|^{2}}{4(t- \tau)}}\,\mathrm{d}y\,\mathrm{d}\tau\right|\] \[\leq C\varepsilon+C\varepsilon\int_{|y-x|<\sqrt{\varepsilon}}\frac{O (|x-y|^{2})}{|x-y|^{2}}\,\mathrm{d}y\leq C\varepsilon.\] Hence, a combination of the estimates of \(III_{1}\) and \(III_{2}\) leads to \[|III|\leq C\sqrt{\varepsilon}. \tag{2.4}\] Finally, inserting (2.2)-(2.4) into (2.1), we deduce that \[(\partial_{t}-\Delta)^{\frac{1}{2}}u(x,t)\geq C_{0}-C\sqrt{\varepsilon}\geq 0\] by choosing the positive constant \(\varepsilon\) small enough. We conclude this section by demonstrating that the nonlocal operator \((\partial_{t}-\Delta)^{s}\) acting on smooth cut-off functions is bounded, which is repeatedly used in establishing our main results. 
**Lemma 2.1**.: _Let_ \[\eta(x,t)\in C_{0}^{\infty}\left(B_{1}(0)\times(-1,1)\right)\] _be a smooth cut-off function whose value belongs to \([0,1]\), then there exists a positive constant \(C_{0}\) that depends only on \(s\) and \(n\) such that_ \[|(\partial_{t}-\Delta)^{s}\eta(x,t)|\leq C_{0}\mbox{ for }(x,t)\in B_{1}(0) \times(-1,1).\] Proof.: For \((x,t)\in B_{1}(0)\times(-1,1)\), using the definitions of \((\partial_{t}-\Delta)^{s}\), we divide the integral domain into three parts \[(\partial_{t}-\Delta)^{s}\eta(x,t) \tag{2.5}\] \[= C_{n,s}\int_{-\infty}^{t}\int_{\mathbb{R}^{n}}\frac{\eta(x,t)- \eta(y,\tau)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}} \,\mathrm{d}y\,\mathrm{d}\tau\] \[= C_{n,s}\left(\int_{t-1}^{t}\int_{B_{1}^{c}(x)}+\int_{-\infty}^{t -1}\int_{B_{1}^{c}(x)}+\int_{-\infty}^{t}\int_{B_{1}(x)}\frac{\eta(x,t)-\eta(y,\tau)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{d }y\,\mathrm{d}\tau\right)\] \[=: I_{1}+I_{2}+I_{3}.\] In order to estimate the first term \(I_{1}\), we combine the smoothness of \(\eta(x,t)\) with \(0<s<1\) and the fact that \[\frac{e^{-\frac{|x|^{2}}{4\tau}}}{\tau^{\frac{n}{2}+1+s}}\leq\frac{C}{|z|^{n+2 +2s}+\tau^{\frac{n}{2}+1+s}} \tag{2.6}\] for \(\tau>0\) and \(z\in\mathbb{R}^{n}\), where the positive constant \(C\) depends on \(n\) and \(s\), then \[I_{1} = \frac{1}{(4\pi)^{\frac{n}{2}}|\Gamma(-s)|}\int_{t-1}^{t}\int_{B_ {1}^{c}(x)}\!\frac{\eta(x,t)-\eta(x,\tau)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{- \frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[+C_{n,s}\int_{t-1}^{t}\int_{B_{1}^{c}(x)}\frac{\eta(x,\tau)-\eta( y,\tau)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\, \mathrm{d}y\,\mathrm{d}\tau\] \[\leq \frac{1}{|\Gamma(-s)|}\int_{t-1}^{t}\frac{|\eta(x,t)-\eta(x,\tau )|}{(t-\tau)^{1+s}}\int_{\mathbb{R}^{n}}\frac{e^{-\frac{|x-y|^{2}}{4(t-\tau)} }}{[4\pi(t-\tau)]^{\frac{n}{2}}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[+C\int_{t-1}^{t}\int_{\mathbb{R}^{n}}\frac{|x-y|^{2}}{|x-y|^{n+2+2 s}+(t-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[= \frac{1}{|\Gamma(-s)|}\int_{t-1}^{t}\frac{|\eta(x,t)-\eta(x,\tau )|}{(t-\tau)^{1+s}}\,\mathrm{d}\tau+C\int_{0}^{1}\int_{\mathbb{R}^{n}}\frac{| y|^{2}}{|y|^{n+2+2s}+\tau^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[\leq C\int_{t-1}^{t}\frac{(t-\tau)}{(t-\tau)^{1+s}}\,\mathrm{d}\tau+C \int_{0}^{1}\int_{0}^{+\infty}\frac{r^{n+1}}{r^{n+2+2s}+\tau^{\frac{n}{2}+1+s}} \,\mathrm{d}r\,\mathrm{d}\tau\] \[= \frac{C}{1-s}+C\int_{0}^{1}\int_{0}^{+\infty}\frac{r^{n+1}}{r^{n+ 2+2s}+\tau^{\frac{n}{2}+1+s}}\,\mathrm{d}r\,\mathrm{d}\tau.\] We further use the formula \[\int_{0}^{+\infty}\frac{r^{q}}{a+r^{p}}\,\mathrm{d}r=\frac{\pi}{p\sin\frac{(q+ 1)\pi}{p}}a^{\frac{q+1-p}{p}}, \tag{2.7}\] where the positive constants \(a\), \(p\) and \(q\) satisfy \(p>q+1\), then \[|I_{1}|\leq\frac{C}{1-s}+C\int_{0}^{1}\tau^{-s}\,\mathrm{d}\tau\leq C(n,s). 
\tag{2.8}\] Next, we apply (2.6) to estimate \(I_{2}\) as follows \[|I_{2}| = \left|C_{n,s}\int_{-\infty}^{t-1}\int_{B_{1}^{c}(x)}\frac{\eta(x,t)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{d}y \,\mathrm{d}\tau\right| \tag{2.9}\] \[\leq C\int_{-\infty}^{t-1}\int_{B_{1}^{c}(x)}\frac{1}{|x-y|^{n+2+2s}+ (t-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau\leq C(n,s).\] With respect to the estimate of \(I_{3}\), by using the change of variables, Taylor expansion, the definition of Cauchy principal value and (2.7), we have \[|I_{3}| \leq \frac{1}{(4\pi)^{\frac{n}{2}}|\Gamma(-s)|}\left|\int_{-\infty}^{ t}\int_{B_{1}(x)}\frac{\eta(x,t)-\eta(y,t)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{- \frac{|x-y|^{2}}{4(t-\tau)}}\,\mathrm{d}y\,\mathrm{d}\tau\right| \tag{2.10}\] \[+C_{n,s}\left|\int_{-\infty}^{t}\int_{B_{1}(x)}\frac{\eta(y,t)- \eta(y,\tau)}{(t-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x-y|^{2}}{4(t-\tau)}}\, \mathrm{d}y\,\mathrm{d}\tau\right|\] \[\leq \frac{4^{s}\Gamma(\frac{n}{2}+s)}{\pi^{\frac{n}{2}}|\Gamma(-s)|} \left|P.V.\int_{B_{1}(x)}\frac{\eta(x,t)-\eta(y,t)}{|x-y|^{n+2s}}\,\mathrm{d}y\right|\] \[+C\int_{-\infty}^{t}\int_{B_{1}(x)}\frac{t-\tau}{|x-y|^{n+2+2s}+(t -\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[\leq C\int_{B_{1}(x)}\frac{1}{|x-y|^{n+2s-2}}\,\mathrm{d}y+C\int_{B_ {1}(0)}\frac{1}{|z|^{n+2s-2}}\,\mathrm{d}z\] \[\leq C(n,s).\] Finally, inserting (2.8)-(2.10) into (2.5), we deduce that \[|(\partial_{t}-\Delta)^{s}\eta(x,t)|\leq C_{0}\mbox{ for }(x,t)\in B_{1}(0) \times(-1,1),\] where the positive constant \(C_{0}=C_{0}(n,s)\). Therefore, we complete the proof of Lemma 2.1. \(\Box\) As a byproduct of Lemma 2.1, we can immediately derive the following result through scaling and translation transformations. **Corollary 2.2**.: _Let_ \[\eta_{r}(x,t):=\eta\left(\frac{x-x^{0}}{r},\frac{t-t_{0}}{r^{2}}\right)\in C_{0}^ {\infty}\left(B_{r}(x^{0})\times(-r^{2}+t_{0},r^{2}+t_{0})\right)\] _for \((x^{0},t_{0})\in\mathbb{R}^{n}\times\mathbb{R}\) and \(r>0\), then_ \[|(\partial_{t}-\Delta)^{s}\eta_{r}(x,t)|\leq\frac{C_{0}}{r^{2s}}\text{ in }B_{r}(x^{0})\times(-r^{2}+t_{0},r^{2}+t_{0}),\] _where the smooth cut-off function \(\eta\) and the positive constant \(C_{0}\) are defined in Lemma 2.1._ ## 3. Weighted average inequality In this section, we establish a key estimate for the fully fractional heat operator \((\partial_{t}-\Delta)^{s}\), which is commonly referred to as the generalized weighted average inequality (i.e., Theorem 1.2). This estimate is particularly useful in establishing the maximum principle and the direct sliding method for master equations, as discussed in later sections. 
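Before turning to the proof, identity (1.13) can also be checked numerically: after normalization by \(C_{0}\), the weighted kernel in (1.12) carries total mass \(1\) for every radius \(r>0\), reflecting the parabolic rescaling \(y\mapsto ry\), \(\tau\mapsto r^{2}\tau\) used in the proof. The following short script is only an illustrative sanity check and plays no role in any argument; it assumes NumPy/SciPy, takes \(n=1\) so that the spatial integral reduces to a complementary error function, and fixes \(s=0.4\) arbitrarily.

```python
# Numerical sanity check (not part of the proof) of identity (1.13): for n = 1,
# x^0 = 0, t_0 = 0 and sigma = t_0 - tau, the inner y-integral over {|y| > r}
# of exp(-y^2/(4*sigma)) equals 2*sqrt(pi*sigma)*erfc(r/(2*sqrt(sigma))).
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def mass(r, s):
    """r^{2s} * int_{r^2}^{inf} int_{|y|>r} e^{-y^2/(4 sigma)} sigma^{-(3/2+s)} dy dsigma (n = 1)."""
    integrand = lambda sigma: 2.0 * np.sqrt(np.pi) * erfc(r / (2.0 * np.sqrt(sigma))) * sigma ** (-(1.0 + s))
    value, _ = quad(integrand, r ** 2, np.inf)
    return r ** (2.0 * s) * value

s = 0.4
C0 = 1.0 / mass(1.0, s)          # the constant C_0 of Theorem 1.2 for n = 1
for r in (0.3, 1.0, 2.7, 10.0):
    print(r, C0 * mass(r, s))    # each product should be approximately 1, as in (1.13)
```

Each printed product is approximately \(1\), independently of \(r\), in agreement with (1.13).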
Proof of Theorem 1.2.: According to the definition of the nonlocal operator \((\partial_{t}-\Delta)^{s}\) and the maximality of \(u(x,t)\) at the point \((x^{0},t_{0})\) in \(\mathbb{R}^{n}\times(-\infty,t_{0}]\), we derive \[(\partial_{t}-\Delta)^{s}u(x^{0},t_{0}) = C_{n,s}\int_{-\infty}^{t_{0}}\int_{\mathbb{R}^{n}}\frac{u(x^{0}, t_{0})-u(y,\tau)}{(t_{0}-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x^{0}-y|^{2}}{4(t_{0}- \tau)}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[\geq C_{n,s}\int_{-\infty}^{t_{0}-r^{2}}\int_{B_{r}^{c}(x^{0})}\frac{u (x^{0},t_{0})-u(y,\tau)}{(t_{0}-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x^{0}-y|^{2} }{4(t_{0}-\tau)}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[= \frac{C_{n,s}u(x^{0},t_{0})}{r^{2s}}\int_{-\infty}^{-1}\int_{B_{1 }^{c}(0)}\frac{e^{\frac{|y|^{2}}{4\tau}}}{(-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{ d}y\,\mathrm{d}\tau\] \[-C_{n,s}\int_{-\infty}^{t_{0}-r^{2}}\int_{B_{r}^{c}(x^{0})}\frac{u (y,\tau)}{(t_{0}-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x^{0}-y|^{2}}{4(t_{0}-\tau )}}\,\mathrm{d}y\,\mathrm{d}\tau.\] We denote \[C_{0}:=\frac{1}{\int_{-\infty}^{-1}\int_{B_{1}^{c}(0)}\frac{e^{\frac{|y|^{2}}{ 4\tau}}}{(-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau},\] then it follows that \[u(x^{0},t_{0})\leq\frac{C_{0}}{C_{n,s}}r^{2s}(\partial_{t}-\Delta)^{s}u(x^{0}, t_{0})+C_{0}r^{2s}\int_{-\infty}^{t_{0}-r^{2}}\int_{B_{r}^{c}(x^{0})}\frac{u(y, \tau)e^{-\frac{|x^{0}-y|^{2}}{4(t_{0}-\tau)}}}{(t_{0}-\tau)^{\frac{n}{2}+1+s}} \,\mathrm{d}y\,\mathrm{d}\tau.\] It remains to estimate validity of (1.13). In terms of the definition of \(C_{0}\), a direct calculation shows that \[C_{0}r^{2s}\int_{-\infty}^{t_{0}-r^{2}}\int_{B_{r}^{c}(x^{0})}\frac{e^{-\frac{ |x^{0}-y|^{2}}{4(t_{0}-\tau)}}}{(t_{0}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\, \mathrm{d}\tau\] \[= C_{0}\int_{-\infty}^{-1}\int_{B_{1}^{c}(0)}\frac{e^{\frac{|y|^{2}}{4 \tau}}}{(-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau=1.\] Thus, we complete the proof of Theorem 1.2. \(\square\) ## 4. Maximum principle in unbounded domains and its direct application In this section, we demonstrate the maximum principle in unbounded domains (i.e., Theorem 1.4) for master equations by utilizing the perturbation argument as well as the weighted average inequality stated in Theorem 1.2. Furthermore, we propose that a direct application of this maximum principle is to establish the monotonicity of solutions to master equations on an upper half space. More importantly, it serves as a fundamental ingredient in carrying out the sliding method adopted in the proof of Gibbons' conjecture. Proof of Theorem 1.4.: We argue by contradiction, if (1.18) is violated, since \(u(x,t)\) has an upper bound in \(\Omega\times\mathbb{R}\), then there exists a positive constant \(A\) such that \[\sup_{\Omega\times\mathbb{R}}u(x,t):=A>0. \tag{4.1}\] Note that the set \(\Omega\times\mathbb{R}\) is unbounded, then the supremum of \(u(x,t)\) may not be attained. Even so, (4.1) implies that there exists a sequence \(\{(x^{k},t_{k})\}\subset\Omega\times\mathbb{R}\) such that \[0<u(x^{k},t_{k}):=A_{k}\to A,\text{ as }k\to\infty.\] Let \(\varepsilon_{k}:=A-A_{k}\), then the sequence \(\{\varepsilon_{k}\}\) is nonnegative and tends to zero as \(k\to\infty\). 
To proceed, we introduce the following auxiliary function \[v_{k}(x,t):=u(x,t)+\varepsilon_{k}\eta_{k}(x,t),\] where the smooth cut-off function \[\eta_{k}(x,t):=\eta\left(\frac{x-x^{k}}{r},\frac{t-t_{k}}{r^{2}}\right)\in C_{ 0}^{\infty}(B_{r}(x^{k})\times(t_{k}-r^{2},t_{k}+r^{2})),\] satisfying \[\eta(x,t):=\left\{\begin{array}{ll}1&(x,t)\in B_{\frac{1}{2}}(0)\times(- \frac{1}{2},\frac{1}{2}),\\ 0,&(x,t)\not\in B_{1}(0)\times(-1,1).\end{array}\right.\] We first determine the radius \(r\) in the scaled and translated smooth function \(\eta_{k}\). In terms of the condition (1.16), we can directly evaluate \[\lim_{R\to+\infty}\frac{\left|(B_{3R}(x^{k})\setminus B_{\frac{3R}{\sqrt{2}}} (x^{k}))\cap\Omega^{c}\right|}{|B_{3R}(x^{k})|}\geq\frac{\bar{C}}{2}>0.\] It follows that there exists a sufficiently large radius \(R_{k}\) such that \[\frac{\left|(B_{3R}(x^{k})\setminus B_{\frac{3R}{\sqrt{2}}}(x^{k}))\cap\Omega ^{c}\right|}{|B_{3R}(x^{k})|}\geq\frac{\bar{C}}{4}>0\text{ for }R\geq R_{k}. \tag{4.2}\] From then on, we select the radius \[r=\frac{R_{k}}{\sqrt[n]{2}}.\] Let \[Q_{r}(x^{k},t_{k}):=B_{r}(x^{k})\times(t_{k}-r^{2},t_{k}+r^{2})\] be a parabolic cylinder, then a straightforward calculation implies that \[v_{k}(x^{k},t_{k})=u(x^{k},t_{k})+\varepsilon_{k}=A_{k}+A-A_{k}=A,\] and \[v_{k}(x,t)=u(x,t)\leq A\mbox{ for }(x,t)\not\in Q_{r}(x^{k},t_{k}).\] It is evident that the auxiliary function \(v_{k}(x,t)\) must attain its maximum value in \(Q_{r}(x^{k},t_{k})\). More precisely, there exists a point \((\bar{x}^{k},\bar{t}_{k})\in Q_{r}(x^{k},t_{k})\) such that \[A+\varepsilon_{k}\geq v_{k}(\bar{x}^{k},\bar{t}_{k})=\sup_{\mathbb{R}^{n} \times\mathbb{R}}v_{k}(x,t)\geq A>0. \tag{4.3}\] Furthermore, by virtue of the definition of \(v_{k}\), we derive \[A\geq u(\bar{x}^{k},\bar{t}_{k})\geq A-\varepsilon_{k}=A_{k}>0.\] Combining the definition of \(v_{k}\) with the differential equation in (1.17) and Corollary 2.2, we obtain \[(\partial_{t}-\Delta)^{s}v_{k}(\bar{x}^{k},\bar{t}_{k})=(\partial_{t}-\Delta )^{s}u(\bar{x}^{k},\bar{t}_{k})+\varepsilon_{k}(\partial_{t}-\Delta)^{s}\eta _{k}(\bar{x}^{k},\bar{t}_{k})\leq\frac{C\varepsilon_{k}}{r^{2s}}. \tag{4.4}\] Now applying the weighted average inequality established in Theorem 1.2 to \(v_{k}\) at its maximum point \((\bar{x}^{k},\bar{t}_{k})\) and combining with (4.3) and (4.4), we have \[A \leq v_{k}(\bar{x}^{k},\bar{t}_{k}) \tag{4.5}\] \[\leq \frac{C_{0}}{C_{n,s}}(2r)^{2s}(\partial_{t}-\Delta)^{s}v_{k}( \bar{x}^{k},\bar{t}_{k})+C_{0}(2r)^{2s}\int_{-\infty}^{\bar{t}_{k}-(2r)^{2}} \int_{B_{2r}^{c}(\bar{x}^{k})}\frac{v_{k}(y,\tau)e^{-\frac{|\bar{x}^{k}-y|^{2 }}{4(\bar{t}_{k}-\tau)}}}{(\bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\, \mathrm{d}\tau\] \[\leq C\varepsilon_{k}+C_{0}(2r)^{2s}\int_{-\infty}^{\bar{t}_{k}-(2r)^ {2}}\int_{B_{2r}^{c}(\bar{x}^{k})}\frac{v_{k}(y,\tau)e^{-\frac{|\bar{x}^{k}-y| ^{2}}{4(\bar{t}_{k}-\tau)}}}{(\bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y \,\mathrm{d}\tau.\] It remains to be estimated the second term on the left side of (4.5). A combination of the containment relationship of balls \[B_{r}(x^{k})\subset B_{2r}(\bar{x}^{k})\subset B_{3r}(x^{k}) \tag{4.6}\] with the exterior condition of \(u\) in (1.17) and the definition of the smooth function \(\eta_{k}\) yields that \[v_{k}(x,t)=u(x,t)+\varepsilon_{k}\eta_{k}(x,t)=u(x,t)\leq 0,\mbox{ in }(B_{2r}^{c}(\bar{x}^{k})\cap\Omega^{c})\times\mathbb{R}. 
\tag{4.7}\] Next, applying (4.2), (4.3), (4.6) and (4.7), and combining with the definition of \(C_{0}\) presented in Theorem 1.2 and the chosen of the radius \(r=\frac{R_{k}}{\sqrt[k]{2}}\), we estimate the second term on the left side of (4.5) as follows \[C_{0}(2r)^{2s}\int_{-\infty}^{\bar{t}_{k}-(2r)^{2}}\int_{B_{2r}^{ c}(\bar{x}^{k})}\frac{v_{k}(y,\tau)e^{-\frac{|x^{k}-y|^{2}}{4(t_{k}-\tau)}}}{( \bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau \tag{4.8}\] \[= C_{0}(2r)^{2s}\int_{-\infty}^{\bar{t}_{k}-(2r)^{2}}\left[\int_{B_ {2r}^{c}(\bar{x}^{k})\cap\Omega}\frac{v_{k}(y,\tau)e^{-\frac{|x^{k}-y|^{2}}{4( t_{k}-\tau)}}}{(\bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y+\int_{B_{2r}^{ c}(\bar{x}^{k})\cap\Omega^{c}}\frac{(A+\varepsilon_{k})e^{-\frac{|x^{k}-y|^{2}}{4(t_{k}- \tau)}}}{(\bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\right.\] \[\left.-\int_{B_{2r}^{c}(\bar{x}^{k})\cap\Omega^{c}}\frac{(A+ \varepsilon_{k})e^{-\frac{|x^{k}-y|^{2}}{4(t_{k}-\tau)}}}{(\bar{t}_{k}-\tau)^{ \frac{n}{2}+1+s}}\,\mathrm{d}y+\int_{B_{2r}^{c}(\bar{x}^{k})\cap\Omega^{c}} \frac{v_{k}(y,\tau)e^{-\frac{|x^{k}-y|^{2}}{4(t_{k}-\tau)}}}{(\bar{t}_{k}- \tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\right]\mathrm{d}\tau\] \[\leq C_{0}(2r)^{2s}\int_{-\infty}^{\bar{t}_{k}-(2r)^{2}}\int_{B_{2r}^ {c}(\bar{x}^{k})}\frac{(A+\varepsilon_{k})e^{-\frac{|x^{k}-y|^{2}}{4(t_{k}- \tau)}}}{(\bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[-C_{0}(2r)^{2s}\int_{-\infty}^{\bar{t}_{k}-(3r)^{2}}\int_{B_{3r}^ {c}(x^{k})\cap\Omega^{c}}\frac{(A+\varepsilon_{k})e^{-\frac{|x^{k}-y|^{2}}{4( t_{k}-\tau)}}}{(\bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[= A+\varepsilon_{k}-C_{0}(2r)^{2s}\int_{-\infty}^{\bar{t}_{k}-(3r )^{2}}\int_{B_{3r}^{c}(x^{k})\cap\Omega^{c}}\frac{(A+\varepsilon_{k})e^{-\frac {|x^{k}-y|^{2}}{4(t_{k}-\tau)}}}{(\bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\, \mathrm{d}y\,\mathrm{d}\tau\] \[\leq A+\varepsilon_{k}-C_{0}(\frac{2R_{k}}{\sqrt[k]{2}})^{2s}\int_{ \bar{t}_{k}-(3R_{k})^{2}}^{\bar{t}_{k}-(\frac{3R_{k}}{\sqrt[k]{2}})^{2}}\int_ {(B_{3R_{k}}(x^{k})\setminus B_{\frac{3R_{k}}{\sqrt[k]{2}}}^{c}(x^{k}))\cap \Omega^{c}}\frac{(A+\varepsilon_{k})e^{-\frac{|x^{k}-y|^{2}}{4(t_{k}-\tau)}}}{( \bar{t}_{k}-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\,\mathrm{d}\tau\] \[\leq A+\varepsilon_{k}-C_{0}(\frac{2R_{k}}{\sqrt[k]{2}})^{2s}(A+ \varepsilon_{k})e^{-\frac{(3\sqrt[k]{2}+1)^{2}}{36}}\frac{(3R_{k})^{2}-(\frac{ 3R_{k}}{\sqrt[k]{2}})^{2}}{(3R_{k})^{n+2+2s}}\left|(B_{3R_{k}}(x^{k})\setminus B _{\frac{3R_{k}}{\sqrt[k]{2}}}^{c}(x^{k}))\cap\Omega^{c}\right|\] \[\leq A+\varepsilon_{k}-C_{0}(\frac{2R_{k}}{\sqrt[k]{2}})^{2s}(A+ \varepsilon_{k})e^{-\frac{(3\sqrt[k]{2}+1)^{2}}{36}}\frac{(3R_{k})^{2}-(\frac {3R_{k}}{\sqrt[k]{2}})^{2}}{(3R_{k})^{n+2+2s}}\frac{\bar{C}}{4}\left|B_{3R_{k} }(x^{k})\right|\] \[\leq (1-C)(A+\varepsilon_{k}).\] Finally, substituting (4.8) into (4.5), we deduce that \[0<A\leq C\varepsilon_{k}+(1-C)A,\] which is a contradiction for sufficiently large \(k\), and thus completes the proof of Theorem 1.4. It is well known that the maximum principle is a powerful tool and has many straightforward applications in the study of partial differential equations. For instance, Theorem 1.4 can be directly used to establish Corollary 1.5, that is the monotonicity of solutions to master equations on an upper half space. 
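Before giving the proof, we note that the upper half space \(\Omega=\mathbb{R}^{n}_{+}\) satisfies the measure condition (1.16) with a limiting ratio close to \(1/2\), whereas the domain of Remark 1.4, whose complement is the slab \(\{|x_{n}|\leq 1\}\), does not. The following small Monte Carlo sketch illustrates this; it is purely illustrative, plays no role in the proofs, and assumes only NumPy, with the dimension, base point, radii, and sample size chosen arbitrarily.

```python
# Monte Carlo illustration of condition (1.16): estimate |B_R(x) ∩ Omega^c| / |B_R(x)|
# for Omega = {x_n > 0} (ratio near 1/2 for large R) and for the domain of Remark 1.4,
# Omega = {|x_n| > 1}, whose complement is the slab {|x_n| <= 1} (ratio decaying to 0).
import numpy as np

rng = np.random.default_rng(0)

def ball_ratio(in_complement, center, R, n, m=200_000):
    """Estimate |B_R(center) ∩ Omega^c| / |B_R(center)| by uniform sampling in the ball."""
    d = rng.normal(size=(m, n))
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # uniform random directions
    radii = R * rng.random(m) ** (1.0 / n)          # radii giving uniform points in the ball
    pts = center + radii[:, None] * d
    return in_complement(pts).mean()

n = 3
x = np.array([0.7, -1.3, 2.0])                      # a point of the half space {x_n > 0}
half_space_c = lambda p: p[:, -1] <= 0.0            # complement of the half space
slab = lambda p: np.abs(p[:, -1]) <= 1.0            # complement of Remark 1.4's domain

for R in (5.0, 50.0, 500.0):
    print(R, ball_ratio(half_space_c, x, R, n), ball_ratio(slab, x, R, n))
```

For the half space the estimated ratio stabilizes near \(1/2\) as \(R\) grows, while for the slab complement it decays towards \(0\), in agreement with Remark 1.4.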
Proof of Corollary 1.5.: From now on, for any \(\lambda>0\), let \[u_{\lambda}(x,t):=u(x^{\lambda},t),\] where \(x^{\lambda}:=x+\lambda e_{n}\) with \(e_{n}=(0,...,0,1)\), and \[w_{\lambda}(x,t):=u(x,t)-u_{\lambda}(x,t),\] where the positions of the points \(x\) and \(x^{\lambda}\) as illustrated in Figure 4 below. If \(u(x,t)\) is not increasing with respect to \(x_{n}\) in \(\mathbb{R}^{n}_{+}\) for any \(t\in\mathbb{R}\), then there exists some point \((x,t)\in\mathbb{R}^{n}_{+}\times\mathbb{R}\) such that \(w_{\lambda}(x,t)>0\). Since \(f(t,u)\) is monotonically decreasing with respect to \(u\), we derive \[(\partial_{t}-\Delta)^{s}w_{\lambda}(x,t) = (\partial_{t}-\Delta)^{s}u(x,t)-(\partial_{t}-\Delta)^{s}u_{ \lambda}(x,t) \tag{4.9}\] \[= f(t,u(x,t))-f(t,u_{\lambda}(x,t))\leq 0\] in \(\mathbb{R}^{n}_{+}\times\mathbb{R}\) at the points where \(w_{\lambda}(x,t)>0\). Meanwhile, a combination the exterior condition in (1.19) and the nonnegativity of \(u(x,t)\) yields that \[w_{\lambda}(x,t)=u(x,t)-u_{\lambda}(x,t)=-u_{\lambda}(x,t)\leq 0\mbox{ in }( \mathbb{R}^{n}\setminus\mathbb{R}^{n}_{+})\times\mathbb{R}.\] Then by virtue of Theorem 1.4 with \(\Omega=\mathbb{R}^{n}_{+}\), we conclude that \[w_{\lambda}(x,t)\leq 0\mbox{ in }\mathbb{R}^{n}_{+}\times\mathbb{R}\] for any \(\lambda>0\). It implies that \(u(x,t)\) must be increasing with respect to \(x_{n}\) in \(\mathbb{R}^{n}_{+}\) for any \(t\in\mathbb{R}\). In the sequel, we further prove that the increase of \(u(x,t)\) is strict, it suffices to claim that \[w_{\lambda}(x,t)<0\mbox{ in }\mathbb{R}^{n}_{+}\times\mathbb{R} \tag{4.10}\] for any \(\lambda>0\). If not, then there exist some \(\lambda_{0}>0\) and a point \((x^{0},t_{0})\in\mathbb{R}^{n}_{+}\times\mathbb{R}\) such that \[w_{\lambda_{0}}(x^{0},t_{0})=0=\max_{\mathbb{R}^{n}\times\mathbb{R}}w_{ \lambda_{0}}(x,t).\] Figure 4. The positions of the points. Combining the differential inequality (4.9) with the definition of the nonlocal operator \((\partial_{t}-\Delta)^{s}\), we deduce \[(\partial_{t}-\Delta)^{s}w_{\lambda_{0}}(x^{0},t_{0})=C_{n,s}\int_{-\infty}^{t_{ 0}}\int_{\mathbb{R}^{n}}\frac{-w_{\lambda_{0}}(y,\tau)}{(t_{0}-\tau)^{\frac{n} {2}+1+s}}e^{-\frac{|x^{0}-y|^{2}}{4(t_{0}-\tau)}}\,\mathrm{d}y\,\mathrm{d}\tau =0.\] By using \(w_{\lambda_{0}}(x,t)\leq 0\) in \(\mathbb{R}^{n}\times\mathbb{R}\), then we must have \(w_{\lambda_{0}}(x,t)\equiv 0\) in \(\mathbb{R}^{n}\times(-\infty,t_{0}]\), which contradicts the fact that \(w_{\lambda_{0}}(x,t)\not\equiv 0\) in \(\mathbb{R}^{n}\) for any fixed \(t\in(-\infty,t_{0}]\) due to the exterior condition and the interior positivity of \(u(x,t)\). Hence, we verify that the assertion (4.10) is valid, then the proof of Corollary 1.5 is complete. ## 5. The proof of Gibbons' conjecture for master equations This section provides a complete proof of Gibbons' conjecture for master equations, which is based on a direct sliding method applicable to the master equation version, with the weighted average inequality (i.e., Theorem 1.2) and the maximum principle in unbounded domains (i.e., Theorem 1.4) as effective techniques throughout. For the ease of readability, we now present an outline of the proof. Keeping the notations \(x^{\lambda}\), \(u_{\lambda}\) and \(w_{\lambda}\) defined in the proof of Corollary 1.5, we proceed in three steps and first argue that the assertion \[w_{\lambda}(x,t)\leq 0\text{ in }\mathbb{R}^{n}\times\mathbb{R} \tag{5.1}\] is valid for sufficiently large \(\lambda\) by using the maximum principle in unbounded domains. 
The first step provides a starting point for sliding the domain upwards along the \(x_{n}\)-axis. In the second step, we continuously decrease \(\lambda\) to its limiting position as long as the inequality (5.1) holds. We aim to show that the domain can slide back to overlap with the original domain. Otherwise, we will utilize the maximum principle in unbounded domains again and combine the weighted average inequality with the perturbation technique and the limit argument to derive a contradiction. Based on (5.1) for any \(\lambda>0\), we further deduce that \(u(x,t)\) is strictly increasing with respect to \(x_{n}\) for any \(t\in\mathbb{R}\) by establishing the strong maximum principle \[w_{\lambda}(x,t)<0,\text{ in }\mathbb{R}^{n}\times\mathbb{R}\text{ for any }\lambda>0.\] In the final step, we apply the sliding method along any direction that forms an acute angle with the positive \(x_{n}\)-axis to demonstrate that the entire solution \(u(x,t)\) is one-dimensionally symmetric for any \(t\in\mathbb{R}\), that is, \[u(x^{\prime},x_{n},t)=u(x_{n},t)\text{ for any }t\in\mathbb{R}.\] Next, we provide the detailed proof of Gibbons' conjecture for master equations. Proof of Theorem 1.1.: The proof is divided into three steps. **Step 1.** We first show that \[w_{\lambda}(x,t)\leq 0\text{ in }\mathbb{R}^{n}\times\mathbb{R} \tag{5.2}\] for sufficiently large \(\lambda\). The uniform convergence condition on \(u\) in Theorem 1.1 implies that there exists a sufficiently large \(a>0\) such that \[|u(x,t)|\geq 1-\delta\text{ for }|x_{n}|\geq a\text{ and }(x^{\prime},t)\in\mathbb{R}^{n-1}\times\mathbb{R}. \tag{5.3}\] In order to prove (5.2), assume on the contrary that there exists a positive constant \(A\) such that \[\sup_{\mathbb{R}^{n}\times\mathbb{R}}w_{\lambda}(x,t)=A>0 \tag{5.4}\] for any sufficiently large \(\lambda\). Consider the auxiliary function \[\bar{w}_{\lambda}(x,t):=w_{\lambda}(x,t)-\frac{A}{2};\] we now claim that \[\bar{w}_{\lambda}(x,t)\leq 0\text{ in }\mathbb{R}^{n}\times\mathbb{R} \tag{5.5}\] for sufficiently large \(\lambda\). We further choose a sufficiently large constant \(M>a\) such that \[\bar{w}_{\lambda}(x,t)\leq 0\text{ for any }(x,t)\in\mathbb{R}^{n}\times\mathbb{R}\text{ with }x_{n}\geq M.\] We denote the unbounded domain \(\Omega:=\mathbb{R}^{n-1}\times(-\infty,M)\); it then follows that \[\bar{w}_{\lambda}(x,t)\leq 0\text{ in }\Omega^{c}\times\mathbb{R},\] which is in accordance with the exterior condition in Theorem 1.4. Next, we verify that the differential inequality \[(\partial_{t}-\Delta)^{s}\bar{w}_{\lambda}(x,t)=(\partial_{t}-\Delta)^{s}w_{\lambda}(x,t)=f(t,u(x,t))-f(t,u_{\lambda}(x,t))\leq 0 \tag{5.6}\] holds at the points in \(\Omega\times\mathbb{R}\) where \(\bar{w}_{\lambda}(x,t)>0\) for sufficiently large \(\lambda\), as required by the differential inequality in Theorem 1.4. We distinguish three cases and first argue that the assertion (5.6) is true for \(|x_{n}|\leq a\) with any \(\lambda\geq 2a\). In this case, we must have \(x_{n}+\lambda\geq a\), and then (5.3) indicates that \[1\geq u(x,t)>u_{\lambda}(x,t)+\frac{A}{2}>u_{\lambda}(x,t)\geq 1-\delta\] for any \((x^{\prime},t)\in\mathbb{R}^{n-1}\times\mathbb{R}\) at the points where \(\bar{w}_{\lambda}(x,t)>0\).
Thereby for any fixed \(t\in\mathbb{R}\), applying the non-increasing assumption on \(f(t,u)\) for \(u\in[1-\delta,1]\), we derive \[(\partial_{t}-\Delta)^{s}\bar{w}_{\lambda}(x,t)=f(t,u(x,t))-f(t,u_{\lambda}( x,t))\leq 0\] at the points in \(\Omega\times\mathbb{R}\) with \(|x_{n}|\leq a\) where \(\bar{w}_{\lambda}(x,t)>0\) for any \(\lambda\geq 2a\). While if \(x_{n}<-a\), then by virtue of (5.3), we obtain \[-1\leq u_{\lambda}(x,t)<u(x,t)-\frac{A}{2}<u(x,t)\leq-1+\delta\] for any \((x^{\prime},t)\in\mathbb{R}^{n-1}\times\mathbb{R}\) at the points where \(\bar{w}_{\lambda}(x,t)>0\). Using the non-increasing assumption on \(f(t,u)\) with respect to \(u\in[-1,-1+\delta]\) for any fixed \(t\in\mathbb{R}\), we derive \[(\partial_{t}-\Delta)^{s}\bar{w}_{\lambda}(x,t)=f(t,u(x,t))-f(t,u_{\lambda}( x,t))\leq 0\] at the points in \(\Omega\times\mathbb{R}\) with \(x_{n}<-a\) where \(\bar{w}_{\lambda}(x,t)>0\). The last case is \(x_{n}\in(a,M)\), by virtue of (5.3) again, we have \[1\geq u(x,t)>u_{\lambda}(x,t)+\frac{A}{2}>u_{\lambda}(x,t)\geq 1-\delta\] for any \((x^{\prime},t)\in\mathbb{R}^{n-1}\times\mathbb{R}\) at the points where \(\bar{w}_{\lambda}(x,t)>0\). Then for any fixed \(t\in\mathbb{R}\), applying the non-increasing assumption on \(f(t,u)\) for \(u\in[1-\delta,1]\) again, we deduce that the assertion (5.6) is valid for \(x_{n}\in(a,M)\). In conclusion, we show that \(\bar{w}_{\lambda}(x,t)\) satisfies \[\left\{\begin{array}{ll}(\partial_{t}-\Delta)^{s}\bar{w}_{\lambda}(x,t)\leq 0,&\mbox{at the points in }\Omega\times\mathbb{R}\mbox{ where }\bar{w}_{\lambda}(x,t)>0,\\ \bar{w}_{\lambda}(x,t)\leq 0,&\mbox{in }\Omega^{c}\times\mathbb{R},\end{array}\right.\] for any \(\lambda\geq 2a\). Here the unbounded domain \(\Omega=\mathbb{R}^{n-1}\times(-\infty,M)\) clearly satisfies the limit condition (1.16) stated in Theorem 1.4. Thus, Theorem 1.4 infers that \(\bar{w}_{\lambda}(x,t)\leq 0\) in \(\mathbb{R}^{n}\times\mathbb{R}\) for any \(\lambda\geq 2a\). That is to say, \[w_{\lambda}(x,t)\leq\frac{A}{2}\mbox{ in }\mathbb{R}^{n}\times\mathbb{R}\] for any \(\lambda\geq 2a\), which is a contradiction with (5.4). Hence, we verify that \[w_{\lambda}(x,t)\leq 0,\,(x,t)\in\mathbb{R}^{n}\times\mathbb{R}\] for any \(\lambda\geq 2a\). Indeed, Step 1 provides a starting point for sliding the domain along the \(x_{n}\)-axis. **Step 2.** In this step, we continuously decrease \(\lambda\) to its limiting position as long as the inequality (5.2) holds and define \[\lambda_{0}:=\inf\{\lambda\mid w_{\lambda}(x,t)\leq 0,\,(x,t)\in\mathbb{R}^{n} \times\mathbb{R}\}.\] We devote to proving that the limiting position \[\lambda_{0}=0.\] The argument goes by contradiction, if not, then \(\lambda_{0}>0\), we show that \(\lambda_{0}\) can be decreased a little bit while the inequality (5.2) is still valid in this case, which contradicts the definition of \(\lambda_{0}\). With this aim in mind, we first claim that \[\sup_{(x,t)\in(\mathbb{R}^{n-1}\times[-a,a])\times\mathbb{R}}w_{\lambda_{0}}(x,t)<0, \tag{5.7}\] where the sufficiently large \(a>0\) is defined in (5.3). 
If the assertion (5.7) is violated, then we have \[\sup_{(x,t)\in(\mathbb{R}^{n-1}\times[-a,a])\times\mathbb{R}}w_{\lambda_{0}}(x,t)=0,\] which means that there exist sequences \(\{(x^{k},t_{k})\}\subset(\mathbb{R}^{n-1}\times[-a,a])\times\mathbb{R}\) and \(\{\varepsilon_{k}\}\subset\mathbb{R}_{+}\) such that \[w_{\lambda_{0}}(x^{k},t_{k})=:-\varepsilon_{k}\to 0,\mbox{ as }k\to\infty.\] We further introduce the following auxiliary function \[w_{k}(x,t):=w_{\lambda_{0}}(x,t)+\varepsilon_{k}\eta_{k}(x,t)\] to remedy the supremum of \(w_{\lambda_{0}}(x,t)\) may not be attained due to the set \((\mathbb{R}^{n-1}\times[-a,a])\times\mathbb{R}\) is unbounded. Here \[\eta_{k}(x,t)=\eta(x-x^{k},t-t_{k})\in C_{0}^{\infty}\left(B_{1}(x^{k})\times (-1+t_{k},1+t_{k})\right)\] is a smooth cut-off function satisfying \[\eta_{k}(x,t)\equiv 1\ \mbox{in}\ B_{\frac{3}{2}}(x^{k})\times\left(-\frac{1}{2 }+t_{k},\frac{1}{2}+t_{k}\right),\ \mbox{and}\ 0\leq\eta_{k}(x,t)\leq 1.\] Let the parabolic cylinder \[Q_{1}(x^{k},t_{k}):=B_{1}(x^{k})\times(-1+t_{k},1+t_{k}),\] then through a direct calculation, we obtain \[w_{k}(x^{k},t_{k})=w_{\lambda_{0}}(x^{k},t_{k})+\varepsilon_{k}\eta_{k}(x^{k},t_{k})=-\varepsilon_{k}+\varepsilon_{k}=0,\] and \[w_{k}(x,t)=w_{\lambda_{0}}(x,t)\leq 0,\ \mbox{in}\ (\mathbb{R}^{n}\times \mathbb{R})\setminus Q_{1}(x^{k},t_{k}).\] Hence, the perturbed function \(w_{k}(x,t)\) can attain its maximum value at some point \((\bar{x}^{k},\bar{t}_{k})\in Q_{1}(x^{k},t_{k})\) such that \[\varepsilon_{k}\geq w_{k}(\bar{x}^{k},\bar{t}_{k})=\sup_{\mathbb{R}^{n}\times \mathbb{R}}w_{k}(x,t)\geq 0. \tag{5.8}\] Moreover, the definition of \(w_{k}(x,t)\) yields that \[0\geq w_{\lambda_{0}}(\bar{x}^{k},\bar{t}_{k})\geq-\varepsilon_{k}. \tag{5.9}\] For the sake of illustration, we introduce the translation function \[\bar{w}_{k}(x,t):=w_{k}(x+\bar{x}^{k},t+\bar{t}_{k}).\] It follows from (5.8) that \[\varepsilon_{k}\geq\bar{w}_{k}(0,0)=\sup_{\mathbb{R}^{n}\times\mathbb{R}}\bar {w}_{k}(x,t)\geq 0, \tag{5.10}\] and then \[(\partial-\Delta)^{s}\bar{w}_{k}(0,0)=C_{n,s}\int_{-\infty}^{0}\int_{\mathbb{ R}^{n}}\frac{\bar{w}_{k}(0,0)-\bar{w}_{k}(y,\tau)}{(-\tau)^{\frac{n}{2}+1+s}}e^{ \frac{|y|^{2}}{4\tau}}\,\mathrm{d}y\,\mathrm{d}\tau\geq 0.\] Next, we aim to claim that \[(\partial-\Delta)^{s}\bar{w}_{k}(0,0)\to 0\ \mbox{as}\ k\to\infty. \tag{5.11}\] Combining the equation satisfied by \(u\) with Lemma 2.1, we compute \[0\leq(\partial-\Delta)^{s}\bar{w}_{k}(0,0) = (\partial-\Delta)^{s}w_{k}(\bar{x}^{k},\bar{t}_{k})\] \[= (\partial-\Delta)^{s}w_{\lambda_{0}}(\bar{x}^{k},\bar{t}_{k})+ \varepsilon_{k}(\partial-\Delta)^{s}\eta_{k}(\bar{x}^{k},\bar{t}_{k})\] \[\leq f(\bar{t}_{k},u(\bar{x}^{k},\bar{t}_{k}))-f(\bar{t}_{k},u_{\lambda_{0}}( \bar{x}^{k},\bar{t}_{k}))+C\varepsilon_{k}\] \[\to 0\mbox{ as }k\to\infty,\] where the last line we use the continuity of \(f\) and the fact that \[w_{\lambda_{0}}(\bar{x}^{k},\bar{t}_{k})=u(\bar{x}^{k},\bar{t}_{k})-u_{ \lambda_{0}}(\bar{x}^{k},\bar{t}_{k})\to 0\ k\to\infty\] by (5.9). Thus, we verify that the assertion (5.11) is valid. Now applying Theorem 1.2 to \(\bar{w}_{k}(x,t)\) at its maximum point \((0,0)\), we have \[\bar{w}_{k}(0,0)\leq\frac{C_{0}}{C_{n,s}}r^{2s}(\partial_{t}-\Delta)^{s}\bar{ w}_{k}(0,0)+C_{0}r^{2s}\int_{-\infty}^{-r^{2}}\int_{B_{r}^{c}(0)}\frac{\bar{w}_{k} (y,\tau)e^{\frac{|y|^{2}}{4\tau}}}{(-\tau)^{\frac{n}{2}+1+s}}\,\mathrm{d}y\, \mathrm{d}\tau \tag{5.12}\] for any \(r>0\), where the positive constant \(C_{0}\) is defined in Theorem 1.2. 
If we select the radius \(r>2\) such that the point \((y+\bar{x}^{k},\tau+\bar{t}_{k})\not\in Q_{1}(x^{k},t_{k})\) for \((y,\tau)\in B_{r}^{c}(0)\times(-\infty,-r^{2})\), then \[C_{0}r^{2s}\int_{-\infty}^{-r^{2}}\int_{B_{r}^{c}(0)}\frac{\bar{w }_{k}(y,\tau)}{(-\tau)^{\frac{n}{2}+1+s}}e^{\frac{|y|^{2}}{4\tau}}\,\mathrm{d}y \,\mathrm{d}\tau\] \[= C_{0}r^{2s}\int_{-\infty}^{-r^{2}}\int_{B_{r}^{c}(0)}\frac{w_{ \lambda_{0}}(y+\bar{x}^{k},\tau+\bar{t}_{k})+\varepsilon_{k}\eta_{k}(y+\bar{x} ^{k},\tau+\bar{t}_{k})}{(-\tau)^{\frac{n}{2}+1+s}}e^{\frac{|y|^{2}}{4\tau}}\, \mathrm{d}y\,\mathrm{d}\tau\] \[= C_{0}r^{2s}\int_{-\infty}^{-r^{2}}\int_{B_{r}^{c}(0)}\frac{w_{ \lambda_{0}}(y+\bar{x}^{k},\tau+\bar{t}_{k})}{(-\tau)^{\frac{n}{2}+1+s}}e^{ \frac{|y|^{2}}{4\tau}}\,\mathrm{d}y\,\mathrm{d}\tau\leq 0\] by the definition of \(\lambda_{0}\). Thereby a combination of (5.10)-(5.12) leads to \[C_{0}r^{2s}\int_{-\infty}^{-r^{2}}\int_{B_{r}^{c}(0)}\frac{\bar{w}_{k}(y,\tau )}{(-\tau)^{\frac{n}{2}+1+s}}e^{\frac{|y|^{2}}{4\tau}}\,\mathrm{d}y\,\mathrm{d }\tau\to 0\mbox{ as }k\to\infty,\] which indicates that \[\bar{w}_{k}(x,t)\to 0\mbox{ for }(x,t)\in B_{r}^{c}(0)\times(-\infty,-r^{2}), \mbox{ as }k\to\infty. \tag{5.13}\] We further take the same translation for \(u\) as follows \[u_{k}(x,t):=u(x+\bar{x}^{k},t+\bar{t}_{k}),\] then applying Arzela-Ascoli theorem to deduce that there exists a subsequence of \(\{u_{k}\}\) (still denoted by \(\{u_{k}\}\)) such that \[u_{k}(x,t)\to u_{\infty}(x,t)\mbox{ in }B_{2R}(0)\times(-R^{2},-r^{2}), \mbox{ as }k\to\infty \tag{5.14}\] for a fixed radius \(R>\max\{2r,\,\lambda_{0}\}\) to be determined later. Combining (5.13) with (5.14) and the definition of \(\bar{w}_{k}\), we derive \[0\leftarrow\bar{w}_{k}(x,t) = w_{k}(x+\bar{x}^{k},t+\bar{t}_{k})\] \[= w_{\lambda_{0}}(x+\bar{x}^{k},t+\bar{t}_{k})+\varepsilon_{k} \eta_{k}(x+\bar{x}^{k},t+\bar{t}_{k})\] \[= u(x+\bar{x}^{k},t+\bar{t}_{k})-u_{\lambda_{0}}(x+\bar{x}^{k},t+ \bar{t}_{k})\] \[= u_{k}(x,t)-(u_{k})_{\lambda_{0}}(x,t)\] \[\rightarrow u_{\infty}(x,t)-(u_{\infty})_{\lambda_{0}}(x,t),\ \mbox{in}\ \left(B_{R}(0)\setminus B_{r}(0)\right)\times(-R^{2},-r^{2}),\ \mbox{as}\ k\rightarrow\infty,\] which implies that \[u_{\infty}(x,t)-(u_{\infty})_{\lambda_{0}}(x,t)\equiv 0,\ \mbox{in}\ \left(B_{R}(0) \setminus B_{r}(0)\right)\times(-R^{2},-r^{2}).\] Hence, it follows that \[u_{\infty}(x^{\prime},x_{n},t)=u_{\infty}(x^{\prime},x_{n}+\lambda_{0},t)=u_{ \infty}(x^{\prime},x_{n}+2\lambda_{0},t)=...=u_{\infty}(x^{\prime},x_{n}+i \lambda_{0},t) \tag{5.15}\] for any fixed \((x,t)\in(B_{R}(0)\setminus B_{r}(0))\times(-R^{2},-r^{2})\) with \(|x^{\prime}|>r\), and \(i\in\mathbb{N}\) such that \[(x^{\prime},x_{n}+(i-1)\lambda_{0})\in B_{R}(0)\setminus B_{r}(0)\ \mbox{and}\ (x^{\prime},x_{n}+i\lambda_{0})\not\in B_{R}(0)\setminus B_{r}(0).\] However, in terms of the uniform convergence condition on \(u(x,t)\) and \(\bar{x}_{n}^{k}\) is bounded due to \(\bar{x}^{k}\in B_{1}(x^{k})\) and \(x_{n}^{k}\in[-a,a]\), we choose the radius \(R\) large enough such that \[u_{\infty}(x,t)\leq\delta-1\ \mbox{for}\ x_{n}\leq-\frac{R}{2},\ \mbox{and}\ u_{\infty}(x,t)\geq 1-\delta \ \mbox{for}\ x_{n}\geq\frac{R}{2},\] which contradicts the equality (5.15) by selecting \[(x,t)\in(B_{R}(0)\setminus B_{r}(0))\times(-R^{2},-r^{2})\ \mbox{with}\ |x^{\prime}|>r\ \mbox{and}\ x_{n}\leq-\frac{R}{2},\] as illustrated in Figure 5 below. Therefore, we conclude that the assertion (5.7) holds. 
In the sequel, we continue to show that there exists a small positive constant \(\varepsilon\) such that \[w_{\lambda}(x,t)\leq 0,\ \mbox{in}\ \mathbb{R}^{n}\times\mathbb{R}\ \mbox{for any}\ \lambda\in(\lambda_{0}-\varepsilon,\lambda_{0}] \tag{5.16}\] Figure 5. The choice of points. in the case of \(\lambda_{0}>0\). Note that combining the aforementioned conclusion (5.7) and the continuity of \(w_{\lambda}\) with respect to \(\lambda\), there exists a small positive constant \(\varepsilon\) such that \[\sup_{(x,t)\in(\mathbb{R}^{n-1}\times[-a,a])\times\mathbb{R}}w_{\lambda}(x,t) \leq 0\text{ for any }\lambda\in(\lambda_{0}-\varepsilon,\lambda_{0}]. \tag{5.17}\] Then in order to verify the validity of (5.16), it suffices to prove that \[\sup_{(x^{\prime},t)\in\mathbb{R}^{n-1}\times\mathbb{R},\,|x_{n}|>a}w_{\lambda }(x,t)\leq 0\text{ for any }\lambda\in(\lambda_{0}-\varepsilon,\lambda_{0}].\] Otherwise, there exists some \(\lambda\in(\lambda_{0}-\varepsilon,\lambda_{0}]\) such that \[\sup_{(x^{\prime},t)\in\mathbb{R}^{n-1}\times\mathbb{R},\,|x_{n}|>a}w_{ \lambda}(x,t)=:A>0. \tag{5.18}\] If we directly choose \(\Omega=\{(x,t)\in\mathbb{R}^{n}\times\mathbb{R}\mid|x_{n}|>a\}\) as an unbounded set to use the maximum principle established in Theorem 1.4, the "size" of \(\Omega^{c}\) is "too small" as compared to the "size" of \(\Omega\) in the sense of limit condition (1.16), then this condition is not valid for such \(\Omega\). Hence, we further need to shrink the unbounded set \(\Omega\). More precisely, employing the uniform convergence condition of \(u(x,t)\), we can select a sufficiently large constant \(M>a\) such that \[w_{\lambda}(x,t)\leq\frac{A}{2}\text{ for any }(x,t)\in\mathbb{R}^{n}\times \mathbb{R}\text{ with }|x_{n}|\geq M. \tag{5.19}\] Let \[\Omega:=\{(x,t)\in\mathbb{R}^{n}\times\mathbb{R}\mid a<|x_{n}|<M\}\] Figure 6. The stripe region \(\Omega\). be the blue stripe-shaped region in Figure 6, and we denote \[v_{\lambda}(x,t):=w_{\lambda}(x,t)-\frac{A}{2},\] then a combination of (5.17) and (5.19) yields the exterior condition \[v_{\lambda}(x,t)\leq 0,\ \text{in}\ \Omega^{c}\times\mathbb{R}.\] Now we prove the following differential inequality \[(\partial_{t}-\Delta)^{s}v_{\lambda}(x,t)=(\partial_{t}-\Delta)^{s}w_{\lambda} (x,t)=f(t,u(x,t))-f(t,u_{\lambda}(x,t))\leq 0 \tag{5.20}\] is fulfilled at the points in \(\Omega\times\mathbb{R}\) where \(v_{\lambda}(x,t)>0\). Since \(v_{\lambda}(x,t)>0\) infers that \(w_{\lambda}(x,t)>0\), if \(a<x_{n}<M\), then it follows from (5.3) that \[1\geq u(x,t)>u_{\lambda}(x,t)\geq 1-\delta\] for any \((x^{\prime},t)\in\mathbb{R}^{n-1}\times\mathbb{R}\) at the points where \(v_{\lambda}(x,t)>0\). Applying the non-increasing assumption on \(f(t,u)\) with respect to \(u\) in \([1-\delta,1]\), we obtain \[(\partial_{t}-\Delta)^{s}v_{\lambda}(x,t)=f(t,u(x,t))-f(t,u_{\lambda}(x,t))\leq 0\] at the points in \(\Omega\times\mathbb{R}\) with \(a<x_{n}<M\) where \(v_{\lambda}(x,t)>0\). While if \(-M<x_{n}<-a\), then (5.3) implies that \[-1\leq u_{\lambda}(x,t)<u(x,t)\leq-1+\delta\] for any \((x^{\prime},t)\in\mathbb{R}^{n-1}\times\mathbb{R}\) at the points where \(v_{\lambda}(x,t)>0\). Using the non-increasing assumption on \(f(t,u)\) for \(u\in[-1,-1+\delta]\), we derive \[(\partial_{t}-\Delta)^{s}v_{\lambda}(x,t)=f(t,u(x,t))-f(t,u_{\lambda}(x,t))\leq 0\] at the points in \(\Omega\times\mathbb{R}\) with \(-M<x_{n}<-a\) where \(v_{\lambda}(x,t)>0\). In conclusion, we verify that the differential inequality (5.20) is valid. 
Applying Theorem 1.4 to \(v_{\lambda}\) and combining with the definition of \(v_{\lambda}\), we deduce that \[w_{\lambda}(x,t)\leq\frac{A}{2},\,(x,t)\in\mathbb{R}^{n}\times\mathbb{R},\] which contradicts (5.18). It follows that the assertion (5.16) is true, which contradicts the definition of \(\lambda_{0}\), and hence \(\lambda_{0}=0\) and \(u(x,t)\) is increasing with respect to \(x_{n}\) for any \(t\in\mathbb{R}\). We conclude this step by further proving that \(u(x,t)\) is strictly increasing with respect to \(x_{n}\), i.e., \[w_{\lambda}(x,t)<0,\ \text{in}\ \mathbb{R}^{n}\times\mathbb{R}\ \text{for any}\ \lambda>0. \tag{5.21}\] If (5.21) is violated, then there exist some \(\lambda_{0}>0\) and a point \((x^{0},t_{0})\in\mathbb{R}^{n}\times\mathbb{R}\) such that \(w_{\lambda_{0}}(x^{0},t_{0})=0\), and \((x^{0},t_{0})\) is a maximum point of \(w_{\lambda_{0}}(x,t)\) in \(\mathbb{R}^{n}\times\mathbb{R}\). On one hand, we directly calculate \[(\partial_{t}-\Delta)^{s}w_{\lambda_{0}}(x^{0},t_{0})=C_{n,s}\int_{-\infty}^{t_{0}}\int_{\mathbb{R}^{n}}\frac{-w_{\lambda_{0}}(y,\tau)}{(t_{0}-\tau)^{\frac{n}{2}+1+s}}e^{-\frac{|x^{0}-y|^{2}}{4(t_{0}-\tau)}}\mathrm{d}y\,\mathrm{d}\tau\geq 0.\] On the other hand, we have \[(\partial_{t}-\Delta)^{s}w_{\lambda_{0}}(x^{0},t_{0})=f(t_{0},u(x^{0},t_{0}))-f(t_{0},u_{\lambda_{0}}(x^{0},t_{0}))=0.\] As a consequence of the above estimates, we derive \[w_{\lambda_{0}}(x,t)\equiv 0,\ \text{in}\ \mathbb{R}^{n}\times(-\infty,t_{0}),\] which contradicts the uniform convergence condition of \(u(x,t)\) with respect to \(x_{n}\) for any fixed \((x^{\prime},t)\in\mathbb{R}^{n-1}\times(-\infty,t_{0})\). Therefore, we deduce that (5.21) is valid, and then \(u(x,t)\) is strictly increasing with respect to \(x_{n}\). **Step 3.** We finally prove that the entire solution \(u(x,t)\) is one-dimensionally symmetric for any \(t\in\mathbb{R}\), that is, \(u(x,t)=u(x_{n},t)\) is independent of \(x^{\prime}\). Proceeding similarly to Steps 1 and 2, we can derive \[u(x+\lambda\nu,t)>u(x,t),\ \text{in}\ \mathbb{R}^{n}\times\mathbb{R}\ \text{for any}\ \lambda>0,\] and every vector \(\nu=(\nu_{1},...,\nu_{n})\) with \(\nu_{n}>0\). This implies that \(u(x,t)\) is strictly increasing along any direction which forms an acute angle with the positive \(x_{n}\)-axis. Letting \(\nu_{n}\to 0\), by the continuity of \(u(x,t)\) the inequality is preserved in the sense that \[u(x+\lambda\nu,t)\geq u(x,t),\,(x,t)\in\mathbb{R}^{n}\times\mathbb{R}\ \text{for any}\ \lambda>0.\] Note that \(\nu\) can be any given direction perpendicular to the \(x_{n}\)-axis; we therefore conclude that \(u(x,t)\) must be independent of \(x^{\prime}\), i.e., \(u(x,t)=u(x_{n},t)\) for any \(t\in\mathbb{R}\). Hence, we complete the proof of Gibbons' conjecture for master equations. ## Acknowledgments The work of the first author is partially supported by MPS Simons foundation 847690, and the work of the second author is partially supported by the National Natural Science Foundation of China (NSFC Grant No.12101452).
2305.16474
FairDP: Certified Fairness with Differential Privacy
This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP). FairDP independently trains models for distinct individual groups, using group-specific clipping terms to assess and bound the disparate impacts of DP. Throughout the training process, the mechanism progressively integrates knowledge from group models to formulate a comprehensive model that balances privacy, utility, and fairness in downstream tasks. Extensive theoretical and empirical analyses validate the efficacy of FairDP and improved trade-offs between model utility, privacy, and fairness compared with existing methods.
Khang Tran, Ferdinando Fioretto, Issa Khalil, My T. Thai, NhatHai Phan
2023-05-25T21:07:20Z
http://arxiv.org/abs/2305.16474v2
# FairDP: Certified Fairness with Differential Privacy ###### Abstract This paper introduces **FairDP**, a novel mechanism designed to simultaneously ensure differential privacy (DP) and fairness. FairDP operates by independently training models for distinct individual groups, using group-specific clipping terms to assess and bound the disparate impacts of DP. Throughout the training process, the mechanism progressively integrates knowledge from group models to formulate a comprehensive model that balances privacy, utility, and fairness in downstream tasks. Extensive theoretical and empirical analyses validate the efficacy of FairDP, demonstrating improved trade-offs between model utility, privacy, and fairness compared with existing methods. ## 1 Introduction The proliferation of machine learning (ML) systems in decision-making processes has brought important considerations regarding privacy, bias, and discrimination. These requirements are becoming pressing as ML systems are increasingly used to make decisions that significantly impact individuals' lives, such as in healthcare, finance, and criminal justice. These concerns underscore the need for ML algorithms that can guarantee both privacy and fairness. Differential Privacy (DP) is an algorithmic property that helps protect the sensitive information of individuals by preventing disclosure during computations. In the context of machine learning, it enables algorithms to learn from data while ensuring they do not retain sensitive information about any specific individual in the training data. However, it has been found that DP systems may produce biased and unfair results for different groups of individuals [2; 12; 40], which can have a significant impact on their lives, particularly in areas such as finance, criminal justice, or job-hiring [11]. The issue of balancing privacy and fairness in ML systems has been the subject of much discussion in recent years. For example, [6] showed the existence of a tradeoff between differential privacy and equal opportunity, a fairness criterion that requires a classifier to have equal true positive rates for different groups. Different studies have also reported that when models are trained on data with long-tailed distributions, it is challenging to develop a private learning algorithm that has high accuracy for minority groups [33]. These findings have led to the question of whether fair models can be created while preserving sensitive information and have spurred the development of various approaches [17; 24; 35; 36; 37] (see Appendix D for further discussion on related work). While these studies have contributed to a deeper understanding of the trade-offs between privacy and fairness, as well as the importance of addressing these issues in a unified manner, they all share a common limiting factor: _the inability to provide formal guarantees for both privacy and fairness simultaneously_. This aspect is essential and cannot be overstated. In many critical application contexts, such as those regulated by policy and laws, these guarantees are often required, and failure to provide them can prevent adoption or deployment [34]. This paper aims to address this gap by proposing novel mechanisms that simultaneously achieve differential privacy and provide certificates on fairness. 
The main challenges in developing such a mechanism are: **(1)** Designing appropriate DP algorithms that can limit the impact of privacy-preserving noise on the model bias; and **(2)** Balancing the trade-offs between model utility, privacy, and fairness, while simultaneously providing useful fairness certificates. **Contributions.** The paper makes two main contributions to address these challenges. First, it introduces a novel DP training mechanism called FairDP, which ensures certified fairness. The mechanism controls the amount of noise injected into groups of data points classified by fairness-sensitive attributes, such as race and gender. By controlling the disparate effects of noise on model fairness through group-specific clipping terms, FairDP enables the derivation and tightening of certified fairness bounds. Throughout the training process, the mechanism progressively integrates knowledge from each group model, leading to improved trade-offs between model utility, privacy, and fairness. Second, it conducts extensive experiments to analyze the interplay among utility, privacy, and fairness using various benchmark datasets. The results show that FairDP provides a better balance between privacy and fairness compared to existing baselines, including both DP-preserving mechanisms with or without fairness constraints. The significance of our theoretical and empirical analysis becomes apparent as it emphasizes the need to develop novel approaches for effectively combining data privacy preservation and fairness. In this context, FairDP represents an innovative solution that bridges this critical void. ## 2 Background and Research Goal The paper considers datasets \(D=\{(x_{i},a_{i},y_{i})\}_{i=1}^{n}\) whose samples are drawn from an unknown distribution. Therein, \(x_{i}\in\mathcal{X}\subset\mathbb{R}^{d}\) is a sensitive feature vector, \(a_{i}\in\mathcal{A}=[K]\) is a (group of) protected group attribute(s), and \(y_{i}\in\mathcal{Y}=\{0,1\}\) is a binary class label, similar to previous work [5; 38; 18]. For example, consider a classifier for predicting whether individuals may qualify for a loan. The data features \(x_{i}\) may describe the individuals' education, current job, and zip code. The protected attribute \(a_{i}\) may describe the individual's gender or race, and the label \(y_{i}\) indicates whether the individual would successfully repay a loan or not. The paper also uses notation \(D_{k}=\{(x_{i},a_{i}=k,y_{i})\}_{i=1}^{n_{k}}\) to denote a non-overlapping partition over dataset \(D\) which contains exclusively the individuals belonging to a protected group \(k\) and \(\cap_{k}D_{k}=\emptyset\). Although the results in this paper consider only one protected attribute, the results can be directly generalized to multiple protected attributes (see Appendix E). **Research Goal.** The paper studies models \(h_{\theta}:\mathcal{X}\rightarrow[0,1]\) parameterized by \(\theta\in\mathbb{R}^{r}\) and the learning task optimizes the empirical loss function: \[\mathcal{L}(D)=\min_{\theta}\;\sum_{(x_{i},a_{i},y_{i})\in D}\ell\left(h_{ \theta}(x_{i}),y_{i}\right),\] where \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}_{+}\) is a differentiable loss function. 
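To make this setup concrete, the following minimal sketch (our illustration, not code from the paper) spells out the ingredients just introduced: a logistic model \(h_{\theta}\), a differentiable per-example loss \(\ell\), the empirical loss \(\mathcal{L}(D)\), and the partition of \(D\) into the groups \(D_{k}\) indexed by the protected attribute. The array names, shapes, and toy data are assumptions made only for illustration.

```python
# Minimal, illustrative sketch of the setup: a logistic model h_theta : X -> [0, 1],
# a per-example binary cross-entropy loss, the empirical loss L(D), and the
# non-overlapping group partition D_k by protected attribute a.
import numpy as np

def h_theta(theta, X):
    """Model h_theta : X -> [0, 1] (logistic regression for illustration)."""
    return 1.0 / (1.0 + np.exp(-X @ theta))

def bce_loss(p, y):
    """Differentiable per-example loss l(h_theta(x), y): binary cross-entropy."""
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def empirical_loss(theta, X, y):
    """L(D): sum over (x_i, a_i, y_i) in D of l(h_theta(x_i), y_i)."""
    return bce_loss(h_theta(theta, X), y).sum()

def group_partition(X, a, y, K):
    """Return the groups D_k = {(x_i, a_i = k, y_i)} as index-sliced arrays."""
    return {k: (X[a == k], y[a == k]) for k in range(K)}

# Tiny synthetic example: 6 samples, 2 features, K = 2 protected groups.
X = np.array([[0.1, 1.0], [0.4, -0.2], [1.2, 0.3], [-0.5, 0.8], [0.9, -1.1], [0.0, 0.5]])
a = np.array([0, 1, 0, 1, 1, 0])
y = np.array([1, 0, 1, 0, 1, 1])
theta = np.zeros(2)
groups = group_partition(X, a, y, K=2)
print(empirical_loss(theta, X, y), {k: len(Xy[1]) for k, Xy in groups.items()})
```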
The goal is to train models satisfying three key properties: **(1)**_Privacy_: the model parameters \(\theta\) are protected to prevent information leakage from the training data; **(2)**_Fairness_: the released model is unbiased towards any protected group, with theoretical guarantees; and **(3)**_Utility_: at the same time, the model's utility is maximized. The paper uses \(h_{\theta}\) and \(h_{\theta_{k}}\) to denote, respectively, the models minimizing the empirical loss \(\mathcal{L}(D)\) over the entire dataset and that minimizing \(\mathcal{L}(D_{k})\) using data from the corresponding group \(k\). **Differential Privacy.** Differential privacy (DP) [9] is a strong privacy concept ensuring that the likelihood of any outcome does not change significantly when a record is added or removed from a dataset. An adjacent dataset (\(D^{\prime}\)) of \(D\) is created by adding or removing a record from \(D\). Such a relation is denoted by \(D\sim D^{\prime}\). **Definition 2.1** ([9]).: _A mechanism \(\mathcal{M}\!:\!\mathcal{D}\!\rightarrow\!\mathcal{R}\) with domain \(\mathcal{D}\) and range \(\mathcal{R}\) satisfies \((\epsilon,\delta)\)-differential privacy, if, for any two adjacent inputs \(D\sim D^{\prime}\!\in\!\mathcal{D}\), and any subset of outputs \(R\subseteq\mathcal{R}\):_ \[\Pr[\mathcal{M}(D)\in R]\leq e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in R]+\delta.\] Parameter \(\epsilon>0\) describes the _privacy loss_ of the algorithm, with values close to \(0\) denoting strong privacy, while parameter \(\delta\in[0,1)\) captures the probability of failure of the algorithm to satisfy \(\epsilon\)-DP. The global sensitivity \(\Delta_{f}\) of a real-valued function \(f:\mathcal{D}\rightarrow\mathbb{R}\) is defined as the maximum amount by which \(f\) changes between two adjacent inputs: \(\Delta_{f}=\max_{D\sim D^{\prime}}\|f(D)-f(D^{\prime})\|\). In particular, the Gaussian mechanism, defined by \(\mathcal{M}(D)=f(D)+\mathcal{N}(0,\sigma^{2}\,\mathbf{I})\), where \(\mathcal{N}(0,\sigma^{2})\) is the Gaussian distribution with mean \(0\) and variance \(\sigma^{2}\), satisfies \((\epsilon,\delta)\)-DP for \(\sigma=\Delta_{f}\sqrt{2\log(1.25/\delta)}/\epsilon\). **Group Fairness and Certified Guarantee.** This paper considers a general notion of statistical fairness metrics, defined as follows: **Definition 2.2**.: _General Notion of Fairness. The fairness of a given model \(h_{\theta}(\cdot)\) is quantified by_ \[\texttt{Fair}(h_{\theta})=\max_{u,v\in[K]}|Pr(\hat{y}=1|a=u,e)-Pr(\hat{y}=1|a=v,e)|\,, \tag{1}\] _where \(\hat{y}\) is the model's prediction and \(e\) is a random event._ The fairness notion in Equation (1) captures several well-known fairness metrics, including demographic parity [23] (when \(e=\emptyset\)), equality of opportunity [14] (when \(e\) is the event "\(y=1\)"), and equality of odds [14] (when \(e=y\)). If a model \(h_{\theta}\) satisfies \(\texttt{Fair}(h_{\theta})\leq\tau\), for \(\tau\in[0,1]\), then we say that \(h_{\theta}\)_achieves certification of \(\tau\)-fairness_. Intuitively, as \(\tau\) decreases, the model's decision becomes more independent of the protected attribute, given the random event \(e\).
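The general fairness notion of Definition 2.2 can be estimated from a model's predictions as in the short sketch below; this is an illustrative implementation, not the paper's code, and the function name and arguments are assumptions.

```python
import numpy as np

def fairness_gap(y_pred, groups, event_mask=None):
    """Empirical version of Fair(h_theta) in Eq. (1): the largest pairwise gap in
    P(y_hat = 1 | a = k, e) across protected groups.

    y_pred     : array of binary predictions (0/1).
    groups     : array of protected-attribute values, one per prediction.
    event_mask : boolean array selecting the conditioning event e
                 (None -> demographic parity; y_true == 1 -> equality of opportunity).
    """
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    if event_mask is None:
        event_mask = np.ones_like(y_pred, dtype=bool)
    rates = []
    for k in np.unique(groups):
        sel = (groups == k) & event_mask
        if sel.any():
            rates.append(y_pred[sel].mean())   # P(y_hat = 1 | a = k, e)
    return max(rates) - min(rates)             # equals the max over pairs (u, v)

# Demographic parity gap:        fairness_gap(y_pred, a)
# Equality-of-opportunity gap:   fairness_gap(y_pred, a, event_mask=(y_true == 1))
```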
## 3 Certified Fairness with DP (FairDP) This section introduces FairDP, a mechanism that addresses two key objectives: **(1)** the realization of \((\epsilon,\delta)\)-differential privacy (DP) and **(2)** the provision of a provable \(\tau\)-fairness guarantee. Central to this approach is the use of a stochastic gradient descent (SGD) training process. However, developing FairDP poses a significant challenge in balancing the disparate impact of the DP-preserving noise on specialized model predictions for different protected groups while also ensuring \(\tau\)-fairness certification. Moreover, as the model parameters are updated under DP preservation during each training round, they range over an unbounded parameter space, adding complexity to achieving \(\tau\)-fairness guarantees. Finding a solution to these intertwined challenges is difficult since DP preservation and fairness can substantially reduce the model's performance, particularly without a carefully calibrated noise injection process. ### FairDP and Privacy Guarantee To overcome these challenges, FairDP relies on two key strategies. First, it restricts the model parameters within a finite space, enabling us to establish a tractable boundary for the model's DP-preserving, noise-influenced predictions. In a neural network, FairDP uses an \(l_{2}\)-norm clipping on the final layer weights of model \(h_{\theta}\), a technique also applicable to models prioritizing privacy. Second, rather than training a single model \(h_{\theta}\), FairDP trains a set of group-specific models \(\{h_{\theta_{k}}\}_{k=1}^{K}\) with each \(\theta_{k}\) being independently learned to minimize the loss \(\mathcal{L}(D_{k})\). This approach not only allows FairDP to preserve each group's privacy, enhancing control over noise injection per group, but also progressively aggregates group models to construct a (general) model \(h_{\theta}\). In doing so, FairDP effectively combines and propagates knowledge from each group to balance privacy, fairness, and utility. **FairDP.** A schematic illustration of the algorithm is shown in Figure 1 and its training process is outlined in Algorithm 1. Figure 1: FairDP: A schematic overview. Let us consider, without loss of generality, \(h_{\theta}\) as an \(L\)-layer neural network, where \(\theta=\{W_{1},\ldots,W_{L}\}\), \(W_{j}\) contains the weights at the \(j^{th}\) layer, and the activation of the last layer is a sigmoid function for a binary classification task. In each training round \(t\), FairDP clips the \(l_{2}\)-norm of the final layer's weights \(W_{L}^{(t-1)}\in\theta^{(t-1)}\) by \(M\) (line 4). For each group \(k\), FairDP initializes the group model parameters \(\theta_{k}^{(t-1)}\) using the clipped model parameters \(\theta^{(t-1)}\) (line 5). It then draws a batch of data points \(B_{k}\) from the corresponding group dataset \(D_{k}\) with probability \(q\). The \(l_{2}\)-norm of the gradient derived from each data point in the batch is then constrained by a predefined upper bound \(C\) (line 9). Next, Gaussian noise \(\mathcal{N}(0,C^{2}\sigma^{2}\mathbf{I}_{r})\) is added to the sum of clipped gradients \(\Delta\tilde{g}_{k}\) from all data points, ensuring DP preservation. Here, \(r\) denotes the number of model weights, \(\mathbf{I}_{r}\) is an identity matrix of size \(r\), and \(\sigma\) is a DP-preserving noise scale (line 11). The group model's parameters \(\theta_{k}\) are updated using DP-preserving gradients through standard SGD with a learning rate \(\eta\) (line 12). In order to construct the (general) model \(h_{\theta}\), the parameters of the group models are aggregated as: \(\theta^{(t)}=\frac{\theta_{1}^{(t)}+\cdots+\theta_{K}^{(t)}}{K}\) (line 14).
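The training round just described (clip the last-layer weights by \(M\), per-group subsampling with probability \(q\), per-example gradient clipping by \(C\), Gaussian noise of scale \(\sigma C\), an SGD step, and parameter averaging) can be sketched as follows. This is a minimal illustration using a logistic-regression "last layer" so that gradients are explicit; the normalization of the noisy gradient sum and other details are simplifications and may differ from Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_l2(v, bound):
    """Rescale v so that ||v||_2 <= bound (used for both the weights and per-example gradients)."""
    norm = np.linalg.norm(v)
    return v if norm <= bound else v * (bound / norm)

def fairdp_round(theta, group_data, M=1.0, C=1.0, sigma=1.0, q=0.1, eta=0.05):
    """One FairDP-style round for a logistic-regression 'last layer' theta.

    group_data : list of (X_k, y_k) arrays, one per protected group.
    Returns the aggregated parameters theta^(t) = mean_k theta_k^(t).
    """
    theta = clip_l2(theta, M)                      # line 4: clip last-layer weights by M
    group_thetas = []
    for X_k, y_k in group_data:                    # independent update per group
        mask = rng.random(len(X_k)) < q            # subsample a batch B_k with probability q
        Xb, yb = X_k[mask], y_k[mask]
        grad_sum = np.zeros_like(theta)
        for x_i, y_i in zip(Xb, yb):               # per-example clipped gradients (line 9)
            p_i = 1.0 / (1.0 + np.exp(-x_i @ theta))
            grad_sum += clip_l2((p_i - y_i) * x_i, C)
        noisy = grad_sum + rng.normal(0.0, sigma * C, size=theta.shape)   # line 11: add noise
        group_thetas.append(theta - eta * noisy / max(mask.sum(), 1))     # line 12: SGD step
    return np.mean(group_thetas, axis=0)           # line 14: aggregate group models

# Toy usage: two groups with 5 features each.
groups = [(rng.normal(size=(200, 5)), rng.integers(0, 2, 200)) for _ in range(2)]
theta = np.zeros(5)
for t in range(50):
    theta = fairdp_round(theta, groups)
```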
The aggregated model parameters \(\theta^{(t)}\) are used as the parameters for every group model in the next training round (line 5). These aggregation and propagation steps (lines 5 and 14) ensure that the final model parameters \(\theta^{(T)}\), where \(T\) is the number of update steps, are close to the parameters of every group, reducing bias towards any specific group and distilling knowledge from every group to improve model utility. The parameters \(\theta^{(T)}\) returned by the model satisfy \((\epsilon,\delta)\)-DP. **Theorem 3.1**.: _Algorithm 1 satisfies \((\epsilon,\delta)\)-DP where \(\epsilon\) is calculated by the moment accountant [1] given the sampling probability \(q\), \(T\) update steps, and the noise scale \(\sigma\)._ The proofs of all theorems are reported in the Appendix. ### Fairness Certification To derive fairness certification, we focus on the last layer (\(L^{th}\)) of the (general) model \(h_{\theta}\) since it directly produces the predictions. The \(L^{th}\) layer consists of an input \(z_{L-1}\in\mathbb{R}^{f}\) and an output \(z_{L}\in\mathbb{R}\), before the application of the sigmoid activation function. If \(\mathrm{sigmoid}(z_{L})>0.5\) (equivalent to \(z_{L}>0\)), then the prediction of the (general) model \(h_{\theta}\) is \(\hat{y}=1\); otherwise the prediction is \(\hat{y}=0\). Given a group model \(h_{\theta_{k}}\), DP-preserving noise injected into the clipped gradients \(\Delta\tilde{g}_{k}\) (line 11) transforms the gradients of the last layer, denoted as \(\mu_{k}\), into a random variable following a multivariate Gaussian distribution \(\mathcal{N}(\mu_{k};\sigma^{2}C^{2}\mathbf{I}_{f})\), as follows: \(\tilde{\mu}_{k}=\mu_{k}+\mathcal{N}(0;\sigma^{2}C^{2}\mathbf{I}_{f})\). As a result, the weights at the last layer for the group \(k\) at every step \(t\), denoted by \(W_{L,k}^{(t)}\), become a random variable with the following distribution \(\mathcal{N}(W_{L,k}^{(t-1)}-\eta\mu_{k};\eta^{2}\sigma^{2}C^{2}\mathbf{I}_{f})\). Notice that the weight \(W_{L}^{(t)}\) of the (general) model \(h_{\theta}\) is a linear combination of the \(K\) multivariate Gaussian random variables \(\{W_{L,k}^{(t)}\}_{k\in[K]}\). Based on the fact that a linear combination of multivariate Gaussian random variables is also multivariate Gaussian distributed [3], the weight \(W_{L}^{(t)}\) follows a multivariate Gaussian distribution, as follows: \[W_{L}^{(t)}\sim\mathcal{N}\Big{(}\frac{1}{K}\sum_{k=1}^{K}W_{L,k}^{(t-1)}-\frac{\eta}{K}\sum_{k=1}^{K}\mu_{k};\frac{\eta^{2}\sigma^{2}C^{2}}{K}\mathbf{I}_{f}\Big{)}\;\;\text{or}\;\;W_{L}^{(t)}\sim\mathcal{N}\Big{(}W^{(t-1)}-\eta\mu;\frac{\eta^{2}\sigma^{2}C^{2}}{K}\mathbf{I}_{f}\Big{)},\] where \(W^{(t-1)}=\frac{1}{K}\sum_{k=1}^{K}W_{L,k}^{(t-1)}\) and \(\mu=\frac{1}{K}\sum_{k=1}^{K}\mu_{k}\). Since the output \(z_{L}=W_{L}^{(t)\top}z_{L-1}\) is a linear combination of Gaussian random variables, \(z_{L}\) is a one-dimensional random variable following a Gaussian distribution: \[z_{L}\sim\mathcal{N}\Big{(}\langle W^{(t-1)}-\eta\mu,z_{L-1}\rangle;\frac{1}{K}\|z_{L-1}\|_{2}^{2}\eta^{2}\sigma^{2}C^{2}\Big{)}, \tag{2}\] where \(\langle\cdot,\cdot\rangle\) is the inner product between two vectors. As a result of Eq.
(2), the (general) model \(h_{\theta}\) predicts \(z_{L}\) derived from a data point \(x\in\mathbb{R}^{d}\) as a positive value with the probability \(Pr(\hat{y}=1|x)=Pr(z_{L}>0)=1-Pr(z_{L}\leq 0)\), where the probability \(Pr(z_{L}\leq 0)\) can be computed as follows: \[Pr(z_{L}\leq 0)=\frac{1}{2}\Bigg{[}1+\texttt{erf}\Big{(}\frac{-\langle W^{(t-1)} -\eta\mu,z_{L-1}\rangle\sqrt{K}}{\|z_{L-1}\|_{2}\eta\sigma C\sqrt{2}}\Big{)} \Bigg{]};\,\texttt{erf}(\cdot)\text{ is the {error function}} \tag{3}\] Eq. (3) follows the cumulative distribution function of one-dimension Gaussian distribution up to \(z_{L}=0\)2[3]. Therefore, we have the following Footnote 2: If the prediction process uses a threshold other than 0.5, this probability can still be computed by an inverse of the sigmoid function to find the corresponding value for the cumulative distribution. \[Pr(\hat{y}=1|x) =1-Pr(z_{L}\leq 0)=\frac{1}{2}+\frac{1}{2}\texttt{erf}\Big{(}\frac{ \langle W^{(t-1)}-\eta\mu,z_{L-1}\rangle\sqrt{K}}{\|z_{L-1}\|_{2}\eta\sigma C \sqrt{2}}\Big{)} \tag{4}\] \[=\frac{1}{2}+\frac{1}{2}\texttt{erf}\Big{(}\frac{\langle W^{(t-1) }-\eta\mu,z_{L-1}\rangle\|W^{(t-1)}-\eta\mu\|_{2}\|z_{L-1}\|_{2}\sqrt{K}}{\|W^ {(t-1)}-\eta\mu\|_{2}\|z_{L-1}\|_{2}\|z_{L-1}\|_{2}\eta\sigma C\sqrt{2}}\Big{)}, \tag{5}\] Since \(\frac{\langle W^{(t-1)}-\eta\mu,z_{L-1}\rangle}{\|W^{(t-1)}-\eta\mu\|_{2}\|z_ {L-1}\|_{2}}=\cos\phi\), with \(\phi\) being the angle between vectors \((W^{(t-1)}-\eta\mu)\) and \(z_{L-1}\), we have the following \[Pr(\hat{y}=1|x)=\frac{1}{2}+\frac{1}{2}\texttt{erf}\Big{(}\frac{\|W^{(t-1)}- \eta\mu\|_{2}\sqrt{K}}{\eta\sigma C\sqrt{2}}\cos\phi\Big{)}. \tag{6}\] From Eq. (6), \(\cos(\phi)\in[-1,1]\), and the monotonicity of the error function3, one can upper bound and lower bound the probability that \(\hat{y}=1\) given \(x\), as follows: Footnote 3: The error function is an increasing function, i.e. if \(x_{1}<x_{2}\), then \(\texttt{erf}(x_{1})<\texttt{erf}(x_{2})\). \[\frac{1}{2}-\frac{1}{2}\texttt{erf}\Bigg{(}\frac{\|W^{(t-1)}-\eta\mu\|_{2}}{ \eta\sigma C\sqrt{\frac{2}{K}}}\Bigg{)}\leq Pr(\hat{y}=1|x)\leq\frac{1}{2}+ \frac{1}{2}\texttt{erf}\Bigg{(}\frac{\|W^{(t-1)}-\eta\mu\|_{2}}{\eta\sigma C \sqrt{\frac{2}{K}}}\Bigg{)}. \tag{7}\] Since the weights \(W^{(t-1)}\) and gradients \(\mu\) are bounded by the clipping in FairDP (lines 4 and 9); that is \(\|W^{(t-1)}\|_{2}\leq M\) and \(\|\mu\|_{2}\leq\frac{m}{K}C\) where \(m=\sum_{k=1}^{K}|B_{k}|\) is the total size of all training batches across protected groups \(|B_{k}|\) in a training round, we can derive a \(\tau\)-fairness certification of \(\texttt{Fair}(h_{\theta})\) from Eq. (7) in the following theorem: **Theorem 3.2**.: _A general model \(h_{\theta}\) optimized by Algorithm 1 satisfies \(\tau\)- fairness certification with,_ \[\texttt{Fair}(h_{\theta})\leq\texttt{erf}\Big{(}\frac{(MK+\eta mC)\sqrt{K}}{K \eta\sigma C\sqrt{2}}\Big{)}. \tag{8}\] **Remark.** Theorem 3.2 provides an upper-bound on the \(\tau\)-fairness certification, revealing a novel insight into the trade-off among privacy, fairness, and utility. It is worth noting that the upper bound of fairness certification \(\tau\) decreases as the DP-preserving noise scale \(\sigma\) increases. As a result, stronger privacy (larger \(\sigma\) values) correspond to enhanced fairness certification (smaller \(\tau\) values), due to the increased randomness influencing the model's decisions. 
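As a rough numeric illustration of this remark, the bound of Eq. (8) can be evaluated for increasing noise scales; the hyper-parameter values below are illustrative and are not taken from the paper's experiments. With typical batch sizes \(m\) and small learning rates \(\eta\), the bound only becomes non-trivial at very large \(\sigma\).

```python
from math import erf, sqrt

def fairness_bound(M, K, eta, m, C, sigma):
    """Upper bound on Fair(h_theta) from Theorem 3.2 (Eq. 8)."""
    return erf((M * K + eta * m * C) * sqrt(K) / (K * eta * sigma * C * sqrt(2)))

# Illustrative hyper-parameters (hypothetical, chosen only to show the trend in sigma):
M, K, eta, m, C = 1.0, 2, 0.005, 256, 1.0
for sigma in (10.0, 100.0, 1000.0, 5000.0):
    print(f"sigma = {sigma:7.1f}  ->  tau <= {fairness_bound(M, K, eta, m, C, sigma):.4f}")
```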
While this theoretical impact of DP-preserving noise augments privacy and fairness assurances in our model, it could potentially diminish utility. Our theoretical observation is consistent with previous empirical studies [42, 25]. ### Tightening Fairness Certification While an important result, the \(\tau\)-fairness certification in Theorem 3.2 lacks sufficient tightness due to the batch size \(m\) and the learning rate \(\eta\) included in the error function. Larger \(m\) and smaller \(\eta\), which are common in typical model training, can result in a looser \(\tau\)-fairness bound. Therefore, in our second main contribution, we derive an empirical fairness bound that substantially tightens the \(\tau\)-fairness certification, enabling a pragmatic understanding of the privacy, fairness, and utility trade-offs. By leveraging Eq. (4), the empirical fairness bound can be calculated on-the-fly (i.e., during model training). For a specific group \(k\), at every update step \(t\), the probability \(Pr(\hat{y}=1|a=k,e)\) can be empirically computed as follows: \[P_{emp}(\hat{y}=1|a=k,e)=\frac{1}{n_{k,e}}\sum_{x\in D_{k,e}}P_{emp}(\hat{y}=1|x)=\frac{1}{n_{k,e}}\sum_{x\in D_{k,e}}P_{emp}(z_{L}>0) \tag{9}\] \[=\frac{1}{2}+\frac{1}{2n_{k,e}}\sum_{x\in D_{k,e}}\texttt{erf}\Big{(}\frac{\langle W^{(t-1)}-\eta\mu,z_{L-1}\rangle\sqrt{K}}{\|z_{L-1}\|_{2}\eta\sigma C\sqrt{2}}\Big{)}, \tag{10}\] where we use the real values of \(W^{(t-1)}\), \(\mu\), and \(z_{L-1}\) at every round \(t\), and \(n_{k,e}\) is the size of \(D_{k,e}\). The empirical fairness certificate can be generalized to different fairness metrics by considering the event \(e\). In fact, \(D_{k,e}=D_{k}\) for _demographic parity_, \(D_{k,e}\) is the set of data points in \(D_{k}\) with the positive label for _equality of opportunity_, and \(D_{k,e}\) is the set of data points in \(D_{k}\) with the positive label when computing the true positive rate or the negative label when computing the false positive rate for _equality of odds_. Finally, the empirical \(\tau\)-fairness certification of the general model \(h_{\theta}\) can be computed as \(\max_{u,v\in[K]}|P_{emp}(\hat{y}=1|a=u,e)-P_{emp}(\hat{y}=1|a=v,e)|\). **Proposition 3.3**.: _A model \(h_{\theta}\) optimized by Algorithm 1 satisfies empirical \(\tau_{emp}\)-fairness certification with \(\tau_{emp}=\max_{u,v\in[K]}|P_{emp}(\hat{y}=1|a=u,e)-P_{emp}(\hat{y}=1|a=v,e)|\)._ **Utility, Privacy, and Fairness Trade-offs.** FairDP is, to our knowledge, the first mechanism that simultaneously preserves DP and attains both theoretical and empirical certification of \(\tau\)-fairness, all without sacrificing model utility, as demonstrated in the experimental results below. Additionally, Theorem 3.2 and Proposition 3.3 provide an insightful theoretical understanding of the interplay between privacy, fairness, and utility. A stronger privacy guarantee (larger noise scale \(\sigma\)) tends to result in better fairness certification (smaller \(\tau\) value), even though it could potentially compromise model utility. **Remark.** Practitioners can leverage our results to more effectively balance the trade-offs among privacy, fairness, and utility by adaptively adjusting the training process of FairDP. For example, the application of optimizers like Adam [20] at the training onset may lead to enhanced model utility and convergence rate under identical DP protection.
As the model nears convergence, practitioners can transition to SGD to secure fairness certification, enabling us to overcome tight constraints on the weights of the last layer. Also, practitioners can adjust the hyper-parameter \(M\) to achieve better fairness. As shown in Theorem 3.2, the lower the value of \(M\), the fairer the model is. However, a small \(M\) could degrade model utility since it constrains the decision boundary in a smaller parameter space (see Figure 13, Appendix F, for details). ## 4 Experimental Results In this section, a comprehensive evaluation of FairDP and several baseline methods is conducted on various benchmark datasets. The evaluation primarily focuses on two aspects: **(1)** Examining the trade-off between model utility, privacy, and fairness, and **(2)** Assessing the accuracy of the fairness certification by comparing it with empirical results obtained from multiple statistical fairness metrics. ### Datasets, Metrics and Model Configurations The evaluation uses four datasets: the Adult and Abalone datasets from the UCI Machine Learning Repository [7], the Default of Credit Card Clients (Default-CCC) dataset [44] and the Law School Admissions (Lawschool) dataset [39]. Details of the datasets are presented in Table 1. Data preprocessing steps strictly follow those outlined in previous works such as [16; 32; 35]. On the Lawschool, Adult, and Abalone4 datasets, the model's performance is evaluated by _accuracy_ as in previous studies [10; 13; 18; 41]. In contrast, performance on the Default-CCC dataset is evaluated by _precision_ due to its heavy class imbalance. A _higher_ accuracy/precision indicates _better_ performance. _Demographic parity_[8], _equality of opportunity_, and _equality of odds_[15] are used as the primary fairness metrics. Footnote 4: We present the results on the Abalone dataset, which is smaller than the others, in the Appendix. We employ a multi-layer perceptron (MLP) with ReLU activation on the hidden layers and Sigmoid activation on the last layer for binary classification tasks. The baseline models use the Adam optimizer [20] during the complete training process, while FairDP uses Adam for the first \(90\%\) of the training steps and then switches to vanilla SGD for the remaining steps. For FairDP, we set the weight clipping hyper-parameter \(M\in[0.1,1.0]\) and initialize the learning rate \(\eta=0.02\) when using Adam, and then reduce it to \(\eta=0.005\) when switching to SGD. ### Baselines To thoroughly evaluate FairDP, we consider a variety of DP-preserving mechanisms, fairness training algorithms, and combinations of these as our baselines. This results in eight baselines, including a standard MLP, four existing mechanisms that either preserve DP or promote fairness, one adapted mechanism that achieves both DP and fairness, and two variants of FairDP. **Established Baselines.** We consider **DPSGD**[1], the functional mechanism **(FM)**[45], **DPSGDF**[41], and **FairSmooth**[18] as baselines. Both **DPSGD** and **FM** are well-established DP mechanisms with many applications in DP research [29; 31; 27]. **DPSGDF** is designed to alleviate the disparate impact of DPSGD by focusing on accuracy parity. **FairSmooth** is a state-of-the-art mechanism that assures group fairness by transforming the model \(h_{\theta}\) into a smooth classifier as \(\hat{h}_{\theta}=\mathbb{E}_{\nu}[h_{(\theta+\nu)}]\) where \(\nu\sim\mathcal{N}(0,\bar{\sigma}^{2})\) in the inference process, where \(\bar{\sigma}\) is the standard deviation of the Gaussian noise.
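The smoothed classifier used by FairSmooth can be approximated at inference time by Monte-Carlo averaging over parameter noise, as in the following sketch; this is an illustrative approximation of the expectation in the definition above, not the FairSmooth implementation, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_predict_proba(predict_proba, theta, X, sigma_bar=0.1, n_samples=100):
    """Monte-Carlo estimate of h_hat(x) = E_nu[ h_{theta+nu}(x) ] with nu ~ N(0, sigma_bar^2)
    added to every parameter at inference time (FairSmooth-style smoothing).

    predict_proba : callable (theta, X) -> array of P(y_hat = 1 | x).
    theta         : flat parameter vector of the trained model.
    """
    probs = np.zeros(len(X))
    for _ in range(n_samples):
        noisy_theta = theta + rng.normal(0.0, sigma_bar, size=theta.shape)
        probs += predict_proba(noisy_theta, X)
    return probs / n_samples

# Toy usage with a logistic-regression scorer.
def logreg_proba(theta, X):
    return 1.0 / (1.0 + np.exp(-X @ theta))

X = rng.normal(size=(10, 5))
theta = rng.normal(size=5)
y_hat = (smooth_predict_proba(logreg_proba, theta, X) > 0.5).astype(int)
```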
Moreover, we introduce a new baseline, **DPSGD-Smooth**, by applying **FairSmooth** to a logistic regression model trained by **DPSGD**. This gives rise to the only baseline offering both DP and fairness guarantees, which we employ for comparison against FairDP. **Variants of FairDP.** To examine how different features of FairDP affect model performance and fairness, we introduce two FairDP variants, called **FairFM** and **FairFM-Smooth**. **FairFM** (refer to Appendix A) distinguishes itself from FairDP by incorporating noise into the objective function as a pre-processing step to preserve DP. The mechanism trains a group-specific model parameterized by \(\theta_{k}\), relative to dataset \(D_{k}\), to optimize the objective \(\theta^{*}_{k}=\operatorname*{arg\,min}_{\theta_{k}}\frac{1}{|D_{k}|}\sum_{(x_{i},y_{i})\in D_{k}}\ell(h_{\theta_{k}}(x_{i}),y_{i})\). In the preprocessing step, the objective function of each group is approximated using a second-order Taylor's expansion [4], and the corresponding polynomial form \(\mathcal{L}_{k}(\theta_{k})=\theta^{\top}_{k}\lambda^{(2)}_{k}\theta_{k}+\theta^{\top}_{k}\lambda^{(1)}_{k}+\lambda^{(0)}_{k}\) is derived, where \(\lambda^{(j)}_{k},j=0,1,2\) are the coefficients of order \(j\) associated with group \(k\). Then, Laplace noise [9] is added to the coefficients to derive the DP-preserving objective function \(\tilde{\mathcal{L}}_{k}(\theta_{k})\), and each group's perturbed objective function is optimized using SGD. The **FairFM-Smooth** mechanism is a variant of FairFM that applies the FairSmooth method [18] to the model trained by FairFM during the inference process. The experiments use a range of privacy budgets across different datasets. For the Adult dataset, we set \(\epsilon\in[0.1,2.0]\); for the other datasets, we use a wider range with \(\epsilon\in[0.5,10.0]\). Although DP is celebrated for using small values of \(\epsilon\), most current deployments5 report \(\epsilon\) larger than \(1\), with many of them using \(\epsilon\) larger than 5 or 10. Therefore, since fairness is affected by privacy loss, we believe our study is important to highlight and justify the trade-offs between privacy and fairness within this privacy loss regime. Statistical tests used are two-tailed t-tests. Footnote 5: [https://desfontain.es/privacy/real-world-differential-privacy.html](https://desfontain.es/privacy/real-world-differential-privacy.html) \begin{table} \begin{tabular}{l c c c c c} \hline \hline Data set & \# data points & \# features & Protected Attribute & \# positive label & Size of minor group \\ \hline Lawschool & 86,022 & 33 & Race & 23,243 & 15,311 \\ Default-CCC & 30,000 & 89 & Gender & 6,636 & 11,460 \\ Adult & 48,842 & 41 & Gender & 11,687 & 16,192 \\ Abalone & 1,418 & 7 & Gender & 915 & 654 \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation Datasets. ### Results **Utility, Privacy, and Fairness Trade-offs.** Figures 2 and 5 (Appendix F) show the performance of each algorithm w.r.t. model utility, fairness, and privacy. In Figure 2, points positioned closer to the bottom-right corner denote a superior balance among model performance (characterized by higher accuracy/precision), privacy (illustrated by strict DP protection), and fairness (represented by lower empirical values of statistical fairness metrics). Darker and smaller points indicate the application of smaller privacy budgets, translating to stricter DP protection, and the inverse holds as well.
Figure 2: Trade-off among model performance, DP-preservation, and fairness. Remarkably, our proposed FairDP consistently outperforms all baselines across all datasets, striking a balance among model utility, privacy, and fairness. For instance, in the Lawschool dataset, FairDP attains lower demographic parity (\(0.149\) vs \(0.2\) in DPSGD, \(p\)-value = \(2.53e^{-9}\)) with a small degradation in model utility (83.6% vs 87.1% in DPSGD) and similar DP protection (\(\epsilon\in[0.5,10.0]\)). Despite having better Equality of Opportunity, DPSGD-Smooth suffers an 18.1% performance drop compared to FairDP (\(p=4.85e^{-5}\)). Compared to the best fairness algorithm, FairFM-Smooth, FairDP achieves superior accuracy (83.6% vs. 69.5%, \(p\)-value = \(2.52e^{-9}\)) and highly competitive demographic parity, enhancing fairness under stringent DP protection. Similar findings can be observed in the Adult, Default-CCC, and Abalone datasets (see Appendix F). **Remark.** The promising results of FairDP can be attributed to its unique approach of controlling the amount of DP-preserving noise injected into each group, enforcing a constraint on the decision boundary, and fusing the knowledge learned from all groups together at each training step. This approach fundamentally differs from existing methods, leading to the superior performance of FairDP. Another noteworthy observation is that treating fairness as a constraint, as in the case of DPSGDF, does not consistently improve the trade-offs among model utility, privacy, and fairness. For instance, in the Lawschool dataset, DPSGDF is less fair than the original DPSGD in terms of demographic parity (\(0.23\) compared with \(0.2\), \(p=3.84e^{-7}\)). A similar effect is observed in the Abalone dataset (Figure 5, Appendix F). This can be attributed to the fact that handling all groups simultaneously, within the noisy SGD process, can obscure the information from minority groups, leading to a degradation in fairness. Also, the fairness constraints, employed as penalty functions, have an impact on the optimization of the model, leading to a deterioration in its utility. These issues can be mitigated by separating the DP-preserving training process from the methods developed to attain fairness during inference, as in the case of DPSGD-Smooth and FairFM-Smooth. These methods achieve better \(\tau\)-fairness with relatively competitive model utility under equivalent DP protection. However, this approach does not balance the trade-offs among model utility, privacy, and fairness as effectively as FairDP does. _These insights highlight the need to explore novel approaches to seamlessly integrate DP preservation and fairness rather than treating them as independent (constrained) components._ FairDP _represents a pioneering step in this direction._ **Tightness of the Fairness Certification.** Figures 3 and 7 to 9 (Appendix F) show the empirical fairness results and the certification value \(\tau_{emp}\). In most instances for the Lawschool, Adult, and Default-CCC datasets, our empirical certifications are substantially lower than the empirical fairness values of the baselines, particularly for DP-preserving mechanisms, without a significant drop in model performance.
In particular, in the Default-CCC dataset, our empirical certifications are significantly smaller than the empirical fairness results of the state-of-the-art FairSmooth and DPSGDF (\(p=3.07e^{-8}\)), while maintaining a small gap with the empirical fairness results of FairDP (\(<5\%\) deviation). This illustrates the tightness of our certification of \(\tau_{emp}\)-fairness across datasets and privacy budgets, further strengthening the advantages of FairDP in both theoretical guarantees and empirical results compared with existing baselines. Figure 3: Tightness of fairness certification compared to empirical results of demographic parity and equality of opportunity for different privacy budgets. **Imbalanced Protected Group.** Practitioners can tune FairDP to find an appropriate setting that balances the level of DP protection with the desired level of fairness and model utility. Figure 4: The trade-off among utility, privacy, and fairness for various \(\rho\) values on the Lawschool dataset. Figures 4 and 10 through 12 (Appendix F) illustrate the effect of the ratio \(\rho\) between the sizes of the major and minor group datasets: \(\rho=(\max_{a\in[K]}n_{a})/(\min_{b\in[K]}n_{b})\). For a specific \(\rho\), we randomly sample data points from the majority group, reducing the size of the major group training set until the desired \(\rho\) is reached, while the test sets remain unchanged for all groups. In general, increasing \(\rho\) values lead to a greater number of data points from the majority group being utilized for training the model, thereby improving its accuracy. However, the effect on the model's fairness across different fairness metrics is not consistently observed. Nonetheless, our theoretical guarantee remains applicable across various degrees of dataset imbalance. Lower privacy budgets (indicating stronger privacy guarantees) contribute to improved fairness in the model's decisions, thereby reinforcing the theoretical assurances provided by FairDP. ## 5 Conclusion This paper introduced FairDP, a novel mechanism that, for the first time, ensures both differential privacy and certified group fairness, while sustaining superior model performance. FairDP provides a comprehensive understanding of the influence of noise on model fairness. Besides the theoretical analysis, the paper examined the empirical certification bounds and showed that FairDP offers enhanced trade-offs among model utility, privacy, and fairness, outperforming an array of baselines.
2304.01221
Real-Time Tilt Undersampling Optimization during Electron Tomography of Beam Sensitive Samples using Golden Ratio Scanning and RECAST3D
Electron tomography is a widely used technique for 3D structural analysis of nanomaterials, but it can cause damage to samples due to high electron doses and long exposure times. To minimize such damage, researchers often reduce beam exposure by acquiring fewer projections through tilt undersampling. However, this approach can also introduce reconstruction artifacts due to insufficient sampling. Therefore, it is important to determine the optimal number of projections that minimizes both beam exposure and undersampling artifacts for accurate reconstructions of beam-sensitive samples. Current methods for determining this optimal number of projections involve acquiring and post-processing multiple reconstructions with different numbers of projections, which can be time-consuming and requires multiple samples due to sample damage. To improve this process, we propose a protocol that combines golden ratio scanning and quasi-3D reconstruction to estimate the optimal number of projections in real-time during a single acquisition. This protocol was validated using simulated and realistic nanoparticles, and was successfully applied to reconstruct two beam-sensitive metal-organic framework complexes.
Timothy M. Craig, Ajinkya A Kadu, Kees Joost Batenburg, Sara Bals
2023-04-01T20:26:01Z
http://arxiv.org/abs/2304.01221v1
Real-Time Tilt Undersampling Optimization during Electron Tomography of Beam Sensitive Samples using Golden Ratio Scanning and RECAST3D \({}^{\dagger}\) ###### Abstract Electron tomography is a widely used technique for 3D structural analysis of nanomaterials, but it can cause damage to samples due to high electron doses and long exposure times. To minimize such damage, researchers often reduce beam exposure by acquiring fewer projections through tilt undersampling. However, this approach can also introduce reconstruction artifacts due to insufficient sampling. Therefore, it is important to determine the optimal number of projections that minimizes both beam exposure and undersampling artifacts for accurate reconstructions of beam-sensitive samples. Current methods for determining this optimal number of projections involve acquiring and post-processing multiple reconstructions with different numbers of projections, which can be time-consuming and requires multiple samples due to sample damage. To improve this process, we propose a protocol that combines golden ratio scanning and quasi-3D reconstruction to estimate the optimal number of projections in real-time during a single acquisition. This protocol was validated using simulated and realistic nanoparticles, and was successfully applied to reconstruct two beam-sensitive metal-organic framework complexes. ## 1 Introduction Nanomaterials are materials with at least one dimension in the nanoscale, usually ranging from 1 to 100 nanometers [34]. They have unique physical, chemical, and spectroscopic properties compared to their bulk counterparts, which can be used for various commercial, industrial, and medicinal purposes [3, 27]. These properties are largely influenced by the three-dimensional (3D) structure and morphology of the nanomaterial. Therefore, it is essential to accurately characterize the nanomaterial's structure to understand its behavior and predict its potential applications [8, 11, 34]. High-resolution imaging techniques such as transmission electron microscopy (TEM) and annular dark-field scanning transmission electron microscopy (ADF-STEM) can provide insights into the structure of nanomaterials [13, 38, 42]. However, these techniques only produce two-dimensional (2D) projections, which may not accurately represent the true 3D structure of the material. To overcome this limitation, techniques such as electron tomography (ET) have been developed to enable the three-dimensional characterization of nanomaterials [25, 30, 37]. In a typical ET procedure, a series of 2D images are obtained at incremental angles (\(1-3^{\circ}\)) over a range of approximately \(\pm 70-80^{\circ}\)[4, 20, 33]. These images are then aligned and processed using reconstruction algorithms such as filtered back projection (FBP) [15], simultaneous iterative reconstruction technique (SIRT) [18], or expectation maximization (EM) [26] to generate a 3D volume of the nanomaterial. Overall, the use of ET has significantly improved the scientific understanding of nanomaterials and their potential applications. Exposure to the electron beam during the tomographic acquisition of nanomaterials can cause significant deformation due to various factors, including radiolysis, atomic displacement, heating, charge accumulation, and knock-on effects [2, 12, 23, 36, 41]. This electron beam-induced damage has been observed in a wide range of nanomaterials, including silicates, zeolites, and metal-organic frameworks (MOFs) [23, 28, 39, 40, 50]. 
Prior studies have investigated various approaches to minimize beam damage, with the most reliable method being limiting beam exposure [16]. Low-dose methods, which employ acquisition regimes with high signal-to-noise ratios [29], such as ptychography [9] or integrated differential phase contrast [28] microscopy, allow for the collection of the same signal with significantly less exposure. However, these techniques require specialized detectors and setups that may not be readily available. Tracking and focusing optimization during acquisition have also been used to reduce beam exposure. For example, in 2018, a twofold reduction in beam exposure was achieved by accelerating the high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) acquisition to a few minutes through simultaneous scanning, tracking, and focusing [47, 49]. Another commonly applied technique to reduce beam damage is undersampling. Undersampling reduces beam exposure by reducing (i) the information encoded into an image (image undersampling) [10] or (ii) the number of images collected (tilt undersampling) [14]. However, undersampling also has its own challenges, typically the introduction of new artefacts. This has been extensively studied for various undersampling schemes by Vanrompay et al. [46]. For instance, tilt undersampling has been shown to amplify missing wedge artefacts by decreasing the angular range (e.g., from 70\({}^{\circ}\) to 15\({}^{\circ}\)) [4, 17, 46]. Whilst these artefacts can be compensated for using algorithms such as discrete algebraic reconstruction tomography (DART), these algorithms assume prior knowledge of the sample's properties, which may not always be applicable.[5, 20, 33] The missing wedge is an issue that arises in ET when only projections over a limited angular range are collected. This can be mitigated during undersampling by evenly distributing the few images across the entire available annular range, thus decreasing the sampling density rather than the sampling range.[46] For instance, both an acquisition at \(\pm 20^{\circ}\) in \(2^{\circ}\) increments and \(\pm 70^{\circ}\) in \(7^{\circ}\) increments have 21 projections. However, the missing wedge is minimized in the latter case, where the images are more evenly distributed. Nevertheless, even in this case, significantly decreasing the sampling density can result in artefacts in the reconstructed image (Figure 1). Figure 1: Phantom Shepp-Logan (a) undersampled with 21 projections by reducing the annular range (\(\pm 20^{\circ}\), \(2^{\circ}\) step, 21 projections) (b) and the sampling density (\(\pm 70^{\circ}\), \(7^{\circ}\) step, 21 projections) (c). By decreasing the sampling density further (\(\pm 70^{\circ}\), \(70^{\circ}\) step, three projections), undersampling artefacts become apparent (d). Therefore, it is important to find a balance between minimizing beam damage and avoiding undersampling artefacts in order to ensure the quality of the reconstruction. When assessing the quality of a reconstruction, researchers often compare it to a reference structure collected using a standard \(2-3^{\circ}\) tilt increment.[46] However, this approach is not suitable for optimizing tilt undersampling of beam-sensitive samples. Firstly, it is not guaranteed that the reference, collected under standard imaging conditions, does not contain beam damage artefacts.
Secondly, during incremental scanning (IS), the sampling density remains constant while the sampling range increases with each new projection (Figure 2a). Therefore, prematurely ending the acquisition will result in a large missing wedge in the tilt-series. In order to find the optimal number of projections, the 3D reconstructions of multiple tilt-series collected with different numbers of projections should be compared. This process is time-consuming, as it involves the microscopist alternating between the microscope and post-processing steps at a workstation.[48] Furthermore, when multiple acquisitions are performed on the same particle, the damage induced in previous acquisitions is often evident in the new tilt-series. Therefore, it is necessary to perform each acquisition on a new particle, making it difficult to directly compare the resulting 3D reconstructions. In such cases, the microscopist must rely on a qualitative judgement to determine the optimal number of projections. The requirement for multiple tilt-series in tomographic acquisitions can be mitigated using an acquisition scheme with a semi-constant sampling range and a sampling density that increases as new projections are added. Golden Ratio Scanning (GRS) proposed by Kaestner et al. satisfies this requirement for 4D neutron microtomography.[24] In GRS, the tilt angle \(\theta\) in radians is given by \[\theta=\Big{(}i\,\alpha\,\frac{1+\sqrt{5}}{2}\Big{)}\bmod\alpha-\frac{\alpha}{2}, \tag{1}\] where \(i\) is the image index, \(\bmod\) is the modulo operation, and \(\alpha\) is the annular range in radians. In GRS, the majority of the annular range is occupied within the first \(3-4\) projections, and subsequent projections increase the sampling density (Figure 2b). Therefore, acquisition can be terminated early without significant missing wedge artefacts.[24] Figure 2: First 10 projections acquired using IS (\(\pm 70^{\circ}\), \(10^{\circ}\) increment) (a) and GRS (\(\pm 70^{\circ}\)) (b). The collectable missing wedge due to early termination of acquisition (grey) and the inaccessible missing wedge due to the holder geometry (black) are shown for both acquisition schemes. In practice, however, it is impossible to know how many projections are required to minimize undersampling and beam damage artefacts without knowledge of the 3D structure during acquisition. Quasi-3D reconstruction allows for real-time viewing of 3D data by limiting the computational requirements of reconstruction. This was achieved using the software RECAST3D (Reconstruction of Arbitrary Slices in Tomography), which reduces the computational burden by reconstructing only a few arbitrary slices at a time using the computationally efficient FBP algorithm.[6] Here, we present a protocol, referred to as _Tilt Undersampling Optimized Tomographic Acquisition_ (TUOTA), that combines GRS with real-time analysis of quasi-3D reconstructions provided by RECAST3D to determine the optimal number of projections for beam-sensitive samples. TUOTA was tested using simulated and experimental datasets, and was applied to two beam-sensitive MOF nanoparticle (NP) composites: NU-1000 encapsulating an Au bipyramidal nanoparticle (Au@NU-1000) and ZIF-8 encapsulating an Au/Pd nanorod (Au/Pd@ZIF-8). ## 2 Method ### Tilt Undersampling Optimized Tomographic Acquisition The TUOTA protocol for optimizing the number of projections consists of the following stages: 1. Obtaining projections using GRS with an annular range of \(\pm 70^{\circ}\) (\(\alpha=7\pi/9\)) in real-time at the microscope, with acquisition ending at the discretion of the microscopist. 2. Processing the projections using RECAST3D to reconstruct three slices using FBP, with slices being updated as new projections are acquired. 3. Evaluating slice quality quantitatively based on the number of projections used. 4. Conducting a final reconstruction using the EM algorithm with the optimal number of projections determined in step 3.
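As an illustration of step 1, the golden-ratio tilt angles of Eq. (1) can be generated as in the following minimal sketch (working in degrees rather than radians for readability); this is not the acquisition software used in this work, and the function name is hypothetical.

```python
import numpy as np

def grs_angles(n_projections, annular_range_deg=140.0):
    """Golden Ratio Scanning tilt angles following Eq. (1).

    annular_range_deg : full annular range alpha (e.g. 140 for +/-70 degrees).
    Returns angles in degrees, centered on zero.
    """
    phi = (1.0 + np.sqrt(5.0)) / 2.0                     # golden ratio
    i = np.arange(1, n_projections + 1)                  # image index
    return (i * annular_range_deg * phi) % annular_range_deg - annular_range_deg / 2.0

# First 10 GRS angles over +/-70 degrees (compare with Figure 2b):
print(np.round(grs_angles(10), 1))
```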
#### 2.1.1 Quantification. In order to determine the optimal number of projections, the reconstruction quality of the slices computed in step 2 is quantitatively assessed in step 3. A reliable approach to assessing reconstruction quality is to compare it to a reliable reference standard, which is commonly computed as the shape error (\(E_{s}\)), the normalized root-mean-squared difference between the Otsu threshold [31] binarized reconstruction (\(V_{\text{rec}}\)) and reference (\(V_{\text{ref}}\)), defined as \[E_{s}=100\ \frac{\|V_{\text{ref}}-V_{\text{rec}}\|}{\|V_{\text{ref}}\|}, \tag{2}\] where \(\|\cdot\|\) represents the Euclidean norm, _i.e.,_ \(\|x\|=\sqrt{\sum_{i=1}^{n}x_{i}^{2}}\). The reference is typically the sample collected using standard tomographic acquisition, which is assumed to be a reliable representation of the true 3D structure. However, due to beam damage, this assumption may not always hold. Additionally, subsequent acquisitions of the same particle may differ from the reference solely due to beam damage induced during the reference acquisition, making it impossible to obtain a reliable reference structure. In this case, the only information available from RECAST3D is three arbitrary slices. For convenience, in this paper, all calculations were determined from the \(xy\), \(yz\), and \(xz\) orthoslices passing through the origin. If the positions of these orthoslices are fixed for the acquisition duration, the change in these orthoslices can be observed as a function of the number of projections. In ET, as more projections are provided, the reconstruction converges towards a 3D structure, and each projection becomes a smaller portion of the complete set of \(N\) projections. Therefore, each projection contributes less to the reconstruction as more projections are added, and the difference between the 3D reconstruction with \(N\) and \(N-1\) projections tends towards zero. Applied to the RECAST3D orthoslices, a measure for the convergence can be obtained as a function of \(N\) by finding the normalized root-mean-squared difference between the set of orthoslices (\(O_{N}\)) and the orthoslices obtained with \(N-1\) projections (\(O_{N-1}\)). This measure is \[\text{SROD}(N)=\frac{\|O_{N}-O_{N-1}\|}{\|O_{N}\|}. \tag{3}\] This metric, referred to as the self-referential orthoslice difference (SROD), can be obtained solely from the RECAST3D orthoslices without a known accurate reference structure. The lower the SROD, the more closely \(O_{N-1}\) and \(O_{N}\) resemble each other. Sufficient convergence for reconstruction is achieved when the SROD is lower than a user-defined threshold value. For this work, an arbitrary threshold of 0.1 was applied. Higher or lower threshold values may be utilized depending on the desired frequency resolution. The SROD metric only monitors convergence and undersampling. For beam-sensitive samples, beam-induced artifacts may reduce the reconstruction quality before the structure is adequately sampled. To monitor this, the signal-to-noise ratio (SNR) of each set of orthoslices is measured as a function of the number of projections, _i.e.,_ \[\text{SNR}(N)=20\log_{10}\left(\frac{\mu(O_{N})}{\sigma(O_{N})}\right), \tag{4}\] where \(\mu\) is the average and \(\sigma\) is the standard deviation of the signal over the pixels in the set of orthoslices \(O_{N}\). It is noted that the SNR typically increases as more projections are added to the tilt-series and tends to decrease in electron microscopy images as a response to beam damage [21]. The optimum number of projections for a given sample is therefore determined by analyzing the SROD and SNR curves as a function of the number of projections, balancing the convergence of the SROD against the decline in SNR caused by beam damage. This allows the optimum to be determined without the need for a reliable reference structure. Refer to the supporting information regarding its implementation in code.
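A minimal sketch of the SROD and SNR metrics of Eqs. (3) and (4), together with the threshold-based convergence check, is given below. It assumes each set of orthoslices is available as a stacked NumPy array and is intended only as an illustration of the quantities defined above, not as the implementation referred to in the supporting information.

```python
import numpy as np

def srod(O_new, O_prev):
    """Self-referential orthoslice difference, Eq. (3): ||O_N - O_{N-1}|| / ||O_N||."""
    return np.linalg.norm(O_new - O_prev) / np.linalg.norm(O_new)

def snr_db(O):
    """Signal-to-noise ratio of a set of orthoslices, Eq. (4), in dB."""
    return 20.0 * np.log10(np.mean(O) / np.std(O))

def tuota_metrics(orthoslice_series, threshold=0.1):
    """Track SROD and SNR for a sequence of orthoslice sets O_1, O_2, ...
    (each a stacked array of the xy, yz and xz slices) and report the first
    projection count at which the SROD drops below the threshold."""
    srods, snrs, converged_at = [], [], None
    for n in range(1, len(orthoslice_series)):
        s = srod(orthoslice_series[n], orthoslice_series[n - 1])
        srods.append(s)
        snrs.append(snr_db(orthoslice_series[n]))
        if converged_at is None and s < threshold:
            converged_at = n + 1          # index n corresponds to n+1 projections here
    return np.array(srods), np.array(snrs), converged_at
```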
#### 2.1.2 Post-processing. The optimal number of projections for the tilt-series is used to reconstruct the complete 3D volume using the MATLAB ASTRA implementation of the EM algorithm [32, 43, 44]. In contrast, RECAST3D only provides orthoslices using the FBP algorithm, which has been shown to perform poorly when the tilt-series is undersampled or contains a missing wedge [43]. Therefore, we prefer the EM algorithm for complete volume reconstruction. Figure 3: Procedure to optimize the number of projections during 3D reconstruction using TUOTA. Steps 1-3 are performed at the microscope, while the final step can be performed at the compute station (e.g. high-performance computer or server). ### Method evaluation To evaluate the validity of TUOTA, we compared the suggested optimum reconstructions to a standardized method for evaluating reconstruction accuracy, \(E_{s}\). However, \(E_{s}\) is not reliable for beam-sensitive samples because the reference sample is unreliable. Therefore, we performed simulated beam damage experiments in which a phantom was used as an accurate reference of the initial structure before beam damage was applied. For beam-insensitive samples, it can be assumed that standard IS tomography provides a reasonably accurate reconstruction that can be used as a reference. To evaluate the proposed acquisition procedure and the reliability of TUOTA, we compared the TUOTA- and \(E_{s}\)-determined optimum number of projections for both simulated and experimental structures. #### 2.2.1 Simulations and Experimental Acquisition. Sample data were obtained through both microscopy simulations and experiments. The simulations were performed by iteratively deforming an original 3D structure using a Gaussian filter and a binomial probability mask implemented in MATLAB. After each iteration, the entire volume was saved. Tilt-series were simulated by forward projecting the structure after each iteration of beam damage. As a result, the image corresponding to the first angle in the tilt-series was simulated by the forward projection of the structure after one iteration of beam damage, and the image for the second angle was obtained by forward projecting the structure after two iterations of beam damage. This process was repeated until images for all angles were obtained. The magnitude of the beam damage was controlled by two deformation parameters (\(\beta_{1},\beta_{2}\)).
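The following sketch illustrates one possible reading of this simulation procedure: each damage iteration applies a binomial voxel mask and a Gaussian blur, and each projection is computed from the progressively damaged volume. The mapping of \(\beta_{1}\) and \(\beta_{2}\) to the blur width and mask probability, as well as the toy rotate-and-sum projector, are assumptions made for illustration; the actual deformation model and the ASTRA-based forward projection are described in the supporting information.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

rng = np.random.default_rng(0)

def deform_once(volume, beta1=0.3, beta2=0.03):
    """One illustrative beam-damage iteration: randomly erode voxels with a binomial
    mask (probability ~ beta2) and blur with a Gaussian filter (width ~ beta1)."""
    mask = rng.binomial(1, 1.0 - beta2, size=volume.shape)   # keep most voxels
    return gaussian_filter(volume * mask, sigma=beta1)

def forward_project(volume, angle_deg):
    """Toy parallel-beam projection: rotate about one axis and sum along another."""
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=1)

def simulate_damaged_tilt_series(volume, angles_deg, beta1=0.3, beta2=0.03):
    """Projection i is taken after i+1 damage iterations, as in Section 2.2.1."""
    projections = []
    for angle in angles_deg:
        volume = deform_once(volume, beta1, beta2)   # damage accumulates with every exposure
        projections.append(forward_project(volume, angle))
    return np.stack(projections)
```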
See the supporting information for more details on the beam damage simulations. Simulations were performed for a nanoparticle with a hollow, cage-like structure, which we previously investigated experimentally [19]. We used four different deformation settings to simulate the beam damage (Figure 4, Movie S1), ranging from no deformation (NC-1) to severe deformation (NC-4). Beam damage in the hollow nanocages manifested as a slowly opening cavity in the structure and thinning of the cage walls. In addition, three experimental samples were characterized (Figure 1): an Au/Pd nanostar (NS) and two NP@MOF composites. The first composite was a Zn(2\(-\)methylimidazole)2 MOF ZIF-8 containing an Au/Pd nanorod (Au/Pd@ZIF-8), and the second composite was a NU-1000 MOF consisting of Zr\({}_{6}\)O\({}_{4}\)(OH)4 clusters and a 1,3,6,8\(-\)Tetra(4\(-\)carboxylphenyl) pyrene ligand encapsulating a bipyramidal Au NP (Au@Nu-1000). These samples were suspended in ethanol and drop-cast onto a carbon-coated Cu transmission electron microscopy (TEM) grid. Imaging was performed using a Thermo Fisher Scientific Tecnai Osiris TEM with an acceleration voltage of 200 kV, a screen current of 50 pA, and an imaging/scanning dwell time of 3.06/7.96 \(\mu\)s. Au/Pd NS samples were collected using HAADF-STEM, and MOF complexes were collected using ADF-STEM. Sample tracking and focusing were performed manually. During RECAST3D imaging, projection alignment was performed by centering the sample, masking the background with an Otsu threshold, and then aligning the projections in chronological order using intensity correlation. Post-processing reconstruction and alignment were performed using the ASTRA toolbox in MATLAB. Before post-processing, the tilt-series was sorted into annular order (lowest to highest, _e.g._, \(-\)70\({}^{\circ}\) to 70\({}^{\circ}\)) and projection alignments were performed using intensity correlation. For comparison, tilt-series were acquired with both IS and GRS. The simulated and acquired tilt-series throughout this work are summarized in Table 2. Approximately 70 projections of GRS acquisition were acquired regardless of the proposed termination point for comparison with the standard protocol of IS with a 2\({}^{\circ}\) increment (71 projections). IS acquisitions were collected with a tilt increment of 2\({}^{\circ}\), 5\({}^{\circ}\), 7\({}^{\circ}\), 10\({}^{\circ}\), 14\({}^{\circ}\), 35\({}^{\circ}\), or 70\({}^{\circ}\). These are the only integer tilt increments that result in all projections being equally spaced between \(\pm\)70\({}^{\circ}\). ## 3 Results ### Simulated evaluation of optimization protocol #### 3.1.1 Incremental scanning. The traditional approach to optimizing the number of projections is to vary the tilt increment during IS scanning. Therefore, to determine the optimum number of projections using IS for NC-1 to NC-4, we simulated tilt-series with tilt increments of 2\({}^{\circ}\), 5\({}^{\circ}\), 7\({}^{\circ}\), 10\({}^{\circ}\), 14\({}^{\circ}\), 35\({}^{\circ}\), and 70\({}^{\circ}\) (71, 29, 21, 15, 11, 5, 3 projections, respectively). We measured \(E_{s}\) for each tilt-series by comparing them to a ground truth structure, and the minimum \(E_{s}\) was obtained where the 3D reconstruction most accurately reflected the ground truth. 
As more beam damage was simulated from NC-1 to NC-4, the minimum \(E_{s}\) value was obtained with fewer projections, but the \(E_{s}\) value at the optimum number of projections increased (\(E_{s}\)/Projections: 5.1/11, 6.6/11, 8.7/5, 9.1/5) (Figure 4(a)-b, Figure 4(b)). While it is possible to estimate the optimum number of projections by simulating seven different tilt-series per sample, this process is infeasible for experimental beam damage analysis. An alternative method would be to take a single tilt-series using a fixed tilt increment and collect projections until an optimum is obtained. Therefore, we collected a tilt-series for NC-1 to NC-4 for IS with a 2\({}^{\circ}\) increment while monitoring the \(E_{s}\) as a function of the number of projections (Figure 4(c)). As more beam damage was simulated from NC-1 to NC-4, the \(E_{s}\) optimum was obtained earlier with an increased value (Figure 4(b)), indicating lower quality reconstructions (\(E_{s}\)/Projections: 6.6/71, 13.6/71, 30.2/68, 44.3/52). The same trend was observed for variable tilt increments, but the optimum number of projections occurred with far more projections compared to the results displayed in Figure 4(b). The late optimal number of projections in Figure 4(d) occurs because, until the final projection, each projection is filling a missing wedge in the tilt-series. In contrast, the projections are spread across the entire annular range by taking multiple tilt-series with a variable tilt increment. Therefore, when reducing the number of projections during an IS acquisition, the reduction in beam damage artifacts is counteracted by increased missing wedge artifacts. With NC-4, this is visually apparent. At the \(E_{s}\) optimum of 52 projections, a missing wedge artifact is compensated for when adding new projections, but adding extra projections increases the beam damage artifact (Figure B2). Figure 4: Simulated beam damage on four nanocage samples with different simulation settings. A tilt-series is acquired for each sample by forward projecting after each iteration of simulated deformation. Each nanocage is shown after 0, 17, 34, 51 and 71 iterations of deformation. Visual inspection shows that, when the optimum is found from multiple tilt-series (Figure 5e-i), undersampling artifacts are apparent in samples NC-2 to NC-4. However, when the optimum is found from a single typical acquisition (IS, 2\({}^{\circ}\)) (Figure 5j-m), severe beam damage artifacts are apparent in NC-3 and NC-4 (Movie S2). In summary, current techniques for optimizing the number of projections either require multiple acquisitions or introduce substantial beam damage artifacts while correcting for missing wedge artifacts, limiting their feasibility for beam-sensitive samples. One possible solution to this problem is to use the GRS method, which allows for the determination of the optimum number of projections from a single acquisition. In the following subsection, we describe the GRS method and compare it to the traditional IS method for optimizing the number of projections in beam-sensitive samples. #### 3.1.2 Golden ratio scanning. In order to determine the optimum number of projections from a single tilt-series, we performed simulations for samples NC-1 to NC-4 according to the golden ratio scanning (GRS) method (Figure 2b) and evaluated the reconstruction quality using \(E_{s}\). For each sample, we obtained a local minimum \(E_{s}\) (Figure 6a). This minimum represents the optimum between undersampling and beam damage.
As more damage was simulated from NC-1 to NC-4, the optimum was found with fewer projections, but the \(E_{s}\) value increased (\(E_{s}\)/Projections: 7.4/55, 10.4/21, 13.7/13, 14.9/13). Therefore, reconstructions with fewer projections were favored with increased beam damage simulation because beam damage artifacts outweighed undersampling. However, despite optimizing the number of projections, the overall reconstruction quality worsened as more beam damage was induced. It is surprising that a local minimum is achieved at 55 projections for NC-1, in which no beam damage was simulated. However, the \(E_{s}\) at 71 projections (7.8%) differs from that at 55 projections by just 0.4%. Therefore, the obtained minimum is likely just statistical variance. The obtained optimum number of projections is consistent with the visual inspection of the samples. For NC-1 and NC-2, there is little notable distortion in the reconstruction at the optimum number of projections (Figure 6b-d). Slight surface defects were noted in NC-3 and NC-4 (Figure 6e-f). However, these were minor compared to the beam damage-induced cavities apparent in NC-3 and NC-4 with 71 projections (Figure 6g-j). Figure 6: (a) Shape error as a function of the number of projections for the NC-1 to NC-4 nanocages and their determined optimum number of projections with the GRS acquisition scheme. (b) Inset: nanocage before beam damage simulation. 3D reconstruction of NC-1 to NC-4 with their optimum number of projections (c-f) and 71 projections (g-j). When comparing the minimum \(E_{s}\) obtained from NC-1 to NC-4 for IS and GRS (IS(%)/GRS(%): 6.6/7.4, 13.6/10.4, 30.2/13.7, 44.3/14.9), the \(E_{s}\) value for IS is substantially larger than the same value obtained for GRS for NC-2 to NC-4, indicating that optimization of GRS acquisitions produces a substantially improved reconstruction compared to IS with a standard 2\({}^{\circ}\) increment for beam-sensitive samples. #### 3.1.3 TUOTA. We have previously evaluated reconstruction quality using the metric \(E_{s}\), by comparison to a known ground truth structure. To determine the optimal number of projections in real experiments, it is necessary to do so without prior knowledge of the ground truth. TUOTA offers a promising approach for this purpose; it was applied to samples NC-1 to NC-4, monitoring the SROD and SNR as a function of the number of projections during GRS. The SROD threshold was reached for all NC samples at approximately 24 projections (Figure 7a). It is important to note that the numbers of projections selected by \(E_{s}\) and by the SROD identify different properties. The SROD determines the number of projections beyond which additional projections are unlikely to significantly improve the reconstruction, while the \(E_{s}\) criterion identifies the number of projections that produces the most accurate reconstruction shape. This difference is particularly apparent in the case of NC1-GRS, where the \(E_{s}\) and SROD (using a threshold of 0.1 as described in Section 2.1.1) identified 55 and 22 projections, respectively. In the absence of damage, the acquisition could be continued indefinitely, but there was no visible change beyond a certain point (Figure B4). As damage increased from NC-1 to NC-4, the maximum SNR value decreased (Figure 7b), occurring at an earlier projection (SNR(dBm)/Projections: -13.6/70, -14.0/24, -14.4/16, -14.6/16). Thus, with more simulated damage, reconstructions using fewer projections were optimal. Figure 7: (a) SROD showing the 0.1 threshold and (b) SNR of NC-1 to NC-4 orthoslices determined by TUOTA. (c) Comparison of the \(E_{s}\) for the optimum reconstruction determined from the shape error, SNR, SROD, and the complete tilt-series of 71 projections. (d-g) Reconstruction of NC-1 to NC-4 with the optimum number of projections determined by SNR. To validate the TUOTA results, we calculated the \(E_{s}\) for the optimal reconstructions determined by TUOTA and compared them to the full tilt-series and the optimal reconstruction based on the minimum \(E_{s}\) determined in Section 3.1.2.
While the optimal number of projections determined by \(E_{s}\), SROD, and SNR did vary, the reconstruction quality as determined by \(E_{s}\) remained largely the same (Figure 7c, Table 2, Movie S3). Visual inspection of the TUOTA-determined optimal reconstructions for NC-1 showed no artifacts. In contrast, NC-2 to NC-4 had a rippled texture due to an artifact at their TUOTA-determined optimal number of projections (Figure 7d-g). This is consistent with the optimal reconstructions obtained by \(E_{s}\) (Figure 6c-f). These results demonstrate that, for simulated \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Type & sample & name & \(\beta 1:\beta 2\) & acquisition type & annular range [\({}^{\circ}\)] & tilt step [\({}^{\circ}\)] \\ \hline \multirow{4}{*}{simulation} & \multirow{4}{*}{nanocage} & NC-1 & 0:0 & GRS & \(\pm\)70 & \multirow{4}{*}{2, 5, 7, 10, 14, 35, 70} \\ & & NC-2 & 0.3:0.03 & GRS & \(\pm\)70 & \\ & & NC-3 & 0.3:0.03 & IS & \(\pm\)70 & \\ & & NC-3 & 0.55:0.055 & GRS & \(\pm\)70 & \\ & & NC-3 & 0.55:0.055 & IS & \(\pm\)70 & 2, 5, 7, 10, 14, 35, 70 \\ & & NC-4 & 0.6:0.06 & GRS & \(\pm\)70 & \\ & & NC-4 & 0.6:0.06 & IS & \(\pm\)70 & 2, 5, 7, 10, 14, 35, 70 \\ \hline \multirow{4}{*}{real} & \multirow{4}{*}{nanostar} & Au/Pd NS & & GRS & \(\pm\)70 & \multirow{4}{*}{2, 5} \\ & & Au@NU-1000 & Au@NU-1000 & & GRS & \(\pm\)70 & \\ \cline{1-1} & & Au/Pd@ZIP-8 & Au/Pd@ZIP-8 & GRS & \(\pm\)70 & \\ \hline \hline \end{tabular} \end{table} Table 1: Collected tilt-series nanocages, TUOTA can accurately determine the optimal number of projections, comparable to a reference ground truth structure. ### Experimental validation #### 3.2.1 Au/Pd nanostar As mentioned earlier, a challenge in optimizing tilt undersampling for beam-sensitive samples is the lack of knowledge of the material's true volume. Simulated experiments address this challenge by allowing the comparison of the reconstruction to a simulated "true" reference volume. However, challenges with focusing and aligning projections are not present in simulated data. For samples resistant to beam damage, it can be assumed that the reconstruction obtained with standard ET is a reasonable representation of the sample's true volume. As such, the reconstruction quality can be determined using \(E_{s}\), where the reference is a non-beam-sensitive sample acquired with standard Figure 5: To change the projection number while maintaining a constant annular range, samples NC-1 to NC-4 were collected with a variable tilt increment of \(2^{\circ}\), \(5^{\circ}\), \(7^{\circ}\), \(10^{\circ}\), \(14^{\circ}\), \(35^{\circ}\), \(70^{\circ}\) (71, 29, 21, 15, 11, 5, 3 projections). An example is shown for the acquisition of 11 projections (\(14^{\circ}\) tilt increment) (a). The \(E_{s}\) of each acquisition was then determined (b). Alternatively, for samples NC-1 to NC-4, the tilt increment was fixed at a standard \(2^{\circ}\) and more projections were collected while increasing the annular range during a single acquisition. An example is shown for the acquisition of 11 projections (c). The \(E_{s}\) was plotted as a function of the number of projections (d). The 3D reference structure (e) is shown along with the optimum reconstructions for NC-1 to NC-4 determined with a variable tilt increment (f-i) and \(2^{\circ}\) increment (j-m). ET. To validate TUOTA using experimental data, an Au/Pd nanostar was used as a beam damage-resistant sample. 
Three acquisitions were performed sequentially on the same sample: a GRS acquisition (71 projections) and an IS acquisition with a \(2^{\circ}\) (71 projections) and \(5^{\circ}\) (29 projections) increment. A \(0^{\circ}\) projection was acquired before and after all collections were completed. Visual inspection of these images showed no obvious signs of beam damage (Figure 8a-b). For GRS reconstructions, the \(E_{s}\) was measured as a function of the number of projections by comparing it to a reference sample collected with IS using a \(2^{\circ}\) increment. Similar to the simulated results for NC-1, with a non-deforming sample, the \(E_{s}\) tends to decrease as more projections are added, but a local minimum is Figure 6: (a) Shape error as a function of the number of projections for NC1-4 nanocages and their determined optimum number of projections with GRS acquisition scheme. (b) Inset: nanocage before beam damage simulation. 3D reconstruction of NC-1-4 with their optimum number of projections (c-f) and 71 projections (g:j). Figure 7: (a) SROD showing the 0.1 threshold and (b) SNR of NC-1 to NC-4 orthoslices determined by TUOTA. (c) Comparison of the \(E_{s}\) for the optimum reconstruction determined from the shape error, SNR, SROD, and complete tilt-series of 71 projection. (d-g) reconstruction of NC-1 to NC-4 with the optimum number of projections determined by SNR. never achieved and the \(E_{s}\) plateaus around 58 projections (\(E_{s}=8.52\%\)) (Figure 8c). When adding further projections, the \(E_{s}\) reduces insubstanially to 8.47% (71 projections), indicating a limited improvement to the reconstruction. At 58 and 71 projections, the 3D structure is visually indistinguishable (Figure 8). When applying TUOTA, the SNR increases but plateaus as more projections are added (Figure 8d). The maximum SNR (-16.6 dBm) is obtained when the full tilt-series is collected, indicating there is no beam damage reducing the signal quality. As for the SROD, the threshold is reached at 53 projections, indicating a termination point where further projections are unnecessary (Figure 8e). At 53 projections, the \(E_{s}\) varies from the minimum \(E_{s}\) by only 2.43% (Table 3), indicating a limited difference between the reconstruction with 71 and 53 projections. Through visual inspection of the sample, no artifacts are apparent when comparing the full GRS reconstruction with the IS reconstruction with 2\(\circ\) steps (Figure 8f-g). However, when the number of projections decreases to 29 projections, artifacts are apparent for both GRS and IS (Figure 8h-i, Movie S4). As mentioned earlier, reconstruction convergence was identified at 53 projections using TUOTA. Hence, the GRS tilt-series with 53 projections was reconstructed (Figure 8j). No noticeable difference between this reconstruction and the reference structure could be seen (Table 3). Overall, as expected for Au/Pd nanoparticles, beam damage could not be identified either through visual inspection of the sample or analysis. An optimum number of projections between 29 and 71 projections was obtained through IS acquisition. Using TUOTA, the optimum number of projections was narrowed to 53 projections during a single acquisition. This demonstrates the effectiveness of TUOTA in determining an optimal number of projections without the need for a ground truth reference structure, even when applied to beam-resistant samples. 
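The termination logic used above can be summarized in a few lines: after each newly added projection the monitored orthoslice is re-reconstructed, its relative difference with respect to the previous orthoslice (the SROD) is computed, and the acquisition stops once this difference drops below the 0.1 threshold. The sketch below illustrates only this control flow; the quasi-real-time RECAST3D reconstruction is replaced by a toy stand-in, and the exact SROD normalization is an assumption (the definition used in this work is given in Section 2.1.1).

```python
import numpy as np

rng = np.random.default_rng(0)

def srod(new_slice, old_slice):
    """Relative difference between orthoslices reconstructed with N and N-1
    projections; the L1 normalization chosen here is an assumption."""
    return np.abs(new_slice - old_slice).sum() / np.abs(new_slice).sum()

# toy stand-in for the quasi-real-time reconstruction: a fixed structure whose
# reconstruction noise shrinks as more projections are added
structure = rng.random((128, 128))
def reconstruct_orthoslice(n_projections):
    return structure + rng.normal(scale=2.0 / n_projections, size=structure.shape)

threshold, previous = 0.1, None
for n in range(2, 72):                      # up to 71 projections, as in GRS
    current = reconstruct_orthoslice(n)
    if previous is not None and srod(current, previous) < threshold:
        print(f"SROD < {threshold}: acquisition could be terminated at {n} projections")
        break
    previous = current
```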
#### 3.2.2 NP@MOF composite The technique TUOTA was applied to two MOF composites: Au@NU-1000 and Au/Pd@ZIF-8, which are known to undergo significant changes in shape and crystallinity when exposed to a beam.[28, 35] Using a GRS technique, the degradation and contamination of the samples were observed by comparing the first and last projections collected (Figure 6). The crystal facets became less defined and a large, blurry ring appeared around the sample, indicating the presence of carbon contamination. The SNR was also analyzed (Figure 9a). The Au/Pd@ZIF-8 sample had a higher maximum SNR (SNR of -9.84 dBm with 66 projections) than the Au@NU-1000 sample (SNR of -13.1 dBm with 43 projections). Additionally, the maximum SNR was achieved at the end of the tilt-series for Au/Pd@ZIF-8, while it occurred at a local maximum for Au@NU-1000. This suggests that the Au@NU-1000 sample is more sensitive to beam-induced deformation, consistent with the SROD results. The SROD threshold for Au/Pd@ZIF-8 was obtained at 31 projections, indicating that while the SNR improved with additional projections, the reconstruction showed minimal change past 31 projections. In contrast, the SROD threshold for Au@NU-1000 was achieved later, at 43 projections, and was generally higher and less consistent than that of Au/Pd@ZIF-8, indicating difficulty in converging to a consistent reconstruction due to additional noise (Figure 9b). Visual inspection of the samples supports the findings of the TUOTA analysis. For Au@NU-1000, the NU-1000 shell displayed substantially more surface detail at the SNR optimum compared to the sample particle undersampled with 20 projections. Using the full tilt-series, the MOF shell had significantly shrunk, indicating continued deformation (Figure 9c-f, Movie S5). In the case of Au/Pd@ZIF-8, little difference was observed between the full tilt-series, SNR optimum, and SROD optimum. Undersampling with 20 projections resulted in a reconstruction in which the Au/Pd nanoparticle could not be properly segmented (Figure 9g-j, Movie S6). Overall, the results of this study suggest that TUOTA can be used to determine the optimal acquisition point for MOF samples to prevent beam damage. For the NU-1000 sample, acquisition should be terminated at 43 projections. For Au/Pd@ZIF-8, beam damage is evident, but it has a limited impact on reconstruction quality, and acquisition can be terminated after 31 projections to obtain good results. ## 4 Discussion ### Tilt scheme In our study, we found that using RECAST3D and GRS scanning in both experimental and simulated cases resulted in reconstructions that were comparable to or better than those obtained with IS using a standard tilt increment of 2\(\circ\). However, when the number of projections in the IS scan was optimized to be similar to that of the GRS scan, there was a slight decrease in the reconstruction quality measured with Es. One possible explanation for this finding is that GRS tends to sample almost the entire range of tilts, or "annular range," but falls short of fully covering it. For instance, in a tilt range of \(\pm\)70\(\lx@math@degree\) (140\(\lx@math@degree\) total), the first ten projections of a GRS scan cover about 119.6\(\lx@math@degree\), increasing to 127.4\(\lx@math@degree\) (20 projections) and 132.2\(\lx@math@degree\) (30 projections). In contrast, IS always samples the full annular range regardless of the tilt increment used. 
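The coverage figures quoted above are easy to verify numerically; the short check below uses the same golden-ratio angle convention assumed in the earlier sketch and reproduces the 119.6°, 127.4°, and 132.2° spans for 10, 20, and 30 projections.

```python
import numpy as np

inv_phi = 2.0 / (1.0 + np.sqrt(5.0))                           # inverse golden ratio
angles = -70.0 + 140.0 * np.mod(np.arange(1, 31) * inv_phi, 1.0)
for n in (10, 20, 30):
    print(f"first {n} GRS projections span {np.ptp(angles[:n]):.1f} of 140 degrees")
```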
It is noted that whilst optimized IS may provide slight improvement on the reconstruction quality over GRS, optimizing the number of projections in an IS scan is not feasible for beam-sensitive samples. Furthermore, even if optimization were possible, the need to acquire multiple tilt series would make the process time-consuming. As an alternative, it may be beneficial to consider a two-step approach in which the optimal \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Sample} & \multicolumn{2}{c}{\(E_{s}\)} & \multicolumn{2}{c}{SROD} & \multicolumn{2}{c}{SNR} & \multicolumn{2}{c}{full} \\ & NPs & \(E_{s}\) & NPs & \(E_{s}\) & NPs & \(E_{s}\) & NPs & \(E_{s}\) \\ \hline NC-1 & 55 & 7.4 & 22 & 8.8 & 70 & 7.7 & 71 & 7.7 \\ NC-2 & 24 & 10.4 & 22 & 11.3 & 24 & 11.3 & 71 & 14.0 \\ NC-3 & 13 & 13.7 & 24 & 15.7 & 16 & 14.9 & 71 & 28.7 \\ NC-4 & 13 & 14.9 & 24 & 19.2 & 16 & 16.6 & 71 & 44.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Shape error \(E_{s}\) and number of projections (NPs) of GRS acquired reconstructions using various optimization criteria, along with full tilt-series \begin{table} \begin{tabular}{c|c c} \hline \hline Optimization Criteria & number of projections & \(E_{s}\)(\%) \\ \hline Minimum \(E_{s}\) & 71 & 8.47 \\ SROD & 53 & 10.9 \\ SNR & 71 & 8.47 \\ Full & 71 & 8.47 \\ \hline \hline \end{tabular} \end{table} Table 3: \(E_{s}\) and number of projections of a Au/Pd nanostar acquired by GRS terminated using various optimization criteria. number of projections is first determined using GRS, followed by the acquisition of a second tilt series using IS with a tilt increment that approximates the optimal number of projections found using GRS. This approach could potentially lead to a slightly improved reconstruction while also reducing the beam exposure time due to the tracking and refocusing steps required in GRS imaging. It is worth noting that in our study, GRS tracking and focusing were performed manually, but automated tracking could significantly reduce the beam exposure time in GRS imaging. ### Software architecture Most of TUOTA is implemented using RECAST3D, as described in previous studies [6, 7]. However, there are additional constraints for quantification that require modifications to RECAST3D. In particular, the orthoslices at \(N-1\) projections must have the same orientation and tilt axis as the orthoslices with \(N\) projections. While RECAST3D allows these parameters to be adjusted in real-time, doing so would invalidate the quantification results of TUOTA and prevent the user from visually inspecting other regions of the sample or correcting the tilt-axis alignment. Additionally, the default orthoslices selected by RECAST3D (\(xy\), \(xz\), and \(yz\) slices passing through the origin) may not be representative slices of the entire volume. For example, in the case of an 8-dendrite nanostar, these slices could go through the center and miss every dendrite, resulting in a large region of the sample being outside the inspected area (Figure B7). To address this issue, it is possible to visually inspect the sample and adjust the orthoslice selection by rotating the \(xy\) and \(xz\) planes 45\({}^{\circ}\), resulting in a more representative slice of the volume. Figure 8: 0\({}^{\circ}\) projections of Au/Pd NS before (a) and after (b) collection of three tilt-series (IS 2\({}^{\circ}\), 5\({}^{\circ}\) increment and GRS 71 projections). 
The shape error as a function of the number of projections (c) for GRS reconstructions was determined by comparison to the IS reconstruction with a 2\({}^{\circ}\) increment. The SNR (d) and SROD (e) were determined in real-time using TUOTA. The Au/Pd NS was reconstructed with 71 (f-g) and 29 projections (h-i) for GRS and IS. The GRS tilt-series was also reconstructed with the optimum number of projections determined from the SROD threshold (j). ### Alignment Projection alignment is a major challenge during TUOTA. In previous studies, projection and tilt axis alignment have been performed in real-time using RECAST3D.[48] However, when applying TUOTA to beam-sensitive samples, there are some additional challenges to consider. Firstly, intensity cross-correlation can result in poorly aligned projections due to the inclusion of other features in the images, such as beam-damaged regions of the carbon mesh, other particles, or the grid. To address this issue, we use watershed segmentation to identify the largest particle in the image and mask out everything else. The second challenge is that GRS typically has large annular distances between projections, which can lead to inaccuracies during cross-correlation. For example, the second projection (-37.0\({}^{\circ}\)) and the third projection (49.6\({}^{\circ}\)) are separated by 86.5\({}^{\circ}\). To address this issue, we index images by both angle and chronology and align each projection to the closest projection by angle, rather than aligning to the previously collected projection. Overall, our method for addressing these challenges has been successful in ensuring accurate projection alignment during TUOTA of beam-sensitive samples. ## 5 Conclusions In conclusion, we have developed a novel protocol for optimizing tilt undersampling during a single acquisition using GRS and RECAST3D. Our simulations have demonstrated that reconstructions of beam-sensitive samples optimized using this method have higher fidelity with the pre-damaged sample than reconstructions using standard incremental acquisition. We have validated our approach through simulations and experimental 3D imaging of Au/Pd nanostars and applied it to the characterization of highly sensitive NP@MOF complexes. Our approach, which is based on golden ratio acquisition and quasi-real-time reconstruction, provides an effective solution for balancing undersampling, beam damage, and reconstruction quality on a sample-by-sample basis. While similar results can be achieved with undersampling optimization of IS, our method is far more efficient and less time-consuming. Future work may involve further optimization and testing of the TUOTA protocol on a wider range of beam-sensitive samples and comparing its performance to other acquisition schemes. Additionally, exploring the use of more advanced reconstruction algorithms in conjunction with TUOTA could potentially lead to even higher-quality reconstructions. ## Funding Statement This project received funding received from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 860942 and from the European Research Council under the ERC Consolidator Grant no. 815128 REALNANO. ## Conflicts of interest The authors declare no financial interests/personal relationships that may be considered potential competing interests. ## Acknowledgements The authors would like to acknowledge the financial support received from the European Union's Horizon 2020 research and innovation program through grant agreement no. 
860942 - HEATNMOF. S.B. and A.A.K. also acknowledge support from the European Research Council (ERC Consolidator Grant no. 815128 REALNANO). The authors are grateful for the assistance provided by Armand Beche, Lars Riekehr, and Daniel Arenas Esteban at the EMAT of the University of Antwerp, including training on and use of the TEM, as well as assistance with 3D visualization and rendering of nanomaterials. The authors also acknowledge the Figure 9: SNR (a) and SROD (b) as a function of the number of projections for Au@NU-1000 and Au/Pd@ZIF-8. 3D reconstructions were acquired with the complete tilt-series, SNR optimum, SROD threshold, and 20 projections (left to right) for Au@NU-1000 (c-f) and Au/Pd@ZIF-8 (g-j). contribution of samples from Pablo del Pino and his research group at the University of Santiago de Compostela for use in the characterizations presented in this study.
2306.10555
Summarization from Leaderboards to Practice: Choosing A Representation Backbone and Ensuring Robustness
Academic literature does not give much guidance on how to build the best possible customer-facing summarization system from existing research components. Here we present analyses to inform the selection of a system backbone from popular models; we find that in both automatic and human evaluation, BART performs better than PEGASUS and T5. We also find that when applied cross-domain, summarizers exhibit considerably worse performance. At the same time, a system fine-tuned on heterogeneous domains performs well on all domains and will be most suitable for a broad-domain summarizer. Our work highlights the need for heterogeneous domain summarization benchmarks. We find considerable variation in system output that can be captured only with human evaluation and is thus unlikely to be reflected in standard leaderboards with only automatic evaluation.
David Demeter, Oshin Agarwal, Simon Ben Igeri, Marko Sterbentz, Neil Molino, John M. Conroy, Ani Nenkova
2023-06-18T13:35:41Z
http://arxiv.org/abs/2306.10555v1
Summarization from Leaderboards to Practice: Choosing A Representation Backbone and Ensuring Robustness ###### Abstract Academic literature does not give much guidance on how to build the best possible customer-facing summarization system from existing research components. Here we present analyses to inform the selection of a system backbone from popular models; we find that in both automatic and human evaluation, BART performs better than PEGASUS and T5. We also find that when applied cross-domain, summarizers exhibit considerably worse performance. At the same time, a system fine-tuned on heterogeneous domains performs well on all domains and will be most suitable for a broad-domain summarizer. Our work highlights the need for heterogeneous domain summarization benchmarks. We find considerable variation in system output that can be captured only with human evaluation and are thus unlikely to be reflected in standard leaderboards with only automatic evaluation. \({}^{1}\)Northwestern University \({}^{2}\)University of Pennsylvania \({}^{3}\)IDA/CCS \({}^{4}\)Adobe Research {ddemeter,simon.benigeri,markosterbentz2023}@u.northwestern.edu [email protected] {npmlin,conroy}@super.org [email protected] ## 1 Introduction Academic papers on automatic document summarization have been published since the 1950s (Luhn, 1958) but broadly applicable summarizers not constrained by document type have only recently become widely available.1 The literature contains a wealth of information on model architectures for summarization, yet it remains hard to decide from published evaluations which are "the best" components (data and model) for a good quality customer-facing summarizer. Footnote 1: [https://ai.googleblog.com/2022/03/auto-generated-summaries-in-google-docs.html](https://ai.googleblog.com/2022/03/auto-generated-summaries-in-google-docs.html), [https://quillbot.com/summarize](https://quillbot.com/summarize), [https://smurry.com](https://smurry.com) Here we make the idealized assumption that size and inference cost of the models are not an issue. We seek to find the best backbone for a neural summarizer from freely available research components, producing the best summaries, and a confirmation that the summarizer will work well for varied types of input documents. For this purpose, we fine-tune and evaluate popular off-the-shelf pre-trained models BART (Lewis et al., 2020), PEGASUS (Zhang et al., 2020) and T5 (Raffel et al., 2020) on six summarization datasets. We also create mixed training datasets with a balanced representation of each of the domains. We find that fine-tuning on mixed-domain text, smaller in size than most of the in-domain training set, yields a robust system performing on par with models fine-tuned on the order of magnitude more data when tested in-domain. In addition to evaluation with automatic metrics, we conduct a human evaluation. BART summaries were preferred more often than those produced by PEGASUS and T5. Additionally, summaries generated with BART trained on mixed data are preferred over those generated with BART trained on the most popular summarization research dataset, CNN/Daily Mail, even though the mixed-domain dataset is the smaller of the two. Summaries from this system were even preferred over those produced by BART, fine-tuned on in-domain data matching each test sample. This preference is not captured by automatic metrics. BART fine-tuned on the mixed domain, and often produced summaries deemed more informative than the human reference for the respective input. 
This was not the case for summarizers obtained by fine-tuning using data from a single source. ## 2 Related Work Some hints that domain robustness is a problem but that summarizers can to an extent generalize across domains are found in the literature. Yu et al. (2021) observe catastrophic forgetting during domain adaptation via continual pre-training. This is concerning if the goal is to have a robust system that serves multiple domains. They do not explicitly measure how much systems degrade when evaluated out of domain, though it is implied by the task and results that there is degradation. There are a few direct studies of summarization cross-domain robustness. Sandu et al. (2010) tested if meetings summarization data is useful for email summarization. They find that training on email data is best, but in the absence of such data training on meetings is helpful. Bar-Haim et al. (2020) train a system for extracting key points on argumentation datasets and then evaluate the same system on municipal surveys and user reviews. The systems perform well, exhibiting robustness. In our work, we carry out a similar evaluation but we examine the robustness of abstractive summarizers on a diverse set of datasets. These findings on cross-domain robustness are encouraging and in line with Hua and Wang (2017)'s findings that some of the capabilities for identifying summary-worthy content are transferable between domains. They study news and opinion piece summarization for texts drawn and find that a model trained on out-of-domain data can learn to detect summary-worthy content, but may not match the generation style in the target domain. Stylistic markers of a domain i.e. as in typical phrasing used to talk about certain topics are not captured. ## 3 Experimental Design Abstractive summarizers generate a short plain text summary capturing the main points of a longer text. The current state-of-the-art models for the task are transformer-based encoder-decoder text-to-text models, such as BART Lewis et al. (2020), PEGASUS Zhang et al. (2020) and T5 Raffel et al. (2020). The models are pre-trained on large general-purpose corpora followed by fine-tuning on specific summarization datasets. ### Pre-trained Models We work with pre-trained BART, PEGASUS, and T5 models, using the model and implementation in Huggingface Wolf et al. (2020). We then fine-tune these for summarization ourselves, on six summarization datasets. All three models use a sequence length of 512 tokens and truncate inputs longer than this. Further details for each model can be found in the appendix. ### Datasets We use six datasets covering diverse domains, namely arXiv Cohan et al. (2018), billsum Kornilova and Eidelman (2019), CNN/DailyMail Hermann et al. (2015), GovReport Huang et al. (2021), Pubmed Cohan et al. (2018) and Reddit TIFU Kim et al. (2019). The texts in each dataset differ by length and stylistic features such as formality of style, letter casing, and punctuation. These distinctions are compelling for exploring cross-domain robustness. Statistics on domain, length, and summary source are shown in Table 1. We use the dedicated training set to fine-tune the three models we compare and a balanced subset of 250 samples from each domain for evaluation.2 Footnote 2: Inference time is approximately one week to generate summaries for the full test sets on a machine configured with three Quadro-RTX 8000 GPUs. We construct one additional training dataset derived from mixing the original sources (_Mixed_). 
We uniformly sample each of the six publicly available datasets up to the number of individual examples in the dataset with the fewest observations (GovReport). This results in a training set with 105k observations. The mixed-domain dataset is larger than BillSum, GovReports and Reddit, but smaller than the training split of the other three datasets. We fine-tune models on the mixed domain dataset to evaluate if robustness can be improved with a data-only solution, where the system is exposed to heterogeneous fine-tuning data. We use the mixed domain test set as a single test set for evaluating summarizer robustness. \begin{table} \begin{tabular}{l l r r r r} \hline \hline Dataset & Domain & \# docs & doc len & summary src & sum len \\ \hline arXiv & scientific papers & 215k & 4938 & paper abstract & 220 \\ Billsum & U.S. Congressional bills & 23k & 1382 & Congressional Research Service & 197 \\ & California state legislative bills & & 1684 & state Legislative Counsel & \\ CNN/DailyMail & news & 300k & 781 & article bullet highlights & 56 \\ GovReport & U.S. Gov reports & 19k & 9017 & experts & 542 \\ PubMed & biomedical papers & 133k & 3016 & paper abstract & 203 \\ TIFU & Reddit & 120k & 432 & post TL;DR & 23 \\ Mixed-domain & All & 105k & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics. Average lengths are in words. ### Evaluation Settings We explore three fine-tuning and testing configurations. _In-domain_ testing is when the source of the test sample matches the fine-tuning source, as is conventionally done in summarization research. _Cross-domain_ testing is when a summarizer fine-tuned on one source of data is used to generate summaries for another source. We also perform _mixed-domain_ testing, in which we evaluate the summarizers fine-tuned on mixed-domain data on each of the six summarization datasets. _In-domain_ summaries align well with prior published results based on standard datasets, developed for convenience and fast evaluation. _Mixed-domain_ evaluation and summarizers are the most relevant to real-world use cases among the regimes studied in this work. ## 4 Automatic Evaluation We first evaluate the summarizers using three automatic metrics: ROUGE-2 Lin (2004), sacreBLEU Post (2018) and BERTscore Zhang* et al. (2020). The goal of this evaluation is to glean insights about system performance to inform the choice of specific comparisons that can be done with human evaluation. We show the average in-domain and the average cross-domain scores for each model in Table 2. Based on the automatic scores, BART is the best backbone model, with the best performance on all three automatic evaluations both in in-domain and in cross-domain evaluation. PEGASUS is better than T5 in in-domain evaluation, but both are similar in cross-domain evaluation. All three automatic scores are much lower for cross-domain evaluation compared to in-domain evaluation, suggesting that domain robustness poses a problem for a practical system. The drop in ROUGE2 and BLEU is much higher than that in BERTscore. We also show the average automatic scores on the six test datasets with BART trained on different settings (Table 3). The in-domain score reports the average of the six models trained on each of the datasets and evaluated in-domain. CNN represents a single model trained on just CNN and evaluated on each of the six datasets. Similarly, mixed-domain is a single model trained on the mix-domain training set and evaluated on each of the test sets. 
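Concretely, the three scores reported in Tables 2 and 3 can be computed with the Hugging Face `evaluate` library as sketched below. The metric implementations and parameters behind the reported numbers are not detailed here, so this should be read as an illustrative pipeline rather than the exact evaluation code; the prediction and reference strings are placeholders, and recent library versions return plain floats for ROUGE.

```python
import evaluate  # Hugging Face evaluation library

# placeholder outputs; in the experiments these would be the generated and
# gold summaries for each 250-example test subset
predictions = ["the bill funds road repairs in two counties ."]
references = ["the bill provides funding for road repair in two counties ."]

rouge = evaluate.load("rouge")
sacrebleu = evaluate.load("sacrebleu")
bertscore = evaluate.load("bertscore")

rouge2 = rouge.compute(predictions=predictions, references=references)["rouge2"]
bleu = sacrebleu.compute(predictions=predictions,
                         references=[[r] for r in references])["score"]
f1 = bertscore.compute(predictions=predictions, references=references,
                       lang="en")["f1"]

print(f"ROUGE-2 {rouge2:.3f} | sacreBLEU {bleu:.1f} | BERTScore-F1 {sum(f1)/len(f1):.3f}")
```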
All three scores show that in-domain is better than mixed-domain, which in turn is better than CNN. CNN is the largest dataset so the scores are not dependent on the training data size, rather it is the domain that matters. For a detailed view, in Table 4, we show the in-domain scores along with the respective average deterioration in cross-domain evaluation. The cross-domain panel lists for the training set, the average of the difference between the score on the in-domain test data and that on each of the cross-domain test datasets. The smaller this difference is, the more robust the summarizer is in cross-domain evaluation. The summarizer fine-tuned on mixed-domain data has the smallest cross-domain degradation on all three automatic evaluation scores, for all pre-trained models. Training on mixed-domain data yields the most robust summarizer. ## 5 Human Evaluation Automatic evaluations consistently indicated that _(i)_ BART produces better summaries than T5 and PEGASUS across the six domains we study, and _(ii)_ the summarizer trained on mixed domain data is the most robust to domain changes. To confirm this finding, we also conduct a manual human evaluation. We sample 10 examples from each domain, for a total of 60 documents3. Each example has a \begin{table} \begin{tabular}{l l r r r} \hline \hline & & BART & PEGASUS & T5 \\ \hline \multirow{3}{*}{in-domain test} & ROUGE2 & **17.3** & 15.9 & 14.3 \\ & BLEU & **12.9** & **12.9** & 11.8 \\ & BERTscore & **89.7** & 89.0 & 88.6 \\ \hline \multirow{3}{*}{cross-domain test} & ROUGE2 & **7.5** & 6.5 & 6.4 \\ & BLEU & 2.7 & **2.8** & **2.8** \\ \cline{1-1} & BERTscore & **86.6** & 85.2 & 85.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Average automatic scores for in-domain, cross-domain and mixed-domain evaluation. These scores exclude the mixed domain summarizer. Columns are the pre-trained models used. The highest score in each row is boldfaced. \begin{table} \begin{tabular}{l r r r} \hline \hline & in-domain & CNN-DM & mixed-domain \\ \hline ROUGE2 & **17.3** & 7.5 & 15.7 \\ BLEU & **12.9** & 2.7 & 9.6 \\ BERTscore & **89.7** & 87.3 & 89.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Average automatic scores on all test datasets for BART trained on different datasets. Columns are the training datasets used. in-domain is the average of scores with six models evaluated on their respective test splits or the mixed-domain test data. CNN and mixed-domain are single models evaluated on each test set. human reference summary and 5 automatic summaries. The same trends for automatic scores are observed for these 60 documents as the 1500 documents in the last section. Footnote 1: [https://www.faceface.com/face.php/face.php](https://www.faceface.com/face.php/face.php) ### Evaluation Setup Three of the authors carried out two rounds of evaluation. In the first round, we compared the human summaries to summaries produced by BART, T5 and PEGASUS fine-tuned on the mixed-domain training set. The goal of this comparison is to find which of the models produced the best summaries. Overall, BART was the most preferred system, consistent with automatic evaluation. In the second round, we compared three BART summarizers: fine-tuned on the mixed domain; fine-tuned on CNN/Daily Mail; fine-tuned on data matching the input source. Given the automatic evaluation, we expect that the in-domain summarizer will be best. However, the mixed-domain BART summarizer was the most preferred one. 
The judges were first asked to read all four summaries for a given input, without seeing the input itself. The human summary was always placed first in the interface and marked as human. The other three summaries were displayed next, presented in random order for different inputs and listed as Summary A, B, and C, concealing the system that produced the summary. The judges were asked to compare the relative quality of the human and the machine summaries: "Do some automatic summaries provide better content? 5 (a lot of better content) to 1 (no better content)". After the judges read all four summaries and answered the above question for the human summary, they were shown three consecutive pages, each listing one of the summaries and the following questions: **readability**: Is the summary easy to read (formatting, length, style) 5 (very easy to read) to 1 (not at all easy to read)? **recall**: Does the summary provide good information 5 (a lot of good info) to 1 (no good info)? **precision**: Does the summary have unnecessary information 5 (lots of unnecessary info) to 1 (no unnecessary info)? **hallucination**: Does the summary contain apparent hallucinations 5 (no discernable hallucinations) to 1 (obvious hallucinations)? **orthography**: Is the summary formatted according to the rules of English? (yes/no) **repetition**: Does the summary have repetitions? (yes/no) \begin{table} \begin{tabular}{l l l r r r r r r r} \hline \hline & & & & \multicolumn{6}{c}{Training Dataset} \\ \cline{4-11} & & & arXiv & BillSum & CNN & Gov & PubMed & TIFU & Mixed \\ \hline \multirow{4}{*}{BART} & \multirow{2}{*}{in-domain} & ROUGE2 & 15.9 & 29.7 & 15.5 & 15.9 & 18.2 & 8.6 & 18.1 \\ & & BLEU & 11.6 & 18.1 & 13.8 & 11.8 & 16.3 & 5.9 & 10.4 \\ & & BERTscore & 89.2 & 90.6 & 90.1 & 88.9 & 88.9 & 90.5 & 89.9 \\ \cline{2-11} & \multirow{4}{*}{Avg cross-domain \(\Delta\)} & ROUGE2 & -6.2 & -22.6 & -9.4 & -6.4 & -8.2 & -3.9 & -2.4 \\ & & BLEU & -6.9 & -15.8 & -13.3 & -5.9 & -11.6 & -5.5 & -0.8 \\ & & BERTscore & -1.9 & -3.7 & -3.2 & -2.5 & -1.3 & -5.4 & -0.4 \\ \hline \multirow{4}{*}{T5} & \multirow{4}{*}{in-domain} & ROUGE2 & 12.2 & 30.2 & 13.7 & 7.3 & 16.1 & 6.2 & 16.7 \\ & & BLEU & 8.2 & 25.5 & 12.3 & 5.4 & 15.3 & 3.8 & 11.0 \\ \cline{1-1} & & BERTscore & 87.3 & 90.3 & 90.0 & 86.5 & 87.7 & 89.8 & 88.8 \\ \cline{1-1} \cline{2-11} & \multirow{4}{*}{Avg cross-domain \(\Delta\)} & ROUGE2 & -4.7 & -22.0 & -8.1 & -0.7 & -7.6 & -2.0 & -2.9 \\ \cline{1-1} & & BLEU & -3.3 & -22.0 & -11.9 & -1.4 & -9.9 & -3.3 & -1.0 \\ \cline{1-1} & & BERTscore & -2.6 & -3.2 & -3.4 & -1.1 & -1.8 & -5.3 & -0.5 \\ \hline \multirow{4}{*}{PEGASUS} & \multirow{4}{*}{in-domain} & ROUGE2 & 13.6 & 30.7 & 14.4 & 11.0 & 18.2 & 7.7 & 16.6 \\ & & BLEU & 9.8 & 24.3 & 12.0 & 8.5 & 17.7 & 4.8 & 11.0 \\ \cline{1-1} & & BERTscore & 87.9 & 90.3 & 89.8 & 87.6 & 88.3 & 90.1 & 88.9 \\ \cline{1-1} \cline{2-11} & \multirow{4}{*}{Avg cross-domain \(\Delta\)} & ROUGE2 & -7.1 & -23.5 & -8.0 & -2.4 & -11.0 & -2.6 & -2.2 \\ \cline{1-1} & & BLEU & -5.5 & -20.4 & -11.3 & -3.3 & -13.2 & -4.2 & -1.4 \\ \cline{1-1} & & BERTscore & -3.7 & -4.9 & -2.9 & -0.8 & -3.5 & -6.0 & -0.4 \\ \hline \hline \end{tabular} \end{table} Table 4: Scores for in-domain testing and the average degradation in the score w.r.t. in-domain score for out-of-domain testing. Columns represent models finetuned on each of the domains. ### Comparing Model Architectures In the first round, BART trained on mixed domain data emerged as the clearly preferred model over T5 and PEGASUS. 
Table 5 shows the average rater score for the mixed domain test set summaries produced by each model. For precision and repetition, a lower score is better. For all other dimensions, a higher score is better. BART has a higher score that denotes that summaries conform to the rules of English orthography when compared to other models, though the absolute score is low. BART fine-tuned on mixed-domain data is also rated as having summaries with the best information recall and readability. It does not produce summaries with repeated content within the summary, while T5 often and PEGASUS occasionally do. BART summaries have the least amount of unnecessary information i.e. high precision for information content. The manual evaluation confirms the findings from the automatic evaluation. PEGASUS is rated as the next choice, over T5 on all dimensions. These findings align with the automatic evaluation but provide considerably more nuance with respect to the dimensions in which the summaries differ. Hallucinations were rarely detected for any of the systems, though the judgment was made on the basis of the human summary alone, rather than the full input text. T5 produces the most apparent hallucinations. It also produces significantly more unnecessary content than the other models and its summaries often contain repetitions. Empirical benchmarking presented in published research had not prepared us to expect these. Orthography is problematic for all models, with less than half of the summaries rated as acceptable. In many cases, the summarizers faithfully imitate the incorrect formatting, tokenization and orthography of the fine-tuning data for each domain and the rating often reflects this aspect of system behavior4. The datasets are developed for research purposes, without forward planning to present the results in front of human readers. Most summaries also end mid-sentence, which is jarring when summaries are intended for people. Footnote 4: Only the CNN-Daily Mail fine-tuning dataset follows orthography conventions. ### Comparing Training Data Next, we repeat the same evaluation protocol to compare a BART summarizer fine-tuned on three different types of datasets. In round 2 evaluation, BART fine-tuned on mixed data was rated best for the information its summaries contained and as having the least unnecessary content. In this second round of evaluation, _the human ratings revealed preferences different from what the automatic scores suggested_. The expectation from the automatic evaluation was that the in-domain system would produce the best summaries, possibly with a difference that is not statistically significant. This expectation does not bear out in the human evaluation. The mixed-domain BART system has higher readability scores than the in-domain system, has better information recall as well as precision, and produces more reasonable orthography. BART fine-tuned on mixed-domain is better than the in-domain system--a strong result with practical significance. BART fine-tuned on CNN-DM produces the most readable summaries also following English orthographic rules, but these summaries contain the least useful information, with a point and a half drop on the five-point scale compared to the mixed-domain system. It also generates much more unnecessary information, with a difference of one whole point on the five-point scale. Ideally both the summary content will be good and the text will be readable. 
In our evaluation, we find that the \begin{table} \begin{tabular}{l r r r} \hline \hline model & in-domain & CNN-DM & mixed \\ \hline readability & 3.77 & **4.13** & 4.06 \\ recall & 3.57 & 2.27 & **3.76** \\ precision & 1.72 & 2.53 & **1.45** \\ hallucination & 4.86 & **4.89** & 4.85 \\ orthography & 0.26 & **0.37** & 0.31 \\ repetition & **0.01** & 0.02 & **0.01** \\ \hline \hline \end{tabular} \end{table} Table 6: Human evaluation comparing BART fine-tuned in-domain, CNN-DM and the mixed-domain datasets. A lower score is better for precision and repetition. A higher score is better for other dimensions. \begin{table} \begin{tabular}{l r r r} \hline \hline model & BART & Pegasus & T5 \\ \hline readability & **3.97** & 3.70 & 3.46 \\ recall & **3.72** & 3.42 & 3.07 \\ precision & **1.48** & 1.89 & 2.66 \\ hallucination & **4.84** & 4.83 & 4.75 \\ orthography & **0.37** & 0.29 & 0.27 \\ repetition & **0.01** & 0.19 & 0.44 \\ \hline \hline \end{tabular} \end{table} Table 5: Human evaluation comparing the three models fine-tuned on mixed-domain data. A lower score is better for precision and repetition. A higher score is better for other dimensions. system that produces the most readable summaries generates poor summaries content-wise. If forced to choose one, the system fine-tuned on mixed-domain will be the uncontroversial choice. ### Automatic Summaries Better than Human Reference The superiority of the summarizer fine-tuned on mixed-domain data also emerges in comparison with the human reference summary. The mixed-domain system produced a summary rated higher than the human summary for 18 of the 60 examples, while the in-domain system did so for only 5. The BART-large model fine-tuned on mixed-domain was the most preferred summarizer in our manual evaluation. We found that it often produced summaries judged to be better than the human reference summary for the same document. Table 7 shows the number of documents, out of 10, where the automatic summary was given a higher score than the respective human summary by at least two judges. The model fine-tuned on the mixed-domain data had the overwhelming share of summaries which provided better content than the human summaries. While such summaries were present in each of the six domains, CNN/Daily Mail was the domain with the largest, followed by Reddit. We give samples of such summaries in the appendix of the paper. This summarizer is not only better than other alternatives we studied, but it is also at times better than human summaries in domains where the human summary is just a teaser to invite a full reading of the text. ### Human Summary Evaluation The manual evaluation was a difficult and frustrating experience. To give a sense of the problem, we show in Table 8 the readability scores for _the human summaries_ across domains, broken down by annotator. The most readable were the CNN/Daily Mail, the only cased domain, while the least readable were arXiv and PubMed, which were not only lowercased, but also contained math symbols replaced by templates. The government reports were excruciatingly hard to read in plain text. They are typically long, around 500 words. On the government website, these were formatted in three or more paragraphs, with some visual support in the form of a graph or chart to help in understanding. Learning to generate automatic summaries of such length without segmenting the text into paragraphs is probably a wasteful effort because people are not likely to read the plain text output. 
Annotator A gave much lower scores to the human summaries for all but the CNN and Reddit domains. In a post hoc discussion, they shared that they were reading as if the task is to tell in their own words what the text is about. The other two annotators in contrast were mostly skimming, not looking for deep comprehension. Superficial reading is unlikely to be sufficient in tasks where annotators are asked to compare the content quality in two summaries. Similarly, a person would be unable to make that judgment if they cannot understand what the text is about. The process was tedious, despite the fact that our human annotators were researchers with considerable experience in summarization. In light of these considerations, it is hard to imagine that it is ethical to crowdsource evaluations except for the news and Reddit domains. These are however the least representative of documents people may be reading for their work, where a summarizer can be helpful. Despite the difficulty of reading the summary text, on average for the entire test set the human evaluation scores are remarkably consistent. BART fine-tuned on mixed-domain data was evaluated in Round 1, as well as in Round 2. The first columns in Tables 5 and 6 are the average human ratings for the same summaries. The differences are minor, and all conclusions hold if the first columns in the two tables were swapped. \begin{table} \begin{tabular}{l c c c} \hline \hline & in-domain & CNN-DM & Mixed \\ \hline arXiv & 1 & 0 & 2 \\ BillSum & 0 & 0 & 2 \\ CNN & 0 & 0 & 8 \\ Gov & 1 & 0 & 1 \\ PubMed & 1 & 0 & 1 \\ TIFU & 2 & 3 & 4 \\ \hline All & 5 & 3 & 18 \\ \hline \hline \end{tabular} \end{table} Table 7: Number of test examples for which a BART summary was given an information recall score greater than that for the human summary by at least two annotators, indexed by domain and model. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Expert** & arXiv & BillSum & CNN & Gov & PubMed & TIFU \\ \hline **A** & 2.5 & 3.4 & 5.0 & 2.8 & 3.0 & 5.0 \\ **B** & 3.9 & 4.9 & 4.8 & 4.5 & 3.8 & 3.8 \\ **C** & 3.8 & 4.5 & 5.0 & 4.0 & 3.9 & 4.0 \\ \hline \hline \end{tabular} \end{table} Table 8: Average readability scores of human summaries by each human annotator. Conclusions We study the cross-domain robustness of neural summarizers. We find that models fine-tuned on only one domain suffer cross-domain deterioration of performance. We find that BART is the best pre-trained model for summarization. It is especially effective when fine-tuned on mixed-domain data. In the human evaluation, this summarizer is rated as producing better summaries than an in-domain summarizer and often produces summaries better than the human summary. This is not reflected in the automatic scores and will therefore not be captured by leaderboards. We also find that most existing datasets do not support efforts toward developing a customer-facing summarizer. The data is poorly formatted and hard to read, so the resulting summaries are unlikely to lead to a delightful customer experience and are hard to read in manual evaluation. Much like the Google team that deployed the auto-summarization feature, we conclude that high-quality, and heterogeneous, fine-tuning data will be necessary to develop such a system. ## 7 Limitations This work presents an expansive analysis of the cross-domain robustness of neural summarizers using automatic metrics and human evaluations. 
The test sets for summarization datasets selected for our analysis range from about 900 to 12,000 observations, making exhaustive manual evaluation infeasible. Instead, we elect to evaluate the first 250 observations from each dataset. While we believe this sample is sufficient to be representative of the whole dataset, we recognize that a larger-scale human evaluation using crowd-sourced workers may be beneficial. Our human evaluations are created with only three annotators. A larger-scale evaluation with a diverse set of crowd-sourced workers would also address this potential issue. In addition, annotators only compare machine-generated summaries with human ones when performing our human evaluations and do not work with the original passage. While comparing summaries with original passages may be ideal, some datasets' length and technical detail make this difficult, even with crowd-sourced workers. We work with only three neural summarizers and in one size per model. These summarizers are available in multiple sizes models, and other summarization models are available. We elected to forgo these because we are studying cross-domain performance in general rather than trying to explain how model-specific differences manifest themselves in performance. Lastly, we worked with only six publicly available summarization datasets and constructed the Mixu dataset using uniform sampling on each dataset. While we could have studied a larger number of datasets, we believe that the diverse nature of our selections yields a representative analysis.
2305.09554
Plasmonic detection of the parity anomaly in a two-dimensional Chern insulator
In this work, we present an analytical study of surface plasmon polaritons in a two-dimensional parity-anomaly Chern insulator. The connections between the bulk topology implied by the BHZ model and the dispersion relations of the surface plasmons have been revealed. Anisotropy has been considered in the calculation of the dispersion relations, which allows the permittivity perpendicular to the conductive plane to differ from the in-plane one. Two surface plasmon modes, each containing two branches of dispersion relations, have been found. The topologically non-trivial case gives quite different Hall conductivities compared with the trivial one, which leads to significant modifications of the dispersion curves or even the absence of a particular branch of the surface plasmons. Our investigations pave a possible way for the detection of the parity anomaly in a two-dimensional Chern insulator via plasmonic responses.
M. N. Chen, Yu Zhou
2023-05-16T15:52:21Z
http://arxiv.org/abs/2305.09554v2
# Plasmonic detection of the parity anomaly in a two-dimensional Chern insulator ###### Abstract In this paper, we present an analytic study on the surface plasmon polaritons in two-dimensional parity anomaly Chern insulators. The two-dimensional conductivity derived from the BHZ model are antisymmetric, based on which two surface plasmon modes each contains two branches of dispersions have been found. In the absence of parity anomaly, the Hall conductivities with positive and negative Dirac mass terms differ by a sign; two branches of each surface plasmon mode are exactly degenerate. However, the parity anomaly can lift such degeneracy and lead to significant modifications of these dispersion curves or even the occurrence of an extra branch of surface plasmons under particular condition. Our investigations pave a possible way for the detection of the parity anomaly in a two-dimensional Chern insulator via plasmonic responses. ## I Introduction Topological materials have attracted much attention in both theoretical and experimental aspects in recent years. As a typical class of topological materials, topological insulators (TIs) have exotic metallic surface (boundary) states protected by time-reversal symmetry, whose topological charge is identified as the \(\mathbb{Z}_{2}\) invariant [1]. Unlike TIs, the Chern insulators (CIs), or named as quantum anomalous Hall insulators, break the time-reversal symmetry, which belongs to the \(\mathbb{Z}\)-topological classification and the corresponding topological charge is the first Chern number. A spin-conserved TI may be viewed as two copies of CIs carrying opposite spin polarizations and counter propagating edge states, respectively [2; 3]. Both TIs and CIs have nontrivial responses to external electromagnetic fields. For TIs, we have \(j_{\mu}^{s}=\sigma_{xy}^{s}\epsilon_{\mu\nu\tau}\partial^{\nu}\Omega^{\tau}\) with \(\sigma_{xy}^{s}\) the spin-Hall conductivity and \(\Omega\) being a pure gauge [4; 5; 6; 7]; for CIs, it has form \(j_{\mu}=\sigma_{xy}\epsilon_{\mu\nu\tau}A^{\mu}\partial^{\nu}A^{\tau}\) with \(\sigma_{xy}\) the Hall conductivity and \(A^{\mu}\) the gauge fields. Therefore, nontrivial electromagnetic responses originate from topologically-nontrivial bulk band structures, which may lead to nontrivial collective excitations. Surface plasmons are collective oscillations of free electrons coupled with light existing at the metal-dielectric interface, of which the electric fields are tightly confined and decay exponentially away from the surface [8; 9]. The permittivities at two sides usually possess opposite signs, i.e. one is positive and the other is negative; otherwise, no dispersion relations can be found for these surface waves. After the discovery of graphene, researchers have realized that such one-atom thick material can support exceedingly strong surface plasmons that is detectable through, for example, scanning near-field infrared microscopy [10; 11; 12; 13; 14; 15; 16]. This can be understood that the conductivity of doped graphene is large enough to cause significant in-plane currents and charge oscillations under incident light pushing the corresponding Drude plasma frequency into the infrared region [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. Although the optical conductivity of graphene is isotropic, its imaginary part can be positive or negative depending on the Fermi level as well as the photon energy. 
With a positive imaginary part, graphene resembles a thin metallic film supporting transverse magnetic- (TM) polarized surface plasmons; however, with a negative imaginary part, it is more like a thin dielectric film and the surface plasmons are transverse electric- (TE) polarized [29]. Due to the isotropy, TM- and TE-polarized modes are decoupled and their dispersion relations can be found separately. Anisotropy can lead to the coupling of these two polarizations. For example, in phosphorene the electron masses are largely different along the zigzag and armchair directions due to its puckered structure. After being doped with electrons, phosphorene can become metallic supporting surface plasmons [30; 31; 32; 33]; the optical conductivities differ along the zigzag and armchair directions. The iso-frequency contour of the in-plane surface plasmons is in most cases elliptic. With proper electron doping, the conductivities along two directions can have opposite signs and the corresponding iso-frequency contour becomes a hyperbola. In this case, they are called hyperbolic surface plasmons [34]. In the calculation of dispersion relations, one must solve all the field components since the anisotropy usually mixes the two polarizations mentioned above. As for the systems with nonzero Hall conductivities such as CIs, the off-diagonal terms of the conductivity tensor induce currents orthogonal to the applied electric fields, which immediately leads to the situation where all the field components are interrelated and should be all considered simultaneously during the calculation of the surface plasmons [35; 36; 37; 38; 39; 40]. The dispersion relation strongly depends on the conductivity as well as the permittivity of the surrounding materials. Most 2D CIs are encapsulated with optically anisotropic dielectrics showing different permittivities parallel and perpendicular to the conductive surface. Such anisotropy can significantly modify the dispersion relations of the surface plasmons as well. In this paper, we intend to reveal the connections between the parity anomaly in a two-dimensional Chern insulator and the dispersion relations of the surface plasmons. Given by the symmetry of the BHZ model, two surface plasmon modes have been found, each of which contains two branches of dispersion relations. Two modes are respectively characterized by \(E_{z}=0\) and \(H_{z}=0\); the expressions of their dispersion relations indicate that two branches of each mode are exactly degenerate without parity anomaly. One arrives at the same dispersion curve with positive and negative Dirac mass terms. However, in the presence of the parity anomaly, such degeneracy would be lifted; the Hall conductivities in the topologically trivial and non-trivial situations no longer just differ by a sign and an extra branch of surface plasmons might be found under particular condition, as schematically shown in Fig.1. Our investigations pave a possible way for the detection of the parity anomaly in a two-dimensional Chern insulator via plasmonic responses. This paper has been organized as follows. In Sec.II, details regarding the BHZ model describing the two-dimensional Chern insulator and the calculations of the optical conductivities are given. The real and imaginary parts of both the longitudinal and Hall conductivities have been derived. In Sec.III, the expressions of the dispersion relations of the surface plasmons are presented based on the two dimensional conductivity tensor given by the BHZ model. 
Two modes each with two branches have been found. In Sec.IV, the dispersion relations or equivalently the effective indices of the surface plasmons have been numerically calculated for all the cases. In Sec.V, a conclusion has been given. ## II Model and optical conductivities Let us start from the minimal Hamiltonian for the Bernevig-Hughes-Zhang (BHZ) model, which can be written as [4; 5] \[\hat{H}=v\hbar(k_{x}\hat{\sigma}_{x}+k_{y}\hat{\sigma}_{y})+m_{\bf k}\hat{ \sigma}_{z}\, \tag{1}\] where \(v\) stands for the Fermi velocity, \(m_{\bf k}=mv^{2}-b\hbar^{2}k^{2}\) is the regularized Dirac mass term with \(k^{2}=k_{x}^{2}+k_{y}^{2}\), and \(\{\hat{\sigma}_{i}\}\) are Pauli matrices with \(i=x,y,z\). The Hamiltonian (1) is usually used to describe the Chern insulator, where we have assumed that the spin is fully polarized and thus the spin freedoms can be ignored. After straightforward diagonalization, one can obtain the eigenvalues: \[\epsilon_{\pm}(k)=\pm\sqrt{v^{2}\hbar^{2}k^{2}+m_{\bf k}^{2}}\, \tag{2}\] and the corresponding eigenvectors are \[|u_{-}\rangle=\begin{pmatrix}\sin\theta_{k}\mathrm{e}^{-\mathrm{i}\varphi_{k} }\\ -\cos\theta_{k}\end{pmatrix},\quad|u_{+}\rangle=\begin{pmatrix}\cos\theta_{k} \mathrm{e}^{-\mathrm{i}\varphi_{k}}\\ \sin\theta_{k}\end{pmatrix}, \tag{3}\] where \(\varphi_{k}=\arg(k_{x}+\mathrm{i}k_{y})\) and \(2\theta_{k}=\arccot(m_{\bf k}/v\hbar k)\). The optical conductivity tensor can be obtained by the standard Kubo formula [41] for the \(d\)-dimensional system, \[\sigma_{ij}(\omega)=-\frac{\mathrm{i}}{L^{d}}\frac{e^{2}}{\hbar}\sum_{n,m} \frac{[n_{\rm F}(\epsilon_{n})-n_{\rm F}(\epsilon_{m})]v^{i}_{nm}v^{j}_{mn}}{ (\epsilon_{n}-\epsilon_{m})(\epsilon_{n}-\epsilon_{m}+\hbar\omega)} \tag{4}\] where \(v^{i}_{nm}=\langle u_{n}|\frac{\partial\hat{H}}{\partial k_{i}}|u_{m}\rangle\) is the velocity operator in the \(i\)-th direction with \(i=1,2,\ldots,d\). The indices \(m,n=+,-\) represent the conduction band and valence band, respectively, \(n_{\rm F}(\epsilon_{n})=1/\big{(}1+\mathrm{e}^{\beta(\epsilon_{n}-\mu)}\big{)}\) is the Fermi-Dirac distribution function with \(\mu\) being the chemical potential, \(\beta=1/(k_{\rm B}T)\) with \(k_{\rm B}\) being the Boltzmann constant and \(T\) is the temperature, and \(L\) is the length of the system. For the model we have considered in this work, \(d=2\). In what follows, we will include the impurity scattering processes, therefore, the frequency \(\omega\) should be replaced by \(\omega+\mathrm{i}/2\tau\) with \(\tau\) the elastic scattering time. Substituting Eq.(3) into Eq.(4) and after doing some algebras, the expression for the optical conductivity becomes \[\begin{split}\sigma_{ij}(\omega)&=-\mathrm{i}e^{2} \hbar\int\frac{\mathrm{d}^{2}{\bf k}}{(2\pi)^{2}}\frac{n_{\rm F}(\epsilon_{+}) -n_{\rm F}(-\epsilon_{+})}{2\epsilon_{+}}\\ &\quad\times\bigg{[}v^{i}_{+-}v^{j}_{-+}\frac{\hbar\omega-2 \epsilon_{+}}{(\hbar\omega-2\epsilon_{+})^{2}+\hbar^{2}/4\tau^{2}}-\mathrm{ i}\pi v^{i}_{+-}v^{j}_{-+}\delta(\hbar\omega-2\epsilon_{+})+v^{i}_{-+}v^{j}_{+-} \frac{1}{2\epsilon_{+}+\hbar\omega}\bigg{]}\.\end{split} \tag{5}\] Using Eq.(5), we can obtain the real part for the longitudinal conductivity \(\sigma_{xx}\) at zero temperature as (in units of Figure 1: Schematic illustration of the detection of the parity anomaly in 2D CIs. An extra branch of the surface plasmons may occur under particular condition with parity anomaly. Left/right panel corresponds to the the case without/with parity anomaly. 
\(e^{2}/h\)) \[\text{Re}\,\sigma_{xx}(\omega)=\frac{\pi v^{2}}{8}\frac{1}{v^{2}-2b(mv^{2}-b\hbar^ {2}k^{2})}\bigg{[}1+\frac{4(mv^{2}+b\hbar^{2}k^{2})^{2}}{(\hbar\omega)^{2}} \bigg{]}\Theta(\hbar\omega-2|m|v^{2}) \tag{6}\] where \(k^{2}\) is solved via the equation \[\frac{(\hbar\omega)^{2}}{4}=v^{2}\hbar^{2}k^{2}+(mv^{2}-b\hbar^{2}k^{2})^{2}\, \tag{7}\] the Fermi energy \(\epsilon_{\text{F}}\) is set to zero (i.e. stays in the band gap and hence the intraband contribution to the conductivity is zero), and the imaginary part of \(\sigma_{xx}\) is given by \[\text{Im}\,\sigma_{xx}(\omega)=\frac{v^{2}}{4}\int\text{d}\epsilon_{+}\frac{1 }{\sqrt{(1-4bm)v^{4}+4b^{2}\epsilon_{+}^{2}}}\bigg{[}1+\frac{(mv^{2}+b\hbar^{ 2}k^{2})^{2}}{\epsilon_{+}^{2}}\bigg{]}\bigg{[}\frac{\hbar\omega-2\epsilon_{+ }}{(\hbar\omega-2\epsilon_{+})^{2}+\frac{\hbar^{2}}{4\tau^{2}}}+\frac{1}{2 \epsilon_{+}+\hbar\omega}\bigg{]} \tag{8}\] The real part of the Hall conductivity \(\sigma_{xy}\) can be derived as \[\text{Re}\,\sigma_{xy}(\omega)=\frac{v^{2}}{4b\hbar\omega\xi}\sum_{s=+,-}(1-4bm +s\xi)\text{arccoth}\bigg{[}\frac{2\frac{(b\hbar)^{2}k\omega}{v^{3}}\sqrt{1+ \big{(}\frac{mv}{\hbar k}-\frac{b\hbar k}{v}\big{)}^{2}}}{s(1-4bm)+(1-2bm+\frac {2b^{2}k^{2}\hbar^{2}}{v^{2}})\xi}\bigg{]}\bigg{|}_{0}^{\infty} \tag{9}\] and the imaginary part of \(\sigma_{xy}\) \[\text{Im}\,\sigma_{xy}(\omega)=-\frac{\pi v^{2}}{2\hbar\omega}\frac{mv^{2}+bk ^{2}\hbar^{2}}{v^{2}-2b(mv^{2}-b\hbar^{2}k^{2})}\Theta(\hbar\omega-2|m|v^{2}) \tag{10}\] where \(\xi=\sqrt{1-4bm+(b\hbar\omega/v^{2})^{2}}\). Using the relation Figure 2: Plots of conductivities (a) Re\(\sigma_{xy}\), (b) Im\(\sigma_{xy}\), (c) Re\(\sigma_{xx}\), and (d) Im\(\sigma_{xx}\) as functions of the photon energy \(\hbar\omega\) in units of \(e^{2}/h\). Solid and dashed curves correspond to the cases without and with parity anomaly, respectively. The conductivities with positive and negative Dirac mass terms are plotted in blue and red, respectively. The cut-off energy is \(\epsilon_{\text{c}}=4|m|v^{2}\) in (d). Other parameters are \(\hbar v=0.5\)eV, \(\hbar^{2}b=0.2\)eV\(\cdot\)Å\({}^{2}\). \(\mathrm{arccoth}(x)=[\ln((x+1)/x)-\ln((x-1)/x)]/2\), Eq.(9) becomes \[\mathrm{Re}\,\sigma_{xy}(\omega)= \frac{v^{2}}{8\xi b\hbar\omega}\times\] \[\left[2(1-4bm)\ln\left|\frac{b\hbar\omega/v^{2}+\xi}{b\hbar\omega/ v^{2}+\xi}\right|-\sum_{s=+,-}\!\!g_{s}(\omega)\right]\,, \tag{11}\] where we have defined \[g_{s}(\omega)= \big{(}1-4bm+s\xi\big{)}\times\] \[\ln\left|\frac{2b^{2}|m|\hbar\omega/v^{2}+(1-4bm)s+\xi(1-2bm)}{2b ^{2}|m|\hbar\omega/v^{2}-(1-4bm)s-\xi(1-2bm)}\right|\,. \tag{12}\] In the DC limit (\(\omega\to 0\)), the real part of \(\sigma_{xy}\) becomes \[\mathrm{Re}\,\sigma_{xy}=\frac{e^{2}}{h}C\, \tag{13}\] where \(C=[\mathrm{sgn}(m)+\mathrm{sgn}(b)]/2\) is the first Chern number, which characterizes the topological properties of the Hamiltonian (1), as expected. Equation (9) agrees with the results obtained in Ref.[42]. All of the conductivities as functions of the photon energy \(\hbar\omega\) are shown in Fig.2. One can find that there exist peaks at \(\hbar\omega=2mv^{2}\), which are due to the Rabi resonance. One may also see from Figs.2(c) and 2(d) that the sign of \(m\) does not affect the longitudinal conductivities qualitatively, while it can significantly modify the Hall conductivities. 
This can be understood from the fact that the topological nature of the Hall conductivity depends on the relative sign between the values of \(m\) and \(b\), which also determines the value of the Chern number \(C\). In the absence of the parity anomaly term (\(b=0\)), the conductivities reduce to \[\mathrm{Re}\,\sigma_{xy}(\omega)=\frac{e^{2}}{h}\frac{mv^{2}}{2\hbar\omega}\ln\left|\frac{2v^{2}|m|+\hbar\omega}{2v^{2}|m|-\hbar\omega}\right|\,, \tag{14}\] \[\mathrm{Im}\,\sigma_{xy}(\omega)=-\frac{e^{2}}{h}\frac{\pi}{2}\frac{mv^{2}}{\hbar\omega}\Theta(\hbar\omega-2|m|v^{2})\, \tag{15}\] \[\mathrm{Re}\,\sigma_{xx}=\frac{e^{2}}{h}\frac{\pi}{8}\bigg{(}1+\frac{4m^{2}v^{4}}{(\hbar\omega)^{2}}\bigg{)}\Theta(\hbar\omega-2|m|v^{2}) \tag{16}\] and \[\mathrm{Im}\,\sigma_{xx}(\omega) =\frac{e^{2}}{h}\frac{1}{4}\int\mathrm{d}\epsilon_{+}\bigg{(}1+\frac{m^{2}v^{4}}{\epsilon_{+}^{2}}\bigg{)}\] \[\times\left[\frac{\hbar\omega-2\epsilon_{+}}{(\hbar\omega-2\epsilon_{+})^{2}+\frac{\hbar^{2}}{4\tau^{2}}}+\frac{1}{2\epsilon_{+}+\hbar\omega}\right]\,. \tag{17}\] In fact, from the point of view of topological field theory, the Hamiltonian (1) reduces to a (2+1)-dimensional massive Dirac one when \(b=0\), whose mass term \(mv^{2}\hat{\sigma}_{z}\) plays an important role in the Chern-Simons field Lagrangian [43] \[\mathcal{L}_{\mathrm{CS}}=\frac{\mathrm{sgn}(m)}{2}\int\mathrm{d}^{2}x\mathrm{d}t\,\epsilon^{\mu\nu\tau}A_{\mu}\partial_{\nu}A_{\tau}\, \tag{18}\] where \(A_{\mu}\) is the gauge field with the space-time indices \(\mu=t,x,y\). The presence of the regulating term \(b\hbar^{2}k^{2}\) adds a term \(\mathrm{sgn}(b)/2\) to the coefficient of the Chern-Simons term (18), which leads to the parity anomaly, and Eq.(18) becomes \(\mathcal{L}_{\mathrm{CS}}=C\int\mathrm{d}^{2}x\mathrm{d}t\,\epsilon^{\mu\nu\tau}A_{\mu}\partial_{\nu}A_{\tau}\). In the following sections, we will consider the surface plasmonic responses in both the \(C=1\) and \(C=0\) cases, with and without parity anomaly. ## III Surface plasmons The BHZ model gives us a two-dimensional conductivity tensor, as shown above, which should support surface Figure 3: Propagation constants of the surface plasmon mode with \(H_{z}=0\) for the situations with (a) \(mv^{2}=0.05\) and (b) \(mv^{2}=-0.05\). Insets are enlarged plots in the low photon energy ranges. The real and imaginary parts are plotted in blue and red, respectively. Propagation constants corresponding to the cases with/without parity anomaly are plotted with dashed/solid curves. plasmons. Surface plasmons are coupled states of light and collective electron oscillations; one should work with Maxwell's equations at each side of the conductive layer and at the same time consider the current density within the layer given by the conductivity tensor. The dispersion relations of the surface plasmons can be derived based on the following two boundary conditions: (i) the tangential electric fields are continuous across the 2D CIs; (ii) the current densities satisfy Ampere's law, which causes a discontinuity of the tangential magnetic fields at the two sides. Due to the existence of the Hall conductivity, one cannot separate the surface plasmons into transverse electric (TE) and transverse magnetic (TM) polarized modes; in fact, they are coupled through \(\sigma_{xy}\). The Hall term has nothing to do with the ohmic losses but can seriously modify the dispersion relations of the surface plasmons, as shown below. 
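Before working out the surface-wave geometry, it may help to see the size of these conductivities numerically. The following Python sketch is not the authors' code; the photon-energy grid and the way the quantities are combined are assumptions made only for illustration. It evaluates the closed-form \(b=0\) expressions, Eqs. (14)-(16), in units of \(e^{2}/h\) and checks the DC limit of Eq. (13).

```python
import numpy as np

# Minimal numerical sketch of the b = 0 conductivities, Eqs. (14)-(16), in
# units of e^2/h.  The Dirac mass m*v^2 = 0.05 eV follows the value quoted
# for Fig. 3; the photon-energy grid is an arbitrary choice for illustration.
mv2 = 0.05                                   # m*v^2 in eV
hw = np.linspace(0.01, 0.5, 500)             # photon energy (eV)

def theta(x):
    """Heaviside step function."""
    return (x > 0).astype(float)

# Eq. (14): Re sigma_xy, logarithmically divergent at the threshold 2|m|v^2
re_sxy = mv2 / (2 * hw) * np.log(np.abs((2 * np.abs(mv2) + hw) /
                                        (2 * np.abs(mv2) - hw)))

# Eq. (15): Im sigma_xy, nonzero only above the interband threshold
im_sxy = -np.pi / 2 * mv2 / hw * theta(hw - 2 * np.abs(mv2))

# Eq. (16): Re sigma_xx
re_sxx = np.pi / 8 * (1 + 4 * mv2**2 / hw**2) * theta(hw - 2 * np.abs(mv2))

# Eq. (13): the DC Hall conductivity equals the Chern number C = [sgn(m)+sgn(b)]/2
def chern_number(m, b):
    return 0.5 * (np.sign(m) + np.sign(b))

print("C for m > 0, b > 0 (non-trivial):", chern_number(+1.0, +0.2))
print("C for m < 0, b > 0 (trivial):    ", chern_number(-1.0, +0.2))
print("Re sigma_xy(0.08 eV) =", re_sxy[np.argmin(np.abs(hw - 0.08))], "e^2/h")
```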
Considering the fact that the surrounding dielectrics at the two sides are often composed of layered materials, anisotropy is allowed, where the in-plane and out-of-plane permittivities are respectively denoted as \(\varepsilon_{\rm in}\) and \(\varepsilon_{\rm out}\). The wave number \({\bf k}=(k_{x},k_{y},k_{z})\) in Cartesian coordinates can be separated into an in-plane part \(\mathbf{\beta}=(k_{x},k_{y})\) and an out-of-plane part \(k_{z}\). Since we are interested in the surface waves, it is common to set \(k_{z}={\rm i}\gamma\) with \(\gamma\) being real. In the surrounding dielectrics, one can derive the following expression from Maxwell's equations considering the rotation symmetry of the system implied by the BHZ model: \[[(\gamma^{2}+k_{0}^{2}\varepsilon_{\rm in})(\beta^{2}-k_{0}^{2}\varepsilon_{\rm out})-\beta^{2}\gamma^{2}](\beta^{2}-\gamma^{2}-k_{0}^{2}\varepsilon_{\rm in})=0\, \tag{19}\] where \(\beta=|\mathbf{\beta}|\) is the magnitude of the in-plane wave number and \(k_{0}=\omega/c\) is the wave number in vacuum with \(\omega\) being the angular frequency. The above expression leads to two modes, with either \(\beta^{2}-\gamma^{2}-k_{0}^{2}\varepsilon_{\rm in}=0\) or \((\gamma^{2}+k_{0}^{2}\varepsilon_{\rm in})(\beta^{2}-k_{0}^{2}\varepsilon_{\rm out})=\beta^{2}\gamma^{2}\). Assuming \(\gamma>0\), we have \(\gamma=\sqrt{\beta^{2}-k_{0}^{2}\varepsilon_{\rm in}}\) or \(\gamma=\sqrt{(\beta^{2}/\varepsilon_{\rm out}-k_{0}^{2})\varepsilon_{\rm in}}\) depending on the mode we are considering. The first expression clearly shows that \(\varepsilon_{\rm out}\) is not involved, thus \(E_{z}=0\). Further calculations of the polarization indicate that \(\mathbf{\beta}\cdot{\bf E}=0\). The second expression shows that \(E_{x}\), \(E_{y}\) and \(E_{z}\) are all involved, while further calculations indicate that \(H_{z}=0\) and \(\mathbf{\beta}\cdot{\bf H}=0\). In searching for the surface plasmons, we have chosen to solve for \(E_{x}\) and \(E_{y}\). Firstly, it is assumed that \(E_{z}\neq 0\). The electric fields are tightly confined near the conductive surface, thus they are proportional to \({\rm e}^{{\rm i}k_{x}x}{\rm e}^{{\rm i}k_{y}y}{\rm e}^{-\gamma z}\), and \(E_{z}={\rm i}\,\mathbf{\beta}\cdot{\bf E}\,\varepsilon_{\rm in}/(\gamma\varepsilon_{\rm out})\). Consequently, \(H_{x}\) and \(H_{y}\) are given by \[\begin{split}-\frac{\varepsilon_{\mathrm{in}}}{\varepsilon_{\mathrm{out}}}\bigg{(}\frac{k_{x}k_{y}}{\gamma}E_{x}+\frac{k_{y}^{2}}{\gamma}E_{y}\bigg{)}+\gamma E_{y}&=\mathrm{i}\omega\mu_{0}H_{x}\\ -\gamma E_{x}+\frac{\varepsilon_{\mathrm{in}}}{\varepsilon_{\mathrm{out}}}\bigg{(}\frac{k_{x}^{2}}{\gamma}E_{x}+\frac{k_{x}k_{y}}{\gamma}E_{y}\bigg{)}&=\mathrm{i}\omega\mu_{0}H_{y}\end{split} \tag{20}\] where \(\mu_{0}\) is the permeability of vacuum. 
Based on the boundary conditions mentioned above, the equations regarding \(E_{x}\) and \(E_{y}\) can be derived which finally leads to the following expression for the dispersion relation of this surface plasmon mode \[\begin{split}&\big{(}\gamma^{(1)}+\gamma^{(2)}-k_{x}^{2}\Gamma- \mathrm{i}\omega\mu_{0}\sigma_{xx}\big{)}\big{(}\gamma^{(1)}+\gamma^{(2)}-k_{ y}^{2}\Gamma-\mathrm{i}\omega\mu_{0}\sigma_{yy}\big{)}\\ &=\big{(}k_{x}k_{y}\Gamma+\mathrm{i}\omega\mu_{0}\sigma_{xy} \big{)}\big{(}k_{x}k_{y}\Gamma+\mathrm{i}\omega\mu_{0}\sigma_{yx}\big{)}\,\end{split} \tag{21}\] where \(\Gamma=\varepsilon_{\mathrm{in}}^{(1)}/(\gamma^{(1)}\varepsilon_{\mathrm{out} }^{(1)})+\varepsilon_{\mathrm{in}}^{(2)}/(\gamma^{(2)}\varepsilon_{\mathrm{ out}}^{(2)})\). \(\gamma^{(i)}=\sqrt{\beta^{2}-k_{0}^{2}\varepsilon_{\mathrm{in}}^{(i)}}\), where the superscripts \(i=1,2\) denote the space above and below the conductive layer, respectively. The expression of the dispersion relation given above can be simply written as \[\begin{split}&\big{(}\gamma^{(1)}+\gamma^{(2)}-\mathrm{i} \omega\mu_{0}\sigma_{xx}\big{)}^{2}-(\omega\mu_{0}\sigma_{xy})^{2}\\ &=\big{(}\gamma^{(1)}+\gamma^{(2)}-\mathrm{i}\omega\mu_{0} \sigma_{xx}\big{)}\beta^{2}\Gamma\end{split} \tag{22}\] Solving \(\beta\) in the complex plane with \(\mathrm{Re}\,\beta\geq 0\) and \(\mathrm{Im}\,\beta\geq 0\) at each frequency one can find the dispersion curves of the surface plasmons. Below the gap, \(\sigma_{xx}\) is purely imaginary and \(\sigma_{xy}\) is purely real, hence one only needs to solve \(\beta\) on the real axis. The above equation contains two branches of dispersion relations, which can be written as \[\frac{\omega}{c}=\frac{Z_{0}(\mathrm{i}\sigma_{xx})[2(\gamma^{(1)}+\gamma^{(2 )})-\beta^{2}\Gamma]\pm\sqrt{Z_{0}^{2}\sigma_{xy}^{2}[2(\gamma^{(1)}+\gamma^{(2 )})-\beta^{2}\Gamma]^{2}+Z_{0}^{2}\beta^{4}\Gamma^{2}[(\mathrm{i}\sigma_{xx})^ {2}-\sigma_{xy}^{2}]}}{2Z_{0}^{2}[(\mathrm{i}\sigma_{xx})^{2}-\sigma_{xy}^{2}]} \tag{23}\] Figure 5: Effective indices as functions of the photon energy for the two branches of surface plasmons given by (a) Eq.(27) and (b) Eq.(29). The ratios \(\omega_{i}/\omega_{r}\) of the two branches of surface plasmons given by (c) Eq.(27) and (d) Eq.(29). where \(Z_{0}\) is the vacuum impedance. Equation (23) is one of our main results concerning the dispersion relations of the surface plasmons. As for the mode with \(E_{z}=0\), Eq.(20) is reduced to \[\gamma E_{y}=\mathrm{i}\omega\mu_{0}H_{x}\,\quad-\gamma E_{x}=\mathrm{i} \omega\mu_{0}H_{y}. \tag{24}\] Following the calculations as shown above, one can derive the expression of the dispersion relation of the surface plasmons as \[\begin{split}&\big{(}\gamma^{(1)}+\gamma^{(2)}-\mathrm{i}\omega \mu_{0}\sigma_{xx}\big{)}\big{(}\gamma^{(1)}+\gamma^{(2)}-\mathrm{i}\omega\mu _{0}\sigma_{yy}\big{)}\\ &+(\omega\mu_{0})^{2}\sigma_{xy}\sigma_{yx}=0\.\end{split} \tag{25}\] Considering the symmetry of the BHZ model, this expression can be further written as \[\big{(}\gamma^{(1)}+\gamma^{(2)}-\mathrm{i}\omega\mu_{0}\sigma_{xx}\big{)}^{2 }=(\omega\mu_{0}\sigma_{xy})^{2} \tag{26}\] which can be easily solved. We have found that it is more convenient working with complex angular frequency. 
Replacing \(\omega\) with \(\omega_{\mathrm{R}}-\mathrm{i}\omega_{\mathrm{I}}\), where \(\omega_{\mathrm{R}}\) and \(\omega_{\mathrm{I}}\) are respectively the real and imaginary parts, Eq.(26) can be broken down into two branches of dispersion relations written as follows: \[\begin{split}&\gamma^{(1)}+\gamma^{(2)}\\ &=\frac{[(\mathrm{Re}(\sigma_{xy})-\mathrm{Im}(\sigma_{xx}))^{2}+(\mathrm{Im}(\sigma_{xy})+\mathrm{Re}(\sigma_{xx}))^{2}]\omega_{\mathrm{R}}\mu_{0}}{\mathrm{Re}(\sigma_{xy})-\mathrm{Im}(\sigma_{xx})}\end{split} \tag{27}\] with \[\omega_{\mathrm{I}}=\frac{\mathrm{Im}(\sigma_{xy})+\mathrm{Re}(\sigma_{xx})}{\mathrm{Re}(\sigma_{xy})-\mathrm{Im}(\sigma_{xx})}\omega_{\mathrm{R}}\, \tag{28}\] and \[\begin{split}&\gamma^{(1)}+\gamma^{(2)}\\ &=-\frac{[(\mathrm{Re}(\sigma_{xy})+\mathrm{Im}(\sigma_{xx}))^{2}+(\mathrm{Im}(\sigma_{xy})-\mathrm{Re}(\sigma_{xx}))^{2}]\omega_{\mathrm{R}}\mu_{0}}{\mathrm{Re}(\sigma_{xy})+\mathrm{Im}(\sigma_{xx})}\end{split} \tag{29}\] with \[\omega_{\mathrm{I}}=\frac{\mathrm{Im}(\sigma_{xy})-\mathrm{Re}(\sigma_{xx})}{\mathrm{Re}(\sigma_{xy})+\mathrm{Im}(\sigma_{xx})}\omega_{\mathrm{R}}. \tag{30}\] These two solutions are related by time-reversal symmetry. Since \(\gamma^{(1)}+\gamma^{(2)}>0\), \(\mathrm{Re}(\sigma_{xy})-\mathrm{Im}(\sigma_{xx})>0\) and \(\mathrm{Re}(\sigma_{xy})+\mathrm{Im}(\sigma_{xx})<0\) must be satisfied in Eqs.(27) and (29), respectively. Also, \(\omega_{\mathrm{I}}\) must be positive. ## IV Results The Fermi energy lies within the gap, thus only interband transitions of electrons need to be considered during the calculations of the conductivities. Due to the absence of the intraband transitions, such an optically conductive surface resembles a dielectric thin film rather than a metallic one. This fact can also be seen from the imaginary parts of the longitudinal conductivities shown in Fig.2, which are negative, leading to positive effective permittivities. Without parity anomaly, the Hall conductivities with positive and negative Dirac mass terms differ just by a sign, as shown by the solid red and blue curves in Fig.2(a). In the presence of parity anomaly, the Hall conductivities are respectively zero and integer-valued in the topologically trivial and non-trivial situations. We have searched for the two surface plasmon modes mentioned in Sec.III, and for simplicity the surrounding dielectrics have been assumed to be isotropic with refractive index \(n=3\), which is reasonable and will not affect our main conclusions in this paper. The dispersion relations of the surface plasmon mode with \(H_{z}=0\) are shown in Fig.3, where we have solved the dispersion relations in the complex plane and plotted the propagation constant \(\beta\) as a function of the photon energy. Figs. 3(a) and 3(b) correspond to \(mv^{2}=0.05\) and \(mv^{2}=-0.05\), respectively. The real and imaginary parts are plotted in blue and red, respectively. Propagation constants corresponding to the cases with/without parity anomaly are plotted with dashed/solid curves. Insets are enlarged plots of the region below the transition threshold. Since the conductivities given by the BHZ model are not Drude-type, the dispersion curves shown in Fig.3 are either straight lines corresponding to the light line in the surrounding dielectrics or curves with relatively large imaginary parts. Straight lines indicate the absence of any surface-confined electromagnetic modes, and \(\mathrm{Im}(\beta)\gg\mathrm{Re}(\beta)\) simply means large energy dissipation. 
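As a concrete illustration of how the two branches are evaluated in practice, the sketch below works out the Eq. (27)/(28) branch at a single below-gap photon energy using the \(b=0\) conductivities and isotropic claddings with \(n=3\), as assumed in the text. The broadening \(\hbar/2\tau\) and the cutoff used for the integral of Eq. (17) are assumed values, so the printed numbers are only indicative and are not the paper's results.

```python
import numpy as np
from scipy.integrate import quad

# Sketch: evaluate the E_z = 0 branch of Eqs. (27)-(28) at one below-gap photon
# energy, using the b = 0 conductivities (Eqs. 14-17).  Broadening and cutoff
# are assumed values; claddings are isotropic with n = 3 on both sides.
hbar_eVs = 6.582119569e-16      # hbar in eV*s
c = 2.99792458e8                # m/s
mu0 = 4e-7 * np.pi              # H/m
e2_over_h = 3.874045e-5         # e^2/h in siemens
mv2 = 0.05                      # Dirac mass term m*v^2 (eV)
eta = 5e-4                      # hbar/(2*tau) in eV (assumed broadening)
n_clad = 3.0                    # refractive index of both surrounding dielectrics
hw = 0.08                       # photon energy (eV), below the gap 2|m|v^2 = 0.1 eV

# Eq. (14): Re sigma_xy (units of e^2/h); Im sigma_xy = Re sigma_xx = 0 below gap
re_sxy = mv2 / (2 * hw) * np.log(abs((2 * abs(mv2) + hw) / (2 * abs(mv2) - hw)))
im_sxy, re_sxx = 0.0, 0.0

# Eq. (17): Im sigma_xx from a numerical integral over the conduction band,
# cut off at eps_c = 4|m|v^2 as in Fig. 2(d)
def integrand(ep):
    return 0.25 * (1 + mv2**2 / ep**2) * (
        (hw - 2 * ep) / ((hw - 2 * ep)**2 + eta**2) + 1 / (2 * ep + hw))
im_sxx, _ = quad(integrand, abs(mv2), 4 * abs(mv2))

# Eq. (28): damping ratio; Eq. (27): gamma^(1) + gamma^(2) (sigma converted to SI)
D = (re_sxy - im_sxx) * e2_over_h            # must be > 0 for this branch
N = (im_sxy + re_sxx) * e2_over_h
omega_R = hw / hbar_eVs
omega_I = N / D * omega_R                    # zero below the interband threshold
gamma_sum = (D**2 + N**2) * omega_R * mu0 / D

# Identical claddings: gamma^(1) = gamma^(2), beta = sqrt(gamma^2 + eps*k0^2)
k0 = omega_R / c
beta = np.sqrt((gamma_sum / 2)**2 + n_clad**2 * k0**2)
print("condition Re(sxy) - Im(sxx) > 0:", D > 0)
print("omega_I / omega_R =", omega_I / omega_R)
print("effective index beta/k0 =", beta / k0)   # slightly above n_clad = 3
```

With these assumed numbers the effective index comes out only marginally above \(n=3\), in line with the weak guiding noted in the following discussion.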
For \(mv^{2}=0.05\), the parity anomaly seems to lower the curves of the propagation constants, as shown in Fig. 3(a), while for \(mv^{2}=-0.05\) the propagation constants are increased, as shown in Fig. 3(b). It is clear that the parity anomaly in a two-dimensional Chern insulator can seriously modify the dispersion curves of this surface plasmon mode. As for the surface plasmon mode with \(E_{z}=0\), two branches of dispersion relations have been found by numerically solving Eqs.(27) and (29). The denominators of these two equations are respectively plotted as functions of the photon energy in Figs.4(a) and 4(b). From a mathematical point of view, Eqs.(27) and (29) respectively require \(\mathrm{Re}(\sigma_{xy})-\mathrm{Im}(\sigma_{xx})>0\) and \(\mathrm{Re}(\sigma_{xy})+\mathrm{Im}(\sigma_{xx})<0\). Without parity anomaly, as shown by the solid curves in Figs. 4(a) and 4(b), \(mv^{2}=0.05\) and \(mv^{2}=-0.05\) actually give the same dispersion curves, i.e. they are degenerate. However, with parity anomaly, such degeneracy is lifted, as shown by the dashed blue and red curves. For the branch corresponding to Eq.(27), both \(mv^{2}=0.05\) and \(mv^{2}=-0.05\) can lead to physical solutions since \(\mathrm{Re}(\sigma_{xy})-\mathrm{Im}(\sigma_{xx})>0\); while for the branch corresponding to Eq.(29), only \(mv^{2}=-0.05\) can lead to meaningful results, where \(\mathrm{Re}(\sigma_{xy})+\mathrm{Im}(\sigma_{xx})<0\). The right-hand sides of Eqs.(27) and (29) are solely determined by the photon energy and are plotted in Figs.4(c) and 4(d). Without parity anomaly, \(\gamma^{(1)}+\gamma^{(2)}\) of the two branches corresponding to Eq.(27) (\(mv^{2}=0.05\)) and Eq.(29) (\(mv^{2}=-0.05\)) are the same, as indicated by the fact that the solid blue curve in Fig.4(c) and the solid red curve in Fig.4(d) coincide. With parity anomaly, as shown by the dashed curves in Figs.4(c) and 4(d), the branch corresponding to Eq.(27) has two solutions, where the dashed blue and red curves in Fig.4(c) respectively denote the situations with \(mv^{2}=0.05\) and \(mv^{2}=-0.05\); while the branch corresponding to Eq.(29) has only one solution, as indicated by the red dashed curve in Fig.4(d) denoting the situation with \(mv^{2}=-0.05\). These surface plasmons are weakly guided, based on the observation that the dispersion relations are quite close to the light line in the surrounding dielectric material. We have solved for the mode effective indices, defined as \(\beta/k_{0}\), as functions of the photon energy, which are plotted in Figs.5(a) and 5(b). Again, without parity anomaly, the two branches of this surface plasmon mode are degenerate and the solid blue and red curves in Figs. 5(a) and 5(b) are identical; such degeneracy can be lifted by introducing the parity anomaly term in the Hamiltonian. The effective index of the surface mode should be slightly larger than the refractive index of the surrounding dielectric material. We further plot \(\omega_{i}/\omega_{r}\) as functions of the photon energy in Figs.5(c) and 5(d). \(\omega_{i}/\omega_{r}>0\) must be satisfied since the energy must be damped during propagation. Large ratios mean that these surface plasmons possess significant losses, which can be ascribed to the relatively small conductivities given by the BHZ model. ## V Conclusion In this paper, we have investigated the relations between the parity anomaly in a two-dimensional Chern insulator and the dispersion relations of the surface plasmons. 
Given the symmetry of the model we have considered, two surface plasmon modes have been found. Each mode contains two branches of dispersion relations which are degenerate with regard to the sign of the Dirac mass term in the absence of the parity anomaly. Introducing the parity anomaly term into the Hamiltonian will lift this degeneracy and significantly modify the dispersions of the surface plasmons. In the presence of the parity anomaly, the band topology of the bulk states results in an integer-valued Hall conductivity. Despite the fact that the Hall conductivity is shifted by about \(e^{2}/2h\), it can cause significant changes and even lead to the occurrence of an extra branch of surface plasmons. Our findings have revealed the connections between the parity anomaly of two-dimensional materials and the dispersion relations of their surface plasmons, which might become valuable in, for example, the detection of the parity anomaly via plasmonic responses. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (Grant Nos. 11804070, 61805062, 11975088).
2310.13962
Character of electronic states in the transport gap of molecules on surfaces
We report on scanning tunneling microscopy (STM) topographs of individual metal phthalocyanines (MPc) on a thin salt (NaCl) film on a gold substrate, at tunneling energies within the molecule's electronic transport gap. Theoretical models of increasing complexity are discussed. The calculations for MPcs adsorbed on a thin NaCl layer on Au(111) demonstrate that the STM pattern rotates with the molecule's orientations - in excellent agreement with the experimental data. Thus, even the STM topography obtained for energies in the transport gap represent the structure of a one atom thick molecule. It is shown that the electronic states inside the transport gap can be rather accurately approximated by linear combinations of bound molecular orbitals (MOs). The gap states include not only the frontier orbitals but also surprisingly large contributions from energetically much lower MOs. These results will be essential for understanding processes, such as exciton creation, which can be induced by electrons tunneling through the transport gap of a molecule.
Abhishek Grewal, Christopher C. Leon, Klaus Kuhnke, Klaus Kern, Olle Gunnarsson
2023-10-21T10:29:51Z
http://arxiv.org/abs/2310.13962v1
# Character of electronic states in the transport gap of molecules on surfaces ###### Abstract We report on scanning tunneling microscopy (STM) topographs of individual metal phthalocyanines (MPc) on a thin salt (NaCl) film on a gold substrate, at tunneling energies within the molecule's electronic transport gap. Theoretical models of increasing complexity are discussed. The calculations for MPcs adsorbed on a thin NaCl layer on Au(111) demonstrate that the STM pattern rotates with the molecule's orientation - in excellent agreement with the experimental data. Thus, even the STM topography obtained for energies in the transport gap represents the structure of a one-atom-thick molecule. It is shown that the electronic states inside the transport gap can be rather accurately approximated by linear combinations of bound molecular orbitals (MOs). The gap states include not only the frontier orbitals but also surprisingly large contributions from energetically much lower MOs. These results will be essential for understanding processes, such as exciton creation, which can be induced by electrons tunneling through the transport gap of a molecule. ## Introduction Imaging molecules on surfaces with scanning tunneling microscopy (STM) often involves resonant tunneling through their electronic molecular orbitals (MOs). This process leads to an extremely enhanced tunneling rate, which facilitates high-resolution imaging of specifically chosen electronic orbitals. This mechanism is experimentally and theoretically well established [1; 2; 3; 4]. In contrast, off-resonant tunneling through the transport gap between two MOs can show interesting behaviors that venture far beyond this standard. This situation becomes particularly important when the two MOs are the highest occupied MO (HOMO) and the lowest unoccupied MO (LUMO), and tunneling through the energy gap is used, for example, to create singlet excitons for photon emission [5; 6; 7; 8; 9; 10]. Tunneling through a transport gap also occurs for devices with negative differential resistivity [11]. The importance of these fundamental processes leads us to examine the details of electron tunneling within the molecule's electronic transport gap. We focus on the electron propagation from the substrate to the molecule. The molecules studied here, platinum(II) and magnesium phthalocyanine (PtPc and MgPc), are one-atom-thick molecules, the thinnest possible. Nevertheless, we find experimentally that the STM topography image is decisively influenced by the molecule, even for tunneling at energies in the electronic transport gap where the molecule is non-conducting. However, the images in the entire transport gap differ strongly from the images of the HOMO and LUMO, even when these orbitals are just a few tens of meV away from the tunneling electron energy. We find, theoretically, that the gap images of the molecule can be described to a good approximation by linear combinations of bound MOs. Surprisingly, we find that MOs at energies far below the gap play an essential role in the gap images, explaining why they look substantially different from both the HOMO and the LUMO. In STM or STM-induced luminescence studies, the molecule is often deposited on a few layers of a large band gap insulator, such as NaCl [2; 5; 12]. 
The insulator is often considered as an uninteresting buffer, simply present to make the coupling between the molecule and the substrate weak, but is otherwise not very important. In a recent paper, we have shown that the conduction band of NaCl has mainly Cl character, like the valence band, contrary to common assumptions [10; 13]. The gap electrons then also have wave functions of mainly Cl character in the NaCl film, which influences the coupling of a molecule to NaCl. This important aspect is taken into account in our calculations. The coupling between the Au(111) substrate and the PtPc molecule via the NaCl film strongly favors specific PtPc MOs, which play an important role in the topography imaging at energies in the electronic transport gap of PtPc. We perform a set of calculations for models of increasing complexity. The purpose is to explain why gap images are strongly influenced by the MOs, even at energies in the transport gap. In particular, we consider an exactly solvable model of a substrate and an adsorbed molecule with a HOMO and a LUMO. We show that in the spatial range of the molecule, the wave function to a good approximation is a linear combination of the HOMO and LUMO, even for tunneling through the transport gap. This does not imply any violation of energy conservation whatsoever since the HOMO and LUMO are not eigenfunctions of the combined system - molecule with substrate. The calculations illustrate that the MOs provide a very good basis set. We then perform realistic model calculations for PtPc and MgPc adsorbed atop three layers of NaCl(100) on an Au(111) substrate, using all the MOs as an efficient basis set for expanding the wave function inside the molecule. The experimental topographic images of electrons tunneling through the gap are reproduced rather accurately. This effort revealed that absolute rotational orientation, adsorption site, and metal center are important, in this or der, to the gap images of these molecules. It is specifically dominated by orientation, spotlighting the importance of the MOs even for tunneling through the gap. ## Results and discussion ### Theoretical discussion of electron propagation through the transport gap To improve our understanding of gap states, we first consider a straightforward tight-binding model. As shown in the inset of Figure 1, we consider a molecule with just one orbital (HOMO), at the energy \(\varepsilon_{\rm H}<0\) eV, on a substrate, and its coupling to a metal tip. The voltage bias is \(U_{\rm bias}\leq 0\) eV. We include hopping matrix elements from each substrate level to the HOMO level and from the HOMO level to each tip level. We first calculate the states of the system without the tip. \(N(\varepsilon)\) shows the corresponding local density of states (DOS) on the HOMO. There is a narrow resonance around \(\varepsilon_{\rm H}\), but with tails extending to energies far away from \(\varepsilon_{\rm H}\). The hopping integrals between the molecule and the tip are turned on at some large negative time with a slow growth, \(e^{\varepsilon t}\), where we let \(\kappa\to 0^{*}\), to some very small positive value. The computed results are shown in Figure 1. For further details, such as the effects of introducing the Coulomb interaction, and how the hopping between the molecule and the tip is treated in first-order perturbation theory, see the Supporting Information (SI). 
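The essential point of this single-level model, a narrow resonance at \(\varepsilon_{\rm H}\) whose Lorentzian-like tails extend far into the gap, can be reproduced in a few lines. The Python sketch below uses illustrative parameters only (they are not the values behind Figure 1): it builds the HOMO Green's function for a semi-elliptic substrate band and prints the local DOS at the resonance and at an energy well inside the gap.

```python
import numpy as np

# Toy version of the single-level model described above: one HOMO at eps_H
# hybridized with a substrate whose DOS is semi-elliptic with half-width D.
# All parameter values are illustrative assumptions, not the paper's.
eps_H = -1.0     # HOMO energy (eV)
V = 0.05         # substrate-HOMO hopping (eV); small V -> narrow resonance
D = 5.0          # substrate half bandwidth (eV)

def g_substrate(eps):
    """Retarded local Green's function of a semi-elliptic band, valid for |eps| < D."""
    return 2.0 * (eps - 1j * np.sqrt(D**2 - eps**2)) / D**2

def homo_dos(eps):
    """Local DOS on the HOMO: -(1/pi) Im 1/(eps - eps_H - V^2 g(eps))."""
    sigma = V**2 * g_substrate(eps)          # hybridization self-energy
    return -np.imag(1.0 / (eps - eps_H - sigma)) / np.pi

gamma = -np.imag(V**2 * g_substrate(eps_H))  # resonance half-width
print("half-width Gamma            = %.2e eV" % gamma)
print("N(eps_H)   (on resonance)   = %.2e 1/eV" % homo_dos(eps_H))
print("N(-0.5 eV) (in the 'gap')   = %.2e 1/eV" % homo_dos(-0.5))
# Far from eps_H the tail is approximately Gamma / (pi (eps - eps_H)^2):
print("Lorentzian-tail estimate    = %.2e 1/eV" % (gamma / (np.pi * (-0.5 - eps_H)**2)))
```

The printed gap value is orders of magnitude below the on-resonance value but clearly nonzero, which is the slowly decaying \(1/(\varepsilon-\varepsilon_{\rm H})^{2}\) tail discussed next.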
For \(U_{\rm bias}<\varepsilon_{\rm H}\), there is a large current, as expected, since the tip Fermi energy is below the unperturbed HOMO level. The drop in current as the bias is made more negative reflects the semi-elliptic form of the DOS. However, even for \(\varepsilon_{\rm H}<U_{\rm bias}<0\), there is a non-vanishing current due to a small Lorentzian tail of the narrow resonance for \(\varepsilon>\varepsilon_{\rm H}\). Away from the resonance, the tail decays rather slowly as \(1/(\varepsilon-\varepsilon_{\rm H})^{2}\). We emphasize that this current, however small, is not negligible. Tunneling through the HOMO for \(\varepsilon_{\rm H}<U_{\rm bias}\) does not imply violation of energy conservation, since the HOMO is not an eigenstate of the Hamiltonian describing the combined substrate-HOMO system. In the following we show that this tunneling through the HOMO and, in particular, tunneling through lower-lying MOs as well as the LUMO is crucial for understanding the image of electrons tunneling through the molecule's transport gap. This set of considerations then provides a unified and consistent description of tunneling for all values of the bias voltage. We now discuss two essential assumptions in the model above. Firstly, the current flows entirely via the HOMO even for \(\varepsilon_{\rm H}<U_{\rm bias}\), since there is no direct hopping from the substrate to the tip. The tip then sees the lateral structure of the HOMO of the molecule, and it does not see the structure of the substrate. This is true even when \(\varepsilon_{\rm H}<U_{\rm bias}\) and the resulting hole has almost all the weight in the substrate. Secondly, we have assumed that there is only one orbital on the molecule. Including several orbitals would allow for interesting interference effects between the hopping through different MOs. To discuss these assumptions, we study a one-dimensional (\(1d\)) model which can be solved exactly, so that there is no need to introduce a basis set or make assumptions about hopping matrix elements. This model is shown schematically in the inset of Figure 2A. To the left is a substrate (\(-64\leq z\leq 0\)) with the surface at \(z=0\), and to the right is a simplified molecule with two nuclei at \(z=10\) and \(z=13\), with the spatial coordinate \(z\) in Bohr radii (\(a_{0}\)). The substrate has the Fermi energy at \(-5.2\) eV and a potential of \(-10.4\) eV. The nuclei of the molecule are described by two \(\delta\)-functions whose intensities set the HOMO and LUMO at \(-7.1\) eV and \(-3.5\) eV, respectively. The specific energies here were chosen to represent PtPc. For the free molecule, at energies \(\varepsilon<0\) that do not coincide with a bound state, the wave function grows exponentially unbounded on at least one side of the molecule, and it is, therefore, not a physically admissible solution. When the molecule sits in the presence of a solid, however, the wave function is allowed to be (and typically is) exponentially growing on the side facing the solid (and exponentially decaying as seen from the perspective of the solid), and therefore, energies in the gap are allowed. The presence of the solid completely changes the character of admissible wavefunctions, no matter how "weakly" it may perturb the system. Indeed, typically the presence of the molecule hugely enhances the wave function amplitudes rather than suppressing them. In fact, for the illustrative energy \(E^{*}\) in Figure 2B, its associated wave function even grows with \(z\) inside the molecule. 
When this wave function is compared with those associated with energies close to the HOMO or the LUMO, it is of course strongly reduced, as is seen in Figure 1. Although an example is not illustrated in Figure 2, it is possible, however, to choose the orbital energies such that the tunneling is reduced by presence of the molecule for some energies. Similar arguments apply to the NaCl film. For an infinite NaCl solid there are no physical states in the band gap. For the present system, however, Au states have exponentially decaying tails extending through the NaCl film and the molecule out to the tip, even for energies corresponding to the gaps of NaCl and PtPc. In the context of this model, the first assumption above implies that we only need to consider the indirect coupling between the substrate and the tip via the HOMO and LUMO. The blue curve in Figure 2B shows \[\Psi(z)-\sum_{i=1}^{2}\Phi_{i}(z)\int\Phi_{i}(z^{\prime})\Psi(z^{\prime}).dz^{\prime} \tag{1}\] The second term represents the expansion of the exact wave function using the bound solutions \(\Phi_{i}(z)\) of the free molecule. The blue curve in Figure 2B illustrates that in the range of the molecule, just a small remainder of \(\Psi(z)\) cannot be expanded in the bound solutions of the free molecule. Comparing the gray and red curves, we observe that the presence of the molecule and its attractive potential hugely enhances the wave function amplitude which results in an increased probability to find the electron at the position of the molecule. Given this result, that the molecule's presence hugely enhances the wave function amplitude, it is not surprising that the exact wave function primarily consists of a linear combination of the HOMO and LUMO in the range of \(z\) that overlaps with the molecule. This picture then justifies the first assumption that there is no direct hopping from the substrate to the tip; the tip just couples to the states on the molecule. In the example above, this means neglecting the coupling of the tip to the small residual (blue curve) in Figure 2B. In the model calculation to follow, we assume that this neglect remains a good approximation. In the full three-dimensional case, orbitals on the tip with a specific symmetry may additionally dominate the hopping. It is then essential how the important orbitals of the molecule and the underlying substrate couple to the tip orbitals, as such couplings can also strongly influence the tunneling from the molecule. Finally, we notice that in the three-dimensional case, there can also be direct tunneling of electrons from the Au/NaCl system to the tip without passing through the molecule. This contribution is neglected here. Further details of the model are presented in the SL. Concerning the second assumption above, that there is only one orbital on the molecule, we observe that in the range of the molecule, the wave function is now a linear combination of two functions. In the SI, we show that this results in a strong energy dependence of the wave function, which can easily be understood in terms of the coupling to the two MOs, in strong contrast to the simple model in Figure 2, which only has one MO. Although the direct coupling between the substrate and the tip may be minimal, it is indirectly affected by coupling via different Figure 2: (A) Wave function for an energy (\(-6.3\) eV) in the energy gap of the molecule as a function of spatial coordinate \(z\), which is in units of \(a_{0}\). 
The substrate is at the left (\(z<0\)) and a molecule with nuclei at \(z=10\) and \(z=13\) to the right (see insert). The HOMO and the LUMO are located at \(-7.1\) eV and \(-3.5\) eV, respectively. The amplitude of the wave function on the molecule is very small and barely visible on this scale. (B) Blow up of \(\Lambda\) in the region of the molecule. The figure shows how the amplitude of the wave function is hugely enhanced (factor 78 at \(z=17\)) when the molecule is included (red curve) compared with the case without a molecule (gray curve). It also shows the small fraction of the exact wave function which cannot be expanded in the HOMO and LUMO wave functions in the range (\(10\leq z\leq 13\)) of the molecule (blue curve). MOs. For the PtPc model studied below, 182 states on the molecule lead to a rich coupling to the substrate. We can now perform a much more realistic calculation for molecules on a gold substrate covered by a three-layer NaCl film. We study PtPc experimentally and theoretically and compare theoretical results for MgPc with experimental results by Miwa _et al._[14] We use a tight-binding model for Au, including 3\(d\), 4s, and 4\(p\) orbitals. For the NaCl film, we include the Na 3\(s\) and 3\(p\) orbitals. As discussed in our earlier work [13], the conduction band of NaCl is primarily of Cl character, in contrast to common belief that the conduction band is cationic in character. We thus include the Cl 3\(p\) and 4\(s\) levels and adjust the parameters so that the conduction band is mainly of Cl 4\(s\) character. The model for the Au-NaCl system is identical to the model in Ref. [13]. We then add a model of the adsorbed molecule, not included in the earlier work. The PtPc or MgPc molecule is described by including all 57 atoms. We use the empirical parameters of Harrison [15], but we have modified the parameters slightly, for e.g., to obtain the experimental PtPc HOMO-LUMO energy gap, including image effects, and to obtain the correct alignment of electronic structures in the sub systems. We did not tune parameters in order to improve the agreement with the experimental images. For details of the parameters employed, see the SI. The corresponding one-particle Hamiltonian is solved for energies in the gap of PtPc or MgPc. Even for tunneling through the transport gap, this approach allows for charge fluctuations on the molecule. To obtain STM images, we use the Slater [16] rules to construct orbitals on the atoms, which are combined with the eigenvectors of the Hamiltonian. For the interesting energy range, most of these wave functions are \(\pi\)-orbitals, i.e., mainly linear combinations of C and N 2\(p_{z}\) orbitals. For distances close to the molecular plane, these functions should provide a reasonable basis set. In what follows, we will focus on images at these distances, but also show images for a realistic tip-sample distance of 7 A as determined by point contact measurements [17]. For this purpose we introduce the approximations of Tersoff and Hamann [18; 19], making it sufficient to calculate the electron wave function at a fictitious center of an \(s\)-orbital on the tip. We assume that the potential in vacuum is constant inside a cylinder with radius 12 A and infinite outside. This radius is much larger than the distance from the cylinder axis to the outermost H atoms (7.6 A). It is then a good assumption to assume that the wave function of the tunneling electrons is localized within the cylinder. 
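To make the imaging step concrete, the following toy sketch assembles a Tersoff-Hamann-style constant-height map from atomic \(2p_{z}\) Slater-type orbitals weighted by eigenvector coefficients, in the spirit of the procedure described above. The atom positions, coefficients, and Slater exponent are invented placeholders, not the parameters of the PtPc model used in the paper.

```python
import numpy as np

# Toy Tersoff-Hamann image builder: the tunneling signal at the tip position r
# is taken proportional to sum_i |psi_i(r)|^2 over the states i in the probed
# energy window, with each psi_i expanded in atomic 2p_z Slater-type orbitals.
# Atom positions, coefficients and the Slater exponent are placeholders.
zeta = 1.6                      # Slater exponent (1/Angstrom), assumed

def slater_2pz(dx, dy, dz):
    """Unnormalized 2p_z Slater-type orbital centred at the atom."""
    r = np.sqrt(dx**2 + dy**2 + dz**2) + 1e-12
    return dz * np.exp(-zeta * r)

# A hypothetical planar "molecule": a square of four atoms in the z = 0 plane.
atoms = np.array([[ 2.0,  0.0, 0.0], [-2.0, 0.0, 0.0],
                  [ 0.0,  2.0, 0.0], [ 0.0, -2.0, 0.0]])
# Hypothetical eigenvector coefficients (atoms x states) in the energy window.
coeffs = np.array([[ 0.5,  0.5], [ 0.5, -0.5], [ 0.5,  0.5], [ 0.5, -0.5]])

# Evaluate the map on a grid at constant height above the molecular plane.
z_tip = 4.0                                    # Angstrom
x = np.linspace(-8, 8, 161)
y = np.linspace(-8, 8, 161)
X, Y = np.meshgrid(x, y, indexing="ij")

image = np.zeros_like(X)
for i_state in range(coeffs.shape[1]):
    psi = np.zeros_like(X)
    for (ax, ay, az), c in zip(atoms, coeffs[:, i_state]):
        psi += c * slater_2pz(X - ax, Y - ay, z_tip - az)
    image += psi**2                            # Tersoff-Hamann: |psi(r_tip)|^2

print("image shape:", image.shape, " max value:", image.max())
```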
The Schrodinger equation in vacuum is solved using a basis set. We use functions \(e^{\pi m\phi}\) to describe the angular dependence, where \(m\) is an integer. The radial behavior is described by integer Bessel functions and the behavior perpendicular to the surface by exponential functions, \(e^{-\kappa x}\), where \(\kappa\) is related to the energy of the electron. The contact of the PtPc molecule to the rest of the system means that the PtPc charge is not conserved. As a result, PtPc has charge fluctuations. Projecting out the NaCl states in perturbation theory and considering states within \(\pm 3\) eV of the Fermi energy, we obtain fluctuations out of neutral PtPc of the order of \(10^{-3}\). ### Comparison between theory and STM measurements Calculations are performed for models of PtPc (Figure 3A) and MgPc on a trilayer NaCl(100) film on Au(111). For details of the parameters used, see the SI. Computed results for PtPc at a distance of 1 A are shown in Figure 3B-G for different values of the energy \(\varepsilon\) inside the gap. The theoretical images exhibit four lobes on the isomide units of the molecule, similar to experimental observation and in stark contrast to maps of both HOMO and the two overlapping degenerate LUMOs, which have eight lobes Figure 3: (A) Left panel: Ball and stick model of the PtPc molecule in top view and side view. Right panel: Top view of the adsorption geometry of PtPc on the NaCl layer. (B-G) Theoretical PtPc images at the energies indicated in the lower right corner of each panel. The images are calculated 1 A outside the molecular plane of PtPc adsorbed on a three layer NaCl(100) film on Au(111). The crystallographic axes of NaCl are indicated in panel C. Images sizes (\(16\times 16\) Å\({}^{2}\)). [10; 20]. In Figure 3 the small changes in the theoretical results as a function of energy may be due to details of the calculation. They are not found in experiment, even when the tip-molecule distance is as small as stable scanning permits. Figure 4A, B shows calculations at a more realistic tip-sample distance [17] of \(z=7\) A in comparison with constant height STM maps exhibiting a satisfactory agreement with experiment. The energies studied in Figure 4B, D are close to the LUMO (\(\varepsilon=1.7\) eV) (see the density of state spectra in Figure 4E) and demonstrate the amazingly rapid change from the orbital patterns to the gap images. The difference in size of the computed images and the STM topography are ascribed to the limited resolution of the experiment, which arises from the finite tip curvature. Figure 5 shows theoretical (Figures 5A and B) and experimental [14] (Figures 5C and D) results for MgPc and H\({}_{2}\)Pc. The experimental data is obtained using a carbon monoxide molecule decorated tip which is known to improve STM spatial resolution [21]. MgPc differs from PtPc in three ways. An Mg atom has replaced the central Pt atom, the molecule is adsorbed on a Cl atom site of NaCl instead of a Na site, and the molecular orientations on the NaCl differ substantially. As shown by Miwa _et al._[14], MgPc is oriented approximately 53\({}^{\circ}\) off the (010) axis of the underlying NaCl lattice (Figure 5C) in contrast to PtPc which is aligned with this axis. H\({}_{2}\)Pc on the other hand has no central metal atom and adsorbs atop a Na atom, similar to PtPc. All images in Figures 3, 4 and 5 are oriented such that the horizontal and vertical directions of the image correspond to the NaCl (010) and (100) axes. 
The theoretical images of MgPc (Figures 5A and B) well reproduce the four lobe structure with a central minimum experimentally observed by Miwa _et al._[14] (Figure 5C). In particular, the image in Figure 5A is rotated relative to the PtPc image. To check if the essential difference between MgPc and PtPc/H\({}_{2}\)Pc lies in the differences in molecular adsorption geometry, MgPc is purposely rotated in the calculation (Figure 5B) so that its orientation is the same as observed for PtPc and H\({}_{2}\)Pc, that is, along the (010) axis of NaCl. Finally, the MgPc center is placed atop a Na atom. The result is shown in Figure 5B. The image is now rather similar to the image for PtPc. We conclude that the difference in the molecule's orientation is the most crucial difference between PtPc and MgPc. It is striking that the orientation of the molecule is so crucial, even for tunneling in the conduction gap. To understand the shape of the images in Figures 3 and 4, we expand the wave function of the combined system in terms of the PtPc MOs. Inside the molecule, we write \[|\Psi_{i}\rangle=\sum_{j=1}^{182}c_{j}^{(i)}|j\rangle, \tag{2}\] where the sum is over the 182 eigenfunctions of the free molecule. We then focus on "diagonal" contributions \[f_{j}=C\sum_{-0.8\leq\varepsilon_{i}\leq 1.2}|c_{j}^{(i)}|^{2}, \tag{3}\] where \(C\) is chosen such that \(\sum_{j}f_{j}=1\). Here \(f_{j}\) adds up the weight on the PtPc's \(j\)th MO over all states within the energy range \(-0.8\) eV \(\leq\varepsilon_{i}\leq 1.2\) eV. We observe, however, that "off-diagonal" contributions \(\left[c^{(i)}\right]_{j}^{*}c_{j}^{(i^{\prime})}\), \(i\neq i^{\prime}\), also give substantial contributions. Figure 6 shows \(f_{j}\) for important MOs of \(\pi\)-character. The \(\pi\)-states are labeled by the number of angular nodal planes \(n_{p}\), i.e., planes through the center of the molecule and perpendicular to molecule and surface plane. In cases where such nodal planes are not well defined, we have labeled the corre Figure 4: Comparison of theoretical (A, B) and experimental (C, D) images for PtPc atop three layers of NaCl on Au(111) at energies given in the upper right of each panel. The tip-molecule distance in the calculation is 7 Å. The experimental images are constant height STM maps. The length scale of all panels is given in the panels C and D (image sizes 23\(\times\)23 Å\({}^{2}\)). Linear color scales are used for both experimental and theoretical data. The difference in apparent molecular size is ascribed to the finite tip radius in experiment, which is not accounted for in the calculation. (E) Logarithm of the calculated differential conductance \(|d|/d\mathrm{V}|\) as a function of bias \(V\) at tip-molecule distance 7 Å (solid black line) in comparison to an experimental \(d\mathrm{I}/d\mathrm{V}\) spectrum (blue markers). For details see the SL. sponding state "Undef". States with a given value of \(n_{p}\) have different numbers of "radial" nodes assuring orthogonality. Although the margins of the energy range approach the HOMO (at -1.3 eV) on one side and the LUMO (at 1.7 eV) on the other side to within 0.5 eV, HOMO and LUMO contribute only 3% and 10%, respectively, to the total weight. Next, we have selectively summed up only contributions from states with a well-defined \(n_{p}\)-value. The results are shown in Table 1. Interestingly, three \(n_{p}=0\) states (37%) and six (including degeneracy) \(n_{p}=1\) states (16%) contribute almost half of the weight (53%). 
The \(n_{p}=2\) states contribute little (3%). States with less well-defined angular nodes, shown in Figure 6, contribute 5%. Many other states, have smaller contributions and are not shown in the figure. Together they account for 26%. The two-fold degenerate \(n_{p}=1\) states have leading contributions of the type \(\sin^{2}(m\phi)\) and \(\cos^{2}(m\phi)\) with \(m=n_{p}=1\), where \(\phi\) is the azimuthal angle. They are planar two-lobe structures that lie along the \(y\)- and \(x\)-axis, respectively. When combined they provide an approximately \(\phi\)-independent, isotropic contribution, just like the \(n_{p}=0\) states. The weak four-fold pattern is partly due to a \(n_{p}=2\) function, with the symmetry \((x^{2}-y^{2})^{2}\) and 4 lobes directed along the cardinal directions. However, there are also contributions to the image from products of functions with different values of \(n_{p}\), e.g., of the type \(\cos(n_{p}\phi)\cos(n_{p}^{\prime}\phi)\), where \(n_{p}=0\) and \(n_{p}^{\prime}=4\). Such functions are positive for multiples of 90\({}^{\circ}\) and thus add weight along the \(x\)- and \(y\)-axis but subtract weight along the diagonals. These images in the energy gap are very different from, e.g., the HOMO (\(n_{p}=4\)), which is described by \(m=4v\) (\(v=1,2,\ldots\)) states, and the LUMO, which is described by odd \(m\)-value states with a significant weight for \(m=3\) and \(m=5\) (for illustration see refs. [10] and [20]). Figure 6 illustrates that the Au-PtPc coupling via NaCl is far from trivial. NaCl provides a buffer between the Au substrate and the PtPc molecule, but it influences the coupling in non-uniform ways, favoring the coupling to specific MOs. This has implications for STM topography imaging in the PtPc transport gap. This study reveals important facts for imaging of molecules and beyond. It shows that there is access to energetically deep MOs that are inaccessible by conventional STM because voltages of several eV between tip \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(n_{p}=0\) & \(n_{p}=1\) & \(n_{p}=2\) & HOMO & LUMO & Other & Rest \\ \hline 0.37 & 0.16 & 0.03 & 0.03 & 0.10 & 0.05 & 0.26 \\ \hline \hline \end{tabular} \end{table} Table 1: Relative contributions (weights) to the gap states in the interval \(-0.8\leq\varepsilon\leq 1.2\) eV. Listed are the weights of some \(\pi\)-orbitals with different \(n_{p}\)-values, and the HOMO and LUMO orbitals. “Other” shows the contributions in Figure 6 from orbitals that were not assigned an \(n_{p}\) value, and “Rest” shows the many small contributions not shown in the figure. The weights represent the total weights of a given state, not just from the leading \(m\)-component. Figure 5: Comparison of theoretical (A, B) and experimental (C, D) images for MgPc and H\({}_{2}\)Pc on bilayer NaCl on Ag(111) at \(-0.55\) eV energy (32\(\times\)32 \(A^{2}\)). Panels (C, D) show constant current (\(I=1\) pA) topographic STM scans using raw data from a study by Miwa _et al._[14] The MgPc molecule in (C) is adsorbed on a Cl site and is rotated by 53\({}^{\circ}\) with respect to the NaCl (010) axis. Both effects are included in the respective calculation (A). H,Pc (as does PtPc) adsorbs on a Na site and is aligned with the NaCl lattice (D). This situation was included in the calculation for a tip distance of 7 Å above the molecule by orienting MgPc to the correct H\({}_{2}\)Pc adsorption geometry. 
The apparent molecule sizes of experiment and theory are closer to each other than in Figure 4 since the experiment by Miwa _et al._ used a CO-covered tip, known to provide sharper imaging. Finite tip size and the effect of CO are not accounted for in the calculation presented in the current work. Figure 6: Weights \(f_{j}\) (Eq. 3) of important MOs in PtPc summed over all states within the gap between \(\varepsilon=-0.8\) eV and 1.2 eV. The energy range over which the sum extends, is marked by the arrow “Measured”. Some \(\pi\) states are labeled by the number of nodal planes, \(n_{p}\). States with more complicated patterns are labeled “Undef”. The states represented in this figure contribute 74% of the total weight. The remaining contribution comes from many MOs each with rather small weights. The \(n_{p}\)+1 states are doubly degenerate, and the weights of the degenerate states were added. and sample may damage the molecule or the buffer layer. Moreover, theoretical models that exclude orbitals at energies far from the transport gap are unlikely to properly reproduce in-gap images. While it is a widespread assumption that the electron propagates through a molecule on an exponentially decaying tunneling trajectory, analogous to how an electron tunnels through the vacuum barrier between molecule and STM tip, the above analysis shows that propagation is via electronic states which are not exponentially decaying within the extension of the molecule. These results are particularly relevant for energy up-conversion light emission processes, like those of single isolated molecules studied with STM [8; 9; 10]. Figure 7A shows schematically the tunneling process discussed here for the case when the tip Fermi energy is in the transport gap. Figure 7B shows a similar process, which is the first step in an energy up-conversion process necessary to create a singlet exciton via creation of a lower energy triplet exciton. A spin down tip electron hops into the LUMO, different from a spin up electron hopping into the HOMO in Figure 7A. It illustrates how one can create a triplet exciton state without flipping a spin, which would otherwise require invoking the very weak spin-orbit coupling or some other weak mechanism. ## Conclusion Molecules adsorbed on thin insulating layers are supposed to behave as quasi-isolated quantum systems whose electronic structure can be studied by a scanning tunneling probe. Here we showed clear deviations from this simple picture by analyzing the electronic states in the energy gap between HOMO and LUMO and within the transport gap of the decoupling insulator. At these energies there exist no states of a perfectly isolated molecule, nor for an infinitely extended insulator. The proximity of molecule, insulator, and substrate result in a continuum of real electronic states within this gap that penetrate through insulator and molecule. Each of these states can be represented by a sum of many electronic eigenstates of the perfectly isolated system with significant weight on states even at energies far below the gap region. We have studied PtPc and MgPc (theoretically and experimentally) adsorbed on a NaCl film on an Au(111) substrate, focusing on the states in the transport gap of these molecules. Although PtPc and MgPc are only one atomic layer thick, the images are quite different from the image of the NaCl substrate. Replacing PtPc with MgPc primarily rotates the image by 53\({}^{\circ}\), corresponding directly to the rotation of the MgPc molecule. 
This shows that the image is mainly determined by the electronic structure of the adsorbed molecule, even when the tunneling is through the gap. We showed how the molecule's presence affects the tunneling current for three models of increasing complexity. It is then not surprising that the electronic states of the molecule strongly influence the shape of the image. We showed that the image is mainly determined by linear combinations of the bound states of the molecule. We find that for energies in the gap, not too close to the HOMO or LUMO, most of the contributions come from PtPc states at energies well below the HOMO, particularly from states with no or one angular node. Generally speaking, the NaCl film is often considered a buffer that allows access to the specific electronic [22] and topographic [23] properties of the substrate but ensures a sufficient electronic decoupling of an adsorbed molecule from the substrate. We find, however, that electronic states of an electrically insulating buffer influence the image of a molecule in its transport gap substantially. The character of the gap states is essential for more complex processes, for example, the emission of photons by a tunneling electron, where transport through the gap can play an important role. If we treat our molecular system in essence as a generic molecule adsorbed on an insulator on a metallic substrate, we arrive at the conclusion that we can potentially access information on energetic states that are nominally inaccessible through direct tunneling. This finding has very immediate and deep implications for imaging molecules on surfaces. ## Acknowledgements The authors thank H. Imada and Y. Kim for providing the experimental data for topographical images of MgPc and H\({}_{2}\)Pc shown in Figures 5C and D. ## Methods and experimental **Sample preparation** - The experiments were carried out with a home-built low-temperature STM operated at \(T=4.3\) K in an ultra-high vacuum (\(<10^{-11}\) mbar) [24]. The Au(111) single-crystal (\(>99.999\%\) purity) sample was cleaned by repeated cycles of Ar\({}^{+}\) ion sputtering at \(10^{-6}\) mbar range argon pressure with 600 eV acceleration energy and subsequent annealing to 873 K. The sample heating and cooling rate was about 1 K/s. NaCl was evaporated thermally from a Knudsen cell held at 900 K, with the Au(111) surface held at 300 K, to obtain defect-free, (100) Figure 7: (A) Tunneling process of the type discussed in this study. (B) Tunneling process leading to the creation of a triplet exciton, without invoking spin-orbit coupling to flip a spin. terminated NaCl islands. Next, PtPc was evaporated atop a liquid-nitrogen-cooled Au(111) substrate, partially covered with NaCl. The PtPc Knudsen cell was held at 710 K while the temperature of the Au(111) substrate was about 90 K. The sample was then transferred to the STM for characterization. An electrochemically etched gold wire [25] (99.95% purity) was used as a tip in the experiment. **STM measurements** - To ensure a metallic tip, the Au wire was further prepared by controlled tip indentations (\(\Delta z=1-3\) nm, \(V=50-100\) mV) in Au(111) until atomic resolution was obtained at the tunneling current set point: \(I_{T}=10\) pA, +1 V. This study always specifies bias voltages of the metal substrate with respect to the grounded tip.
2301.05841
Distributed Optimal Formation Control for an Uncertain Multiagent System in the Plane
In this paper, we present a distributed optimal multiagent control scheme for quadrotor formation tracking under localization errors. Our control architecture is based on a leader-follower approach, where a single leader quadrotor tracks a desired trajectory while the followers maintain their relative positions in a triangular formation. We begin by modeling the quadrotors as particles in the YZ-plane evolving under dynamics with uncertain state information. Next, by formulating the formation tracking task as an optimization problem -- with a constraint-augmented Lagrangian subject to dynamic constraints -- we solve for the control law that leads to an optimal solution in the control and trajectory error cost-minimizing sense. Results from numerical simulations show that for the planar quadrotor model considered -- with uncertainty in sensor measurements modeled as Gaussian noise -- the resulting optimal control is able to drive each agent to achieve the desired global objective: leader trajectory tracking with formation maintenance. Finally, we evaluate the performance of the control law using the tracking and formation errors of the multiagent system.
Clinton Enwerem, John Baras, Danilo Romero
2023-01-14T07:26:18Z
http://arxiv.org/abs/2301.05841v2
# Distributed Optimal Formation Control for an Uncertain Multiagent System in the Plane ###### Abstract In this paper, we present a distributed optimal multiagent control scheme for quadrotor formation tracking under localization errors. Our control architecture is based on a leader-follower approach, where a single leader quadrotor tracks a desired trajectory while the followers maintain their relative positions in a triangular formation. We begin by modeling the quadrotors as particles in the YZ-plane evolving under dynamics with uncertain state information. Next, by formulating the formation tracking task as an optimization problem -- with a constraint-augmented Lagrangian subject to dynamic constraints -- we solve for the control law that leads to an optimal solution in the control and trajectory error cost-minimizing sense. Results from numerical simulations show that for the planar quadrotor model considered -- with uncertainty in sensor measurements modeled as Gaussian noise -- the resulting optimal control is able to drive each agent to achieve the desired global objective: leader trajectory tracking with formation maintenance. Finally, we evaluate the performance of the control law using the tracking and formation errors of the multiagent system. multiagent systems, unmanned aerial vehicles, swarm coordination, formation control, optimal control. ## I Introduction The task of formation control is central to many problems in multiagent coordination and cooperative control, as it is usually the first problem one typically has to solve to achieve some collective objective with multiple agents. In the standard formation control problem, it is usually of interest to control a group of agents -- so that they converge to unique terminal states and with the goal of attaining a desired geometric pattern -- to facilitate a specific task. Such a control objective finds direct application in several areas such as reconnaissance, aerial coverage and monitoring, mobile target tracking, and in mobile communication network maintenance, to name a handful. In problems involving formation control, the prevailing assumptions are usually that all the agents have either the same forward velocity [1, 2] or angular velocity [3], and that information about the state of each agent is available to its neighbors. The agents obtain this state information from either a central station broadcasting to all agents or from a more complex distributed network topology that can be fixed, stochastic, or even have intrinsic dynamics [4]. Conventionally, the formation control problem takes one of two broad forms: group reference formation control and non-group reference formation control [4]. Group-reference formation control, also known as formation tracking, is the case where the agents move in formation while tracking a reference trajectory or _group reference_. In non-group reference formation control on the other hand, the agents are tasked with maintaining a specific geometric shape without following any trajectory setpoint. Unsurprisingly, much of the research on formation control is centered around the more challenging problem of formation tracking, and several methods have been proposed (see [5] for a detailed survey on the topic). There is the well-researched leader-follower paradigm where one agent is taken as the leader and the other agents, as followers, that must track the leader's motion while maintaining some pre-specified distance from themselves and the leader. 
Defining rules that govern the evolution of these inter-agent distances thus leads to the desired formation, and by varying the rules, a new formation results. Simultaneous tracking under this formation is then achieved by specifying the desired trajectory as the leader's path setpoint. To effectively track the leader, follower agents require sufficiently accurate estimates of the leader's pose in the inertial frame, which can be affected by noisy sensor measurements, exogenous disturbances from the environment, such as wind or downwash from nearby agents -- in the case where the agents are aerial vehicles -- or even uncertainty in the communication network from delays and packet drops. Thus, it is often the case that the multiagent system (MAS) will fail to track the reference trajectory while keeping formation, or deviate from the desired formation altogether, causing unintended and even unsafe effects [6]. Furthermore, the disturbances themselves may be difficult, computationally expensive, or impossible to estimate, making formation tracking under uncertainty both a safety-critical requirement and a nontrivial problem. Several studies have approached the formation tracking problem from an optimal control viewpoint. One of the earliest efforts at formulating the tracking with formation maintenance task as an optimal control problem was presented in [7]. Here, using an approach derived from the Riccati equation, the authors designed a distributed optimal formation control law -- for multiple UAVs with linear models -- by minimizing a non-quadratic cost function. The optimal control formulation was given here, however, without any consideration of the pairwise distances between agents. Following standard thinking based on Pontryagin's Minimum Principle (PMP), the authors in [8] presented an optimal formation control approach by minimizing the control energy of the system, with the agents evolving under perfect-state dynamic models. More recently, an identifier-critic-actor reinforcement learning-based method was employed in [9] to select the optimal control policy for an MAS comprising agents with unknown and adaptively-identified nonlinear dynamics. In our work, we study the problem of formation tracking under localization errors where the leader in the MAS is required to track a sinusoidal reference. Simultaneously, the followers are required to keep their assigned planar positions with respect to the leader and to one another as defined by a triangular formation rule. In contrast to the aforementioned research articles, our work focuses on designing optimal formation tracking laws for a specific case where the agents are modeled as quadrotors in the plane under uncertainty (from sensor noise). We also formulate the formation tracking task as a dynamic optimization problem with a constraint-augmented Lagrangian and solve it using optimization software tools, as opposed to traditional analytical optimal control methods like PMP or the Riccati equation. ### _Contributions_ Our contributions are as follows: 1. Application of optimal control theory to a uniquely-formulated multiagent formation tracking problem. 2. Simulative validation of the effectiveness of the optimal control law in both the nominal setting and the case with Gaussian noise in state measurements. In what follows, we introduce the notation used in this work and discuss the setting under which we study the formation tracking problem (see Section II). 
Next, in Section III, we provide details about the planar quadrotor model under consideration. Section IV puts forward the optimal control component of our work. Following that, in Section V, we discuss motivations for electing a triangular formation as the reference formation in our work, along with a brief description of the properties of this desired formation. The simulation setup is provided in Section VI, with key simulation results following in Section VII. Finally, we conclude the paper with recommendations for future research in Section VIII. ## II Preliminaries We pose the formation tracking problem, as considered in this article, under the following assumptions: 1. All agents are homogeneous, i.e., they are identical, hence (iii) follows. 2. All but one (randomly-chosen) agent (the leader) belong to the follower group; information about the leader's state is available to all the follower agents through a common communication network shared by all agents. 3. The model of each agent can be approximated by a linear time-invariant continuous-time model. See Section III for a description of the model. 4. Each agent's roll angle - and thus, rate - is approximately zero, i.e., the agent moves to its position in the formation by maintaining a near-hover state. 5. Each follower agent is driven independently to execute the local task of keeping its pre-assigned position in the inertial frame and also to simultaneously achieve the collective task of maintaining a desired group formation with the other agents. This assumption implies that there are no adversarial agents within the group. 6. The uncertainty in the MAS is only due to localization errors from the state estimation module (see Figure 1), hence the agents' states are perturbed by sensor noise, and are thus taken to be imperfect. Effects from external disturbances such as wind gust and downwash are neglected. We denote the \(i^{\text{th}}\) agent as \(a_{i}\in\mathcal{A}\), where \(\mathcal{A}\) is the set of all agents. \(a_{\mathcal{L}}\in\mathcal{A}\) denotes the leader agent, while the follower agents are in the set \(\mathcal{A}\setminus\{a_{\mathcal{L}}\}\). Additionally, while it is possible to segment \(\mathcal{A}\) into a finite number of leader-follower subsets (e.g., in the multi-leader case [10]), we have assumed that there is only one leader (see assumption (ii)) and that all other agents are followers within any optimization horizon. A few other assumptions will be introduced in Fig. 1: Block Diagram of our proposed control architecture. later sections as we specify the notation required for their definition. However, with the above setting, we can now present the formation tracking problem as follows: Given \(N\) agents in total, \(N-1\) followers must keep their positions in the formation while the randomly-selected leader tracks a particular trajectory in space, with possibly inaccurate state information from sensor measurements. Essentially, we require that the group formation be preserved, with the least possible formation error, while the leader agent tracks a specified trajectory. 
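As a toy illustration of this setup, one leader can be drawn at random from a set of homogeneous agents and the rest assigned to the follower set. The sketch below is purely illustrative; the agent names, the value of \(N\), and the random seed are not taken from the paper.

```python
import numpy as np

# Toy sketch of the Section II setup: N homogeneous agents, one randomly chosen
# leader a_L, and the remaining agents in the follower set A \ {a_L}.
rng = np.random.default_rng(0)
N = 3
agents = [f"a_{i}" for i in range(1, N + 1)]
leader = agents[rng.integers(N)]
followers = [a for a in agents if a != leader]
print("leader:", leader, "| followers:", followers)
```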
## III Planar Quadrotor Model We model the agents as quadrotors in the plane (see Figure 2), governed by the dynamics of a planar quadrotor linearized at the equilibrium (hover) state: \[\ddot{y}_{i} =-g\phi_{i} \tag{1a}\] \[\ddot{z}_{i} =-g+\frac{u_{1_{i}}}{m}\] (1b) \[\ddot{\phi}_{i} =\frac{u_{2_{i}}}{I_{xx}}, \tag{1c}\] where \(m\) is the mass of each agent, \(g\) is the gravitational acceleration, and \(I_{xx}\) is the \(x\) component of the (diagonal) inertia matrix. To simulate sensor noise, we introduce White Gaussian Noise (WGN) terms to the formulation in (1) to get: \[\ddot{y}_{i} =-g\phi_{i}+w_{1} \tag{2a}\] \[\ddot{z}_{i} =-g+\frac{u_{1_{i}}}{m}+w_{2}\] (2b) \[\ddot{\phi}_{i} =\frac{u_{2_{i}}}{I_{xx}}+w_{3}. \tag{2c}\] We can now write the planar quadrotor model in state-space form as: \[\dot{\mathbf{x}}_{i}=f_{i}(\mathbf{x}_{i},\mathbf{u}_{i},\mathbf{w})=\mathbf{A}\mathbf{x}_{i}+\mathbf{B}\mathbf{u}_{i}+\mathbf{G}_{c}g+\mathbf{K}\mathbf{w}, \tag{3}\] where \(\mathbf{A}\) and \(\mathbf{B}\) are the plant and input matrices of the state-space model with appropriate dimensions, respectively, with \(\text{det}(\mathbf{A})\neq 0\). \(\mathbf{G}_{c}\) is the vector \(\left[\begin{smallmatrix}0&0&0&-1&0&0\end{smallmatrix}\right]^{T}\), which accounts for gravity compensation in the \(z_{\mathcal{O}}\)-direction. \(\mathbf{K}\in\mathbb{R}^{6\times 6}\) is the noise gain matrix while \(\mathbf{w}\in\mathbb{R}^{6}\) is the vector \(\left[\begin{smallmatrix}0&w_{1}&0&w_{2}&0&w_{3}\end{smallmatrix}\right]^{T}\), where each \(w_{j}\) (\(j=[1,2,3]\)) follows a Gaussian distribution with mean \(\mu\in\mathbb{R}\) and standard deviation \(\sigma\in\mathbb{R}\). Table I, partly adapted from [11], lists the discussed parameters for the simulated planar quadrotor. ## IV Optimal Quadrotor Formation Control To achieve the formation tracking task as set forth in Section II, we solve the following initial-value, finite-horizon optimal control problem (FHOCP) for the \(i^{\text{th}}\) agent's control input, \(\mathbf{u}_{i};\ i=[1,2,\ldots,N]\), on the interval \(\tau=[0,T]\): \[\min_{\mathbf{u}_{i}} J_{i}\] subject to: \[\dot{\mathbf{x}}_{i}(\tau)=f_{i}(\mathbf{x}_{i}(\tau),\mathbf{u}_{i}(\tau),\mathbf{w}) \tag{4a}\] \[\mathbf{x}_{i}(0)=\mathbf{x}_{i}^{0},\] (4b) \[\left|\left|\mathbf{\Gamma}_{i}(\tau)-\mathbf{\Gamma}_{j}(\tau)\right|\right|_{2}=d_{ij}^{r};\ i\neq j\] (4c) \[|u_{1_{i}}|\leq u_{1_{\text{max}}};\quad|u_{2_{i}}|\leq u_{2_{\text{max}}}. \tag{4d}\] Here, \(J_{i}\) is the objective for the \(i^{\text{th}}\) agent equal to the total expectation of the trajectory error, control, and Mayer costs defined as: \[\mathbb{E}\Bigg{[}\int_{\tau=0}^{T}L_{i}(\mathbf{\Gamma}_{i}[\tau],\mathbf{u}_{i}[\tau])d\tau+h(\mathbf{x}_{i}(T))\Bigg{]}, \tag{5}\] where \(L_{i}(\mathbf{\Gamma}_{i}[\tau],\mathbf{u}_{i}[\tau]):\mathbb{R}^{2}\times\mathbb{R}^{2}\mapsto\mathbb{R}\) is the Lagrangian defined as follows (the \(\tau\) argument has been omitted for brevity): \[\mathbf{u}_{i}^{T}\mathbf{R}_{i}\mathbf{u}_{i}+(\mathbf{\Gamma}_{i}-\mathbf{\Gamma}_{i}{}^{r})^{T}\mathbf{Q}_{i}(\mathbf{\Gamma}_{i}-\mathbf{\Gamma}_{i}{}^{r}), \tag{6}\] and \(h:\mathbb{R}^{6}\mapsto\mathbb{R}\) is the terminal (Mayer) cost for the \(i^{\text{th}}\) optimal control problem, given as: \[h(\mathbf{x}_{i}(T))=\mathbf{x}_{i}{}^{T}(T)\mathbf{P}_{i}\mathbf{x}_{i}(T). 
\tag{7}\] In (5), \(\mathbb{E}\) is the expectation operator -- defined in terms of the instantaneous probabilities \(p(s=s(*(\tau)))\) -- as: \[\mathbb{E}[s(*)]=\int_{\tau=0}^{T}s(*(\tau))p(s)d\tau, \tag{8}\] where \(*\) here represents a generic time-dependent argument of \(s\), a generic function. In the preceding equations, we denote the set of admissible control laws or policies as \(\mathcal{U}\subseteq\mathbb{R}^{2}\). \(T\in\mathbb{R}\) is the time horizon for the optimal control problem, and \(\mathbf{x}_{i}=\left[\begin{smallmatrix}y_{i}&\dot{y}_{i}&z_{i}&\dot{z}_{i}&\phi_{i}&\dot{\phi}_{i}\end{smallmatrix}\right]^{T}\in\mathbb{R}^{6}\) is the state of the \(i^{\text{th}}\) agent. \(y_{i}\) and \(z_{i}\) are the respective positions of the \(i^{\text{th}}\) agent along the \(Y\) and \(Z\) inertial axes, while \(\phi_{i}\) is its roll angle. The dotted variables in \(\mathbf{x}_{i}\) represent the corresponding linear (\(y\) and \(z\)) and roll rates. \(\mathbf{x}_{i}{}^{r}\in\mathbb{R}^{6}\) is the \(i^{\text{th}}\) agent's reference state, \(\mathbf{u}_{i}=\left[\begin{smallmatrix}u_{1_{i}}&u_{2_{i}}\end{smallmatrix}\right]^{T}\in\mathcal{U}\) is the \(i^{\text{th}}\) control, with \(u_{1_{i}}\) and \(u_{2_{i}}\) respectively equal to the effective gravity-opposing force produced by the propellers of the \(i^{\text{th}}\) agent and the torque about the suppressed inertial \(X\) axis. \(\mathbf{\Gamma}_{i}\) is the \(i^{\text{th}}\) agent's trajectory equal to the vector \(\left[\begin{smallmatrix}y_{i}&z_{i}\end{smallmatrix}\right]^{T}\in\mathbb{R}^{2}\) while \(d_{ij}^{r}\in\mathbb{R}_{+}\) is the prescribed inter-agent distance, which can be thought of as representing the limited communication range between agents or as a simple inter-agent proximity constraint for collision avoidance. \(\mathbf{x}_{i}^{0}\) is the initial state of the \(i^{\text{th}}\) agent. \(\mathbf{P}_{i}\in\mathbb{R}^{6\times 6}\) is the weight matrix of the \(i^{\text{th}}\) terminal cost, \(\mathbf{R}_{i}\in\mathbb{R}^{2\times 2}\) is the weight matrix corresponding to the control cost, and \(\mathbf{Q}_{i}\in\mathbb{R}^{2\times 2}\) is the weight matrix for the cost corresponding to the \(i^{\text{th}}\) trajectory error \((\mathbf{\Gamma}_{i}-\mathbf{\Gamma}_{i}{}^{r})\), with the \(i^{\text{th}}\) time-varying trajectory \(\mathbf{\Gamma}_{i}{}^{r}(\tau)\) as reference. \(\mathbf{P}_{i}\) and \(\mathbf{R}_{i}\) are taken to be positive definite (_p.d._) matrices, while \(\mathbf{Q}_{i}\) is chosen as follows: \[\mathbf{Q}_{i}\ is\ \begin{cases}p.d.,&\text{if }a_{i}=a_{\mathcal{L}}\\ 0\in\mathbb{R}^{2\times 2},&\text{otherwise.}\end{cases}\] Since the desired position \(\left[\begin{smallmatrix}y&z\end{smallmatrix}\right]^{T}\) in the \(YZ\)-plane encodes the trajectory of the \(i^{\text{th}}\) agent, the choice of Lagrangian in (6) - with \(\mathbf{Q}_{i}\) as defined - ensures that the leader tracks a specific trajectory determined by a high-level trajectory planner (see Figure 1), while the other agents maintain their position in the formation. We define this desired formation by specifying rules that guide the inter-agent distances between the leader and follower agents and between the follower agents themselves (see Section V). We also set an upper bound on the magnitude of the control signals for each agent (\(u_{1_{\text{max}}}\) and \(u_{2_{\text{max}}}\)), which is standard in practice. 
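A brief sketch of the model of Eq. (3) makes the structure above concrete. The code assembles \(\mathbf{A}\), \(\mathbf{B}\), and \(\mathbf{G}_{c}\) for the state ordering \([y,\dot{y},z,\dot{z},\phi,\dot{\phi}]^{T}\) and checks the hover condition; the numerical values of \(m\), \(g\), and \(I_{xx}\) are placeholders, not the entries of Table I.

```python
import numpy as np

# Sketch of the linearized planar-quadrotor model of Eq. (3), state ordered as
# x = [y, ydot, z, zdot, phi, phidot]^T. Parameter values are placeholders.
m, g, I_xx = 0.030, 9.81, 1.43e-5

A = np.zeros((6, 6))
A[0, 1] = 1.0            # d/dt y    = ydot
A[1, 4] = -g             # d/dt ydot = -g * phi        (Eq. 1a)
A[2, 3] = 1.0            # d/dt z    = zdot
A[4, 5] = 1.0            # d/dt phi  = phidot

B = np.zeros((6, 2))
B[3, 0] = 1.0 / m        # d/dt zdot   gets u1 / m     (Eq. 1b)
B[5, 1] = 1.0 / I_xx     # d/dt phidot gets u2 / I_xx  (Eq. 1c)

G_c = np.array([0.0, 0.0, 0.0, -1.0, 0.0, 0.0])   # gravity term in the zdot equation

def f(x, u, w=np.zeros(6)):
    """xdot = A x + B u + G_c g + w, with w standing in for K @ w of Eq. (3)."""
    return A @ x + B @ u + G_c * g + w

# Hover check: with phi = 0 and u1 = m*g the state derivative vanishes.
print(f(np.zeros(6), np.array([m * g, 0.0])))
```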
\(f_{i}:\mathbb{R}^{6}\times\mathbb{R}^{2}\mapsto\mathbb{R}^{6}\) is the continuous linear time-invariant state-space model describing the \(i^{\text{th}}\) agent, presented in Section III. With the model in (3), we rewrite the optimal control problem (4) in a more compact fashion as: \[\min_{\mathbf{u}_{i}} J_{i}+\ \lambda_{i}\cdot(d_{ij}^{r}-\big{|}\big{|}\mathbf{\Gamma}_{i}(\tau)-\mathbf{\Gamma}_{j}(\tau)\big{|}\big{|}_{2})\] subject to: \[\dot{\mathbf{x}}_{i}(\tau)=f_{i}(\mathbf{x}_{i}(\tau),\mathbf{u}_{i}(\tau),\mathbf{w}) \tag{9a}\] \[\mathbf{x}_{i}(0)=\mathbf{x}_{i}^{0}\] (9b) \[|u_{1_{i}}|\leq u_{1_{\text{max}}};\quad|u_{2_{i}}|\leq u_{2_{\text{max}}}, \tag{9c}\] where we have introduced the inter-agent distance constraint as a penalty term in \(J_{i}\). The objective function in (9) is the \(i^{\text{th}}\) augmented Lagrangian. \(\lambda_{i}\) is a non-negative real term that specifies whether the inter-agent distance constraint is taken into account in the \(i^{\text{th}}\) optimal control problem, and to what degree if so. Thus, we set the value for \(\lambda_{i}\) as follows: \[\lambda_{i}=\begin{cases}0,&\text{if }a_{i}=a_{\mathcal{L}}\\ \beta>0,&\text{otherwise.}\end{cases}\] With this problem formulation and choice of \(\lambda_{i}\), only the leader tracks the desired trajectory, while the follower agents simply keep their respective positions in the formation as determined by the triangular formation rule and corresponding inter-agent distances. ## V Formation Specification In addition to tight trajectory tracking, we require the MAS to maintain a triangular formation. We elect this formation because it is geometrically well suited to the leader-follower concept and also ensures that the followers are uniformly distributed spatially on a line segment behind the leader. This pattern has been shown to be locally asymptotically stable under the assumption that the formation is infinitesimally rigid [12]. To this end, we require that the formation be rigid, and translation and rotation invariant, i.e., that: \[\big{|}\big{|}\mathbf{\Gamma}_{i}-\mathbf{\Gamma}_{j}\big{|}\big{|}_{2}=d_{ij}^{r}\ \forall\ i,j\in\{1,2,\ldots,N\};\ i\neq j, \tag{10}\] and that there exists a \(\boldsymbol{\xi}_{i}\in\mathbb{R}^{2}\) for the \(i^{\text{th}}\) agent such that: \[\begin{bmatrix}\mathbf{R}_{\theta}&\boldsymbol{\tau}_{d}\\ 0&1\end{bmatrix}\cdot\begin{bmatrix}\boldsymbol{\xi}_{i}\\ 1\end{bmatrix}=\begin{bmatrix}\mathbf{\Gamma}_{i}\\ 1\end{bmatrix}, \tag{11}\] respectively, for some homogeneous transformation in \(SE(2)\) comprising a fixed rotation about the \(X\) axis by \(\theta\in\mathbb{R}\) - encoded by the rotation matrix \(\mathbf{R}_{\theta}\in SO(2)\) - and a fixed translation by \(\boldsymbol{\tau}_{d}\in\mathbb{R}^{2}\). Figure 3 depicts three agents in triangular leader-follower formation with the desired inter-agent distances labeled. Fig. 3: Three agents in the plane assuming a triangular leader-follower formation. \(d_{\mathcal{L}f}^{r}\) and \(d_{ff}^{r}\) are the desired leader-follower and follower-follower distances, respectively. Fig. 2: Planar Quadrotor Model and Coordinate Frames: \(\mathcal{O}\) is the origin of the inertial frame. A moving body frame, subscripted by \(\mathcal{B}\) and pictured in _blue_, is attached to the agent’s center of mass. \(u_{1}\) and \(u_{2}\) retain their former definitions. ## VI Simulation Studies For simulation, we solve (4) using the Ipopt Python optimization package [13], with the problem setup parameters outlined in Table II. 
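To illustrate how a penalized problem of the form (9) can be transcribed and solved numerically, the sketch below uses Euler discretization, single shooting, a squared variant of the distance penalty, and `scipy.optimize.minimize` as a stand-in for the Ipopt interface of [13]. The horizon, weights, bounds, and leader path are assumed values, not those of Table II.

```python
import numpy as np
from scipy.optimize import minimize

# Single-shooting sketch of the penalized problem (9) for one follower agent.
dt, N_steps = 0.1, 50
m, g, I_xx = 0.030, 9.81, 1.43e-5
A = np.zeros((6, 6)); A[0, 1] = A[2, 3] = A[4, 5] = 1.0; A[1, 4] = -g
B = np.zeros((6, 2)); B[3, 0] = 1.0 / m; B[5, 1] = 1.0 / I_xx
Gc = np.array([0.0, 0.0, 0.0, -1.0, 0.0, 0.0])

R = np.diag([1e-2, 1e-2])          # control-cost weight (assumed)
lam, d_ref = 5.0, 1.0              # penalty weight and desired inter-agent distance
leader = np.array([[0.0, np.sin(0.1 * k)] for k in range(N_steps + 1)])  # assumed leader path

def rollout(u_flat, x0):
    """Euler-discretized forward simulation of Eq. (3) without noise."""
    u = u_flat.reshape(N_steps, 2)
    x = np.zeros((N_steps + 1, 6)); x[0] = x0
    for k in range(N_steps):
        x[k + 1] = x[k] + dt * (A @ x[k] + B @ u[k] + Gc * g)
    return x, u

def cost(u_flat, x0):
    x, u = rollout(u_flat, x0)
    pos = x[:, [0, 2]]                              # (y, z) trajectory Gamma_i
    dist = np.linalg.norm(pos - leader, axis=1)
    J_u = sum(uk @ R @ uk for uk in u) * dt         # control effort
    J_form = lam * np.sum((d_ref - dist) ** 2) * dt # squared distance-penalty variant
    return J_u + J_form

x0 = np.array([-1.0, 0, 0, 0, 0, 0.0])
u0 = np.tile([m * g, 0.0], N_steps)                 # initialize at hover thrust
bounds = [(0.0, 2 * m * g), (-1e-3, 1e-3)] * N_steps  # |u1|, |u2| bounds (assumed)
sol = minimize(cost, u0, args=(x0,), method="L-BFGS-B", bounds=bounds)
print("optimal cost:", sol.fun)
```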
\(1_{n}\) is the \(n\times n\) identity matrix. In our problem setup, we also assume that the dynamics of each agent is propagated forward in time. ## VII Results & Analysis ### _Trajectory Tracking_ Figures 4 and 5 show the optimal state variables and control inputs, respectively, for the leader agent. As expected, the optimal \(\phi\) and \(u_{2}\) values over the time horizon are approximately zero implying near hover state. Concerning tracking performance, we can see from Figure 7a that the optimal trajectory closely tracks the desired sinusoidal reference trajectory. As expected, in the case with no measurement noise, a much better tracking performance is recorded - near zero trajectory error and hence, tight trajectory tracking (Figure 6). For a numerical comparison, Table III presents root-mean-square error (RMSE) values for the leader agent's trajectory error, along with those for the error between the desired and actual inter-agent distances for both follower agents (abbreviated as \(f_{1}\) and \(f_{2}\)). ## Acknowledgements The authors gratefully acknowledge Microsoft Corporation and the Maryland Robotics Center for their kind financial support.
2305.06667
Relativistic approach for the determination of nuclear and neutron star properties in consideration of PREX-II results
The bulk properties of nuclear matter and neutron stars with the newly generated relativistic interaction DBHP are investigated which provides an opportunity to modify the coupling parameters keeping in view the finite nuclei, nuclear matter, PREX-II data for neutron skin thickness in $^{208}$Pb and astrophysical constraints. The relativistic interaction has been generated by including all possible self and mixed interactions between $\sigma$, $\omega$, and $\rho$-meson up to the quartic order satisfying the naturalness behavior of parameters. A covariance analysis is performed to assess the statistical uncertainties on the model parameters and observables of interest along with correlations amongst them. We obtained a value of neutron skin thickness for $^{208}$Pb nucleus $\Delta r_{np}$ = 0.24 $\pm$ 0.02 fm. The maximum gravitational mass of neutron star and radius corresponding to the canonical mass ($R_{1.4}$) come out to be 2.03 $\pm$ 0.04 M$\odot$ and 13.39 $\pm$ 0.41 km respectively. The dimensionless tidal deformability, ${\Lambda}$ for a neutron star is also analyzed.
Virender Thakur, Raj Kumar, Pankaj Kumar, Mukul Kumar, C. Mondal, Kaixuan Huang, Jinniu Hu, B. K. Agrawal, Shashi K. Dhiman
2023-05-11T09:05:57Z
http://arxiv.org/abs/2305.06667v1
A relativistic approach for determination of nuclear and neutron star properties in consideration of PREX-II results ###### Abstract The bulk properties of nuclear matter and neutron stars with the newly generated relativistic interaction DBHP are investigated which provides an opportunity to modify the coupling parameters keeping in view the finite nuclei, nuclear matter, PREX-II data for neutron skin thickness in \({}^{208}\)Pb and astrophysical constraints. The relativistic interaction has been generated by including all possible self and mixed interactions between \(\sigma\), \(\omega\), and \(\rho\)-meson up to the quartic order satisfying the naturalness behavior of parameters. A covariance analysis is performed to assess the statistical uncertainties on the model parameters and observables of interest along with correlations amongst them. We obtained a value of neutron skin thickness for \({}^{208}\)Pb nucleus \(\Delta r_{np}\) = 0.24 \(\pm\) 0.02 fm. The maximum gravitational mass of neutron star and radius corresponding to the canonical mass (\(R_{1.4}\)) come out to be 2.03 \(\pm\) 0.04 M\(\odot\) and 13.39 \(\pm\) 0.41 km respectively. The dimensionless tidal deformability, \(\Lambda\) for a neutron star is also analyzed. ## I Introduction Neutron stars (NSs) are highly dense and asymmetric nuclear systems having a central density about 5-6 times the nuclear saturation density [1]. The studies of the NSs proclaim that their internal structure are quite complex as new degrees of freedom like hyperons and quarks may appear in the core. The NS properties like mass, radius, and tidal deformability can be estimated using equations of state (EoSs) obtained within various theoretical models [2; 3; 4]. One of such models is based on the relativistic interaction which describes the interaction between nucleons through \(\sigma\), \(\omega\) and \(\rho\) mesons. There are several models of relativistic mean field (RMF) effective lagrangian density consisting of nonlinear \(\sigma\), \(\omega\), and \(\rho\) terms and cross terms that have been analyzed for nucleonic and hyperonic matter and confronted with the constraints of nuclear matter properties and astrophysical observations of NS masses [5; 6; 7; 8; 9]. The nuclear theory studies [10; 11; 12] are mainly focusing on understanding the dense matter in NS. The constraints on EOS at high density are imposed with currently available lower bound on neutron star's maximum mass and radius [13; 14; 15]. The precise measurement of masses of millisecond pulsars such as PSR J1614-2230 [16], PSR J0348+0432 [17] show that the maximum mass of the NS should be around 2 M\(\odot\). The recent observations with LIGO and Virgo of GW170817 event [18; 19] of Binary Neutron Stars merger and the discovery of NS with masses around 2\(M_{\odot}\)[16; 17; 20; 21; 22; 23] have intensified the interest in these intriguing objects. The analysis of GW170817 has demonstrated the potential of gravitational wave (GW) observations to yield new information relating to the limits on NS tidal deformability. The Lead Radius Experiment (PREX-II) has recently provided a model-independent extraction of neutron skin thickness of \({}^{208}\)Pb as \(\Delta r_{np}\) = 0.283 \(\pm\) 0.071 fm [24]. 
The \(\Delta r_{np}\) has been identified as an ideal probe for the density dependence of symmetry energy - a key but poorly known quantity that describes the isospin dependence of the EOS of asymmetric nuclear matter and plays a critical role in various issues in nuclear physics and astrophysics. The neutron skin thickness of the Lead nucleus exhibits a strong positive linear correlation with the slope of symmetry energy parameter (L) at saturation density. The parameter \(L\) that determines the density dependence of symmetry energy strongly affects the Mass-Radius relation and tidal deformability (\(\Lambda\)) of a neutron star and provides a unique bridge between atomic nuclei and neutron stars [25]. The large value of \(\Delta r_{np}\) = 0.283 \(\pm\) 0.071 fm suggests a large value of \(L\) which yields a very stiff EOS. This generally gives rise to a large value of neutron star radius and the tidal deformability [3]. The upper limit on \(\Lambda_{1.4}\)\(\leq\) 580 for GW170817 requires a softer EOS and hence a softer symmetry energy coefficient [18]. The heaviest observed neutron star \(M_{max}\) = 2.35 \(\pm\) 0.17 \(M_{\odot}\) for the black-widow pulsar PSR J0952-0607 [26] may place stringent constraints on the symmetry energy at high densities, since the EOS of symmetric nuclear matter (SNM) from heavy ion collision flow data [27], which is relatively soft, limits the NS maximum mass. The motivation of the present work is to generate a new parametrization of the RMF model which can accommodate the properties of NSs within the astrophysical observations without compromising the finite nuclei properties. The RMF model used in the present work includes all possible self and mixed-coupling terms for the \(\sigma\), \(\omega\), and \(\rho\) mesons up to the quartic order so that the parameters should obey the naturalness behavior as imposed by the effective field theory [28]. In this work, the new parameter set is searched in view of the PREX-II data, and the model EOS satisfies the observed astrophysical constraints imposed by NSs. The paper is organized as follows: in section II, the theoretical framework which is used to construct the EOS for neutron stars is discussed. In section III, the procedure for optimization and covariance analysis of the parameters is discussed. In section IV, we present our results. Finally, we summarize the results of the present work in section V. ## II Theoretical model The effective Lagrangian density for the RMF model generally describes the interaction of the baryons via the exchange of \(\sigma\), \(\omega\), and \(\rho\) mesons up to the quartic order. 
The Lagrangian density[5; 7; 29] is given by \[\mathcal{L} = \sum_{B}\overline{\Psi}_{B}[i\gamma^{\mu}\partial_{\mu}-(M_{B}-g_ {\sigma B}\sigma)-(g_{\omega B}\gamma^{\mu}\omega_{\mu} \tag{1}\] \[+ \frac{1}{2}g_{\rho B}\gamma^{\mu}\tau_{B}.\rho_{\mu})]\Psi_{B}+ \frac{1}{2}(\partial_{\mu}\sigma\partial^{\mu}\sigma-m_{\sigma}^{2}\sigma^{2})\] \[- \frac{\overline{\kappa}}{3!}g_{\sigma N}^{3}\sigma^{3}-\frac{ \overline{\lambda}}{4!}g_{\sigma N}^{4}\sigma^{4}-\frac{1}{4}\omega_{\mu\nu} \omega^{\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega_{\mu\nu}\omega^{\mu}\] \[+ \frac{1}{4!}\zeta g_{\omega N}^{4}(\omega_{\mu}\omega^{\mu})^{2} -\frac{1}{4}\rho_{\mu\nu}\rho^{\mu\nu}+\frac{1}{2}m_{\rho}^{2}\rho_{\mu} \rho^{\mu}\] \[+ \frac{1}{4!}\xi g_{\rho N}^{4}(\rho_{\mu}\rho^{\mu})^{2}\] \[+ g_{\sigma N}g_{\omega N}^{2}\sigma\omega_{\mu}\omega^{\mu}\left( a_{1}+\frac{1}{2}a_{2}\sigma\right)\] \[+ g_{\sigma N}g_{\rho N}^{2}\sigma\rho_{\mu}\rho^{\mu}\left(b_{1}+ \frac{1}{2}b_{2}\sigma\right)\] \[+ \frac{1}{2}c_{1}g_{\omega N}^{2}g_{\rho N}^{2}\omega_{\mu}\omega^ {\mu}\rho_{\mu}\rho^{\mu}\] The equation of motion for baryons, mesons, and photons can be derived from the Lagrangian density defined in Eq.(1). The equation of motion for baryons can be given as, \[\left[\gamma^{\mu}\left(i\partial_{\mu}-g_{\omega B}\omega_{\mu} -\frac{1}{2}g_{\rho B}\tau_{B}.\rho_{\mu}-e\frac{1+\tau_{B0}}{2}A_{\mu}\right) -\right.\] \[\left.(M_{B}+g_{\sigma B}\sigma)\right]\Psi_{B}=\epsilon_{B}\Psi _{B}. \tag{2}\] The Euler-Lagrange equations for the ground-state expectation values of the mesons fields are \[\left(-\Delta+m_{\sigma}^{2}\right)\sigma = \sum_{B}g_{\sigma B}\rho_{sB}-\frac{\overline{\kappa}}{2}g_{\sigma N }^{3}\sigma^{2}-\frac{\overline{\lambda}}{6}g_{\sigma N}^{4}\sigma^{3} \tag{3}\] \[+a_{1}g_{\sigma N}g_{\omega N}^{2}\omega^{2}+a_{2}g_{\sigma N}^{2 }g_{\omega N}^{2}\sigma\omega^{2}\] \[+b_{1}g_{\sigma N}g_{\rho B}^{2}\rho^{2}+b_{2}g_{\sigma N}^{2}g_{ \rho N}^{2}\sigma\rho^{2},\] \[\left(-\Delta+m_{\omega}^{2}\right)\omega = \sum_{B}g_{\omega B}\rho_{B}-\frac{\zeta}{6}g_{\omega N}^{4} \omega^{3} \tag{4}\] \[-2a_{1}g_{\sigma N}g_{\omega N}^{2}\sigma\omega-a_{2}g_{\sigma N }^{2}g_{\omega N}^{2}\sigma^{2}\omega\] \[-c_{1}g_{\omega N}^{2}g_{\rho N}^{2}\omega\rho^{2},\] \[\left(-\Delta+m_{\rho}^{2}\right)\rho = \sum_{B}g_{\rho B}\tau_{3B}\rho_{B}-\frac{\xi}{6}g_{\rho N}^{4} \rho^{3} \tag{5}\] \[-2b_{1}g_{\sigma N}g_{\rho N}^{2}\sigma\rho-b_{2}g_{\sigma N}^{2 }g_{\rho N}^{2}\sigma^{2}\rho\] \[-c_{1}g_{\omega N}^{2}g_{\rho N}^{2}\omega^{2}\rho,\] \[-\Delta A_{0}=e\rho_{p}. \tag{6}\] where the baryon vector density \(\rho_{B}\), scalar density \(\rho_{sB}\) and charge density \(\rho_{p}\) are, respectively, \[\rho_{B}=\left\langle\overline{\Psi}_{B}\gamma^{0}\Psi_{B}\right\rangle=\frac{ \gamma k_{B}^{3}}{6\pi^{2}}, \tag{7}\] \[\rho_{sB}=\left\langle\overline{\Psi}_{B}\Psi_{B}\right\rangle=\frac{\gamma}{( 2\pi)^{3}}\int_{0}^{k_{B}}d^{3}k\frac{M_{B}^{*}}{\sqrt{k^{2}+M_{B}^{*2}}}, \tag{8}\] \[\rho_{p}=\left\langle\overline{\Psi}_{B}\gamma^{0}\frac{1+\tau_{3B}}{2}\Psi_{B} \right\rangle, \tag{9}\] with \(\gamma\) the spin-isospin degeneracy. The \(M_{B}^{*}=M_{B}-g_{\sigma B}\sigma\) is the effective mass of the baryon species B, \(k_{B}\) is its Fermi momentum and \(\tau_{3B}\) denotes the isospin projections of baryon B. 
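The self-consistency contained in Eqs. (3) and (8) can be illustrated with a heavily simplified sketch: iterating the effective-mass relation for symmetric matter with only the linear \(\sigma\) coupling retained. The coupling strength below is a generic Walecka-like value, not a DBHP parameter, so the resulting \(M^{*}/M\) is only indicative of the mechanism, not of the paper's results.

```python
import numpy as np
from scipy.integrate import quad

# Toy mean-field self-consistency: M* = M - (g_sigma^2/m_sigma^2) * rho_s(M*),
# with rho_s the scalar density of Eq. (8) for symmetric matter. C_s2 is assumed.
hbarc = 197.327                       # MeV fm
M = 939.0 / hbarc                     # nucleon mass in fm^-1
C_s2 = 15.8                           # g_sigma^2 / m_sigma^2 in fm^2 (assumed)
gamma = 4                             # spin-isospin degeneracy of SNM
rho_0 = 0.148                         # baryon density in fm^-3
k_f = (6.0 * np.pi**2 * rho_0 / gamma) ** (1.0 / 3.0)

def rho_s(m_eff):
    """Scalar density (fm^-3) for a given effective mass (fm^-1), cf. Eq. (8)."""
    val, _ = quad(lambda k: k**2 * m_eff / np.sqrt(k**2 + m_eff**2), 0.0, k_f)
    return gamma / (2.0 * np.pi**2) * val

m_eff = M
for _ in range(200):                  # simple fixed-point iteration
    m_eff = M - C_s2 * rho_s(m_eff)

print(f"illustrative M*/M at rho_0 = {rho_0} fm^-3: {m_eff / M:.2f}")
```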
The energy density of the uniform matter within the framework of the RMF model is given by; \[\mathcal{E} = \sum_{j=B,\ell}\frac{1}{\pi^{2}}\int_{0}^{k_{j}}k^{2}\sqrt{k^{2}+ M_{j}^{*2}}dk \tag{10}\] \[+ \sum_{B}g_{\omega B}\omega\rho_{B}+\sum_{B}g_{\rho B}\tau_{3B}\rho_ {B}\rho+\frac{1}{2}m_{\sigma}^{2}\sigma^{2}\] \[+ \frac{\overline{\kappa}}{6}g_{\sigma N}^{3}\sigma^{3}+\frac{ \overline{\lambda}}{24}g_{\sigma N}^{4}\sigma^{4}-\frac{\zeta}{24}g_{\omega N}^{4} \omega^{4}\] \[- \frac{\xi}{24}g_{\rho N}^{4}\rho^{4}-\frac{1}{2}m_{\omega}^{2} \omega^{2}-\frac{1}{2}m_{\rho}^{2}\rho^{2}\] \[- a_{1}g_{\sigma N}g_{\omega N}^{2}\sigma\omega^{2}-\frac{1}{2}a_{2 }g_{\sigma N}^{2}g_{\omega N}^{2}\sigma^{2}\omega^{2}\] \[- b_{1}g_{\sigma N}g_{\rho N}^{2}\sigma\rho^{2}-\frac{1}{2}b_{2}g_{ \sigma N}^{2}g_{\rho N}^{2}\sigma^{2}\rho^{2}\] \[- \frac{1}{2}c_{1}g_{\omega N}^{2}g_{\rho N}^{2}\omega^{2}\rho^{2}.\] The pressure of the uniform matter is given by \[\begin{split} P&=\sum_{j=B,\ell}\frac{1}{3\pi^{2}}\int_ {0}^{k_{j}}\frac{k^{4}dk}{\sqrt{k^{2}+{M_{j}^{*}}^{2}}}-\frac{1}{2}m_{\sigma}^{2 }\sigma^{2}\\ &-\frac{\overline{\kappa}}{6}g_{\sigma N}^{3}\sigma^{3}-\frac{ \overline{\lambda}}{24}g_{\sigma N}^{4}\sigma^{4}+\frac{\zeta}{24}g_{\omega N} ^{4}\omega^{4}\\ &+\frac{\xi}{24}g_{\rho N}^{4}\rho^{4}+\frac{1}{2}m_{\omega}^{2} \omega^{2}+\frac{1}{2}m_{\rho}^{2}\rho^{2}\\ &+a_{1}g_{\sigma N}g_{\omega N}^{2}\sigma\omega^{2}+\frac{1}{2}a_ {2}g_{\sigma N}^{2}g_{\omega N}^{2}\sigma^{2}\omega^{2}\\ &+b_{1}g_{\sigma N}g_{\rho N}^{2}\sigma\rho^{2}+\frac{1}{2}b_{2}g _{\sigma N}^{2}g_{\rho N}^{2}\sigma^{2}\rho^{2}\\ &+\frac{1}{2}c_{1}g_{\omega N}^{2}g_{\rho N}^{2}\omega^{2}\rho^{ 2}.\end{split} \tag{11}\] Here, the sum is taken over nucleons and leptons. ## III Optimization and Covariance Analysis The optimization of the parameters (\(\mathbf{p}\)) appearing in the Lagrangian (Eq. 1) has been performed by using the simulated annealing method (SAM) [30; 31] by following \(\chi^{2}\) minimization procedure which is given as, \[\chi^{2}(\mathbf{p})=\frac{1}{N_{d}-N_{p}}\sum_{i=1}^{N_{d}}\left(\frac{M_{i}^ {exp}-M_{i}^{th}}{\sigma_{i}}\right)^{2}, \tag{12}\] where \(N_{d}\) is the number of experimental data points and \(N_{p}\) is the number of fitted parameters. The \(\sigma_{i}\) denotes adopted errors [32] and \(M_{i}^{exp}\) and \(M_{i}^{th}\) are the experimental and the corresponding theoretical values, respectively, for a given observable. The minimum value of \(\chi_{0}^{2}\) corresponds to the optimal values \(\mathbf{p_{0}}\) of the parameters. Following the optimization of the energy density functional, it is important to explore the richness of the covariance analysis. It enables one to calculate the statistical uncertainties on model parameters or any calculated physical observables. The covariance analysis also provides additional information about the sensitivity of the parameters to the physical observables, and interdependence among the parameters [32; 33; 34; 35]. Having obtained the parameter set, the correlation coefficient between two quantities Y and Z can be calculated by covariance analysis [32; 34; 35; 36; 37] as \[c_{YZ}=\frac{\overline{\Delta Y\Delta Z}}{\overline{\Delta Y^{2}}\overline{ \Delta Z^{2}}}, \tag{13}\] where covariance between Y and Z is expressed as \[\overline{\Delta Y\Delta Z}=\sum_{\alpha\beta}\left(\frac{\partial Y}{\partial p _{\alpha}}\right)_{\mathbf{p}_{0}}C_{\alpha\beta}^{-1}\left(\frac{\partial Z} {\partial p_{\beta}}\right)_{\mathbf{p}_{0}}. 
\tag{14}\] Here, \(C_{\alpha\beta}^{-1}\) is an element of the inverted curvature matrix given by \[C_{\alpha\beta}=\frac{1}{2}\left(\frac{\partial^{2}\chi^{2}(\mathbf{p})}{\partial p_{\alpha}\partial p_{\beta}}\right)_{\mathbf{p}_{0}}. \tag{15}\] The variance, \(\overline{\Delta Y^{2}}\), in Y can be computed using Eq. (14) by substituting Z = Y. The prediction of maximum mass around \(2M_{\odot}\) for the nonrotating neutron star and constraints on EOSs of Symmetric Nuclear Matter (SNM) and Pure Neutron Matter (PNM) as extracted from the analysis of particle flow in heavy ion collisions [27] require relatively softer EOSs, as demanded by the GW170817 event. ## IV Results and Discussion The parameters of the model are determined by a fit to the available experimental data of total binding energies and charge rms radii [38; 39; 40] for some closed/open shell nuclei \({}^{16,24}\)O, \({}^{40,48,54}\)Ca, \({}^{56,68,78}\)Ni, \({}^{88}\)Sr, \({}^{90}\)Zr, \({}^{100,116,132,138}\)Sn, \({}^{144}\)Sm, and \({}^{208}\)Pb. We have also included the maximum mass of the neutron star [41] in our fit data. Recently, the parity-violating electron scattering experiment (PREX-II) put a limit on the neutron skin thickness of \({}^{208}\)Pb as \(\Delta r_{np}\) = 0.283 \(\pm\) 0.071 fm [24]. We included the recently measured \(\Delta r_{np}\) in our fit data to constrain the linear density dependence of the symmetry energy coefficient. For the open shell nuclei, the pairing has been included by using the BCS formalism with constant pairing gaps that have been taken from the particle separation energies of neighboring nuclei [42; 43; 44]. In Table 1, we display the values of the relativistic parameterization DBHP generated for the Lagrangian given by Eq. (1) along with theoretical uncertainties. The values of parameter sets for NL3 [45], FSUGarnet [33], IOPB-1 [46] and Big Apple [47] are also shown. The effective field theory imposes the condition of naturalness [28] on the parameters or expansion coefficients appearing in the effective Lagrangian density Eq. (1). According to naturalness, the coefficients of various terms in the Lagrangian density functional should be of the same size when expressed in an appropriate dimensionless ratio. The dimensionless ratios are obtained by dividing Eq. (1) by \(M^{4}\) and expressing each term in powers of \(\frac{g_{\sigma}\sigma}{M}\), \(\frac{g_{\omega}\omega}{M}\) and \(\frac{2g_{\rho}\rho}{M}\). This means that the dimensionless ratios \(\frac{1}{2C_{\sigma}^{2}M^{2}}\), \(\frac{1}{2C_{\omega}^{2}M^{2}}\), \(\frac{1}{8C_{\rho}^{2}M^{2}}\), \(\frac{\overline{\kappa}}{6M}\), \(\frac{\overline{\lambda}}{24}\), \(\frac{\zeta}{24}\), \(\frac{a_{1}}{M}\), \(\frac{a_{2}}{2}\), \(\frac{b_{1}}{4M}\), \(\frac{b_{2}}{8}\) and \(\frac{c_{1}}{8}\) should be roughly of the same size, where \(C_{i}^{2}=\frac{g_{i}^{2}}{m_{i}^{2}}\) and \(i\) denotes the \(\sigma\), \(\omega\), and \(\rho\) mesons. In Table 2, we present the overall naturalness behavior of the DBHP parameterization, i.e., the values of these parameters when expressed in the dimensionless ratios as shown just above. We also display the corresponding values for the NL3, FSUGarnet, IOPB-1, and Big Apple parameter sets. It is obvious from the table that the DBHP parameterization closely favors the naturalness behavior. This may be attributed to the fact that this parameterization includes all possible self and crossed interaction terms of \(\sigma\), \(\omega\), and \(\rho\)-mesons up to the quartic order. 
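Returning to the covariance analysis of Eqs. (13)-(15), it amounts to a few linear-algebra operations once the curvature matrix and the parameter sensitivities of the observables are available. The sketch below uses synthetic placeholders for both (a random positive-definite curvature matrix and random sensitivity vectors) and normalizes by the square roots of the variances when forming the correlation coefficient.

```python
import numpy as np

# Covariance-analysis sketch for Eqs. (13)-(15) with synthetic inputs.
rng = np.random.default_rng(2)
n_par = 4

# Curvature matrix C_ab = (1/2) d^2 chi^2 / dp_a dp_b at the optimum (Eq. 15);
# here a random symmetric positive-definite stand-in.
Mrand = rng.normal(size=(n_par, n_par))
C = Mrand @ Mrand.T + n_par * np.eye(n_par)
C_inv = np.linalg.inv(C)

# Sensitivities dY/dp_a and dZ/dp_a of two observables at p_0 (placeholders).
dY = rng.normal(size=n_par)
dZ = rng.normal(size=n_par)

cov_YZ = dY @ C_inv @ dZ                       # Eq. (14)
var_Y, var_Z = dY @ C_inv @ dY, dZ @ C_inv @ dZ
c_YZ = cov_YZ / np.sqrt(var_Y * var_Z)         # correlation coefficient, |c_YZ| <= 1
sigma_Y = np.sqrt(var_Y)                       # statistical uncertainty on Y

print(f"corr(Y, Z) = {c_YZ:+.3f}, sigma_Y = {sigma_Y:.3f}")
```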
The small value of parameter \(c_{1}\) for DBHP model which gives rise to better naturalness behaviour of the parameters might be attributed to the fact that the coupling parameter \(c_{1}\) has strong correlation with \(b_{1}\) and also has good correlation with \(a_{2}\) and \(b_{2}\) (see Fig. 1). It is evident from the table that the value of coupling parameter \(c_{1}\) (crossed interaction term of \(\omega^{2}\) and \(\rho^{2}\)) appearing in Eq. (1) is large for IOPB-I, FSU-Garnet and Big Apple which shows deviation from the naturalness behavior in the absence of all other possible mixed interaction terms of \(\sigma\), \(\omega\), and \(\rho\)-meson. Keeping in view the naturalness behavior of the parameters as imposed by the effective field theory [28] and as observed in case of DBHP model, it is important to incorporate the contributions of the higher order mixed interactions of mesons in the Lagrangian. The naturalness behavior of parameters can be further improved by considering the next higher order terms containing the gradient of fields [28]. As far as NL3 parameterization is concerned, the naturalness behavior is favored very well but it does not include any cross interaction terms of \(\sigma\), \(\omega\), and \(\rho\) mesons which are very important for constraining the symmetry energy and its density dependence. In Fig.1, the correlation coefficients between the DBHP model parameter appearing in Lagrangian (Eq. 1) are shown in graphical form. A strong correlation is found between the pairs of model parameters \(g_{\sigma}\) and \(g_{\omega}\) (0.95), \(c_{1}-b_{1}\) (0.80), and \(a_{2}-\overline{\kappa}\) (0.72). The strong correlation is also found for \(g_{\rho}\) with \(b_{1}\) and \(b_{2}\). Mild correlations exist between the pairs of model parameters \(g_{\sigma}-\overline{\kappa}\), \(g_{\sigma}-a_{1}\) and \(g_{\sigma}-\)\(a_{2}\). A strong correlation between the model parameters implies a strong interdependence i.e. if one parameter is fixed at a certain value then the other must attain the precise value as suggested by their correlation. ### Properties of finite nuclei and nuclear matter The newly generated DBHP parameterization gives a good fit to the properties of finite nuclei. In Fig. 2, we display the value of relative error in the total binding energies \(\delta E=\frac{B^{exp}-B^{th}}{B^{exp}}\) calculated for DBHP parameterization. We also display similar results for other parameter sets considered. It is evident that binding energies obtained using DBHP parameterization are in good agreement with the available experimental data [48]. The root mean square (rms) errors in total binding energy for all the nuclei considered in our fit data is found to be 2.1 MeV. In Fig.3, we present our results for relative error \(\delta R_{ch}\) for charge rms radii and also compare them with other parameter sets. The root mean square (rms) errors in charge radii for all nuclei taken in our fit is 0.02 fm. The neutron skin thickness of \({}^{208}\)Pb for DBHP model comes out to be 0.24 \(\pm\) 0.02 fm. In Table 3, we present the results for the SNM properties such as binding energy per nucleon (E/A), incompressibility (K), the effective nucleon mass (\(M^{s}\)) at the saturation density (\(\rho_{0}\)), symmetry energy coefficient (J), slope of symmetry energy (L) and curvature parameter \(K_{\rm sym}\) along with the theoretical uncertainties. 
It is observed that the isoscalar properties (E/A, K, \(M^{s}\), \(\rho_{0}\)) are well constrained for the DBHP parametrization (at the \(\leq\) 3.3 % level). But in the isovector sector, the error on the density dependence of the symmetry energy is relatively larger for \(L\) (\(\approx\) 23 %). The value of \(K_{\rm sym}\) is determined only poorly [49; 50; 51]. The experimental data on finite nuclei are not enough to constrain \(K_{\rm sym}\). Only the accurate knowledge of the symmetry energy at higher densities (\(\rho>2\rho_{0}\)) may constrain \(K_{\rm sym}\) within tighter bounds. This may be attributed to the large experimental error on the neutron skin thickness for \({}^{208}\)Pb (0.283 \(\pm\) 0.071 fm) which Figure 1: (Color online) Correlation coefficients among the model parameters for DBHP parametrization of the Lagrangian given by Eq. (1). \begin{table} \begin{tabular}{c c c c c c} \hline \hline Parameters & DBHP & NL3 & FSUGarnet & IOPB-1 & Big Apple \\ \hline \(\frac{1}{2C_{\sigma}^{2}M^{2}}\) & 1.3311 & 1.4028 & 1.2690 & 1.3086 & 1.4698 \\ \(\frac{1}{2C_{\omega}^{2}M^{2}}\) & 1.9604 & 2.0970 & 1.8508 & 1.9383 & 2.2819 \\ \(\frac{1}{8C_{\rho}^{2}M^{2}}\) & 0.6631 & 1.0306 & 0.4278 & 0.6670 & 0.4121 \\ \(\frac{\overline{\kappa}}{6M}\) & 0.6380 & 0.6855 & 0.5787 & 0.6499 & 0.9168 \\ \(\frac{\overline{\lambda}}{24}\) & 0.1018 & -0.6630 & -0.1472 & -0.3146 & -0.9024 \\ \(\frac{\zeta}{24}\) & 0.8982 & - & 0.9785 & 0.7267 & 0.0291 \\ \(\frac{a_{1}}{M}\) & 0.1172 & - & - & - & - \\ \(\frac{a_{2}}{2}\) & 0.2641 & - & - & - & - \\ \(\frac{b_{1}}{4M}\) & 0.9953 & - & - & - & - \\ \(\frac{b_{2}}{8}\) & 0.1177 & - & - & - & - \\ \(\frac{c_{1}}{8}\) & 0.9989 & - & 10.7500 & 6.0000 & 11.7500 \\ \hline \hline \end{tabular} \end{table} Table 2: The values of parameters are expressed as dimensionless ratios corresponding to naturalness behavior. All values have been multiplied by \(10^{3}\). led us to choose the large adopted error during the optimisation procedure. The values of neutron-skin thickness (\(\Delta r_{np}\)) for \({}^{208}\)Pb and \({}^{48}\)Ca nuclei are also presented. The DBHP parametrization significantly overestimates the value of the neutron-skin thickness for \({}^{48}\)Ca in comparison to the value \(\Delta r_{np}\)(\({}^{48}\)Ca) = \(0.121\pm 0.026\) fm reported recently by CREX [52]. Other parametrizations considered in Table 3 also do not simultaneously satisfy the experimental data for the neutron skin of the \({}^{208}\)Pb and \({}^{48}\)Ca nuclei. Similar trends have been observed in recent investigations based on the relativistic and non-relativistic mean field models which call for further experimental studies [53; 54; 55]. The results are also compared with the NL3 [45], FSUGarnet [33], IOPB-1 [46] and Big Apple [47] parameter sets. These SNM properties are very important for constructing the EOS for nuclear matter. E/A is -16.1 MeV for the DBHP parameterization. The values of J and L obtained with the DBHP parameterization are consistent with the values J = 38.1 \(\pm\) 4.7 MeV and L = 106 \(\pm\) 37 MeV as inferred by Reed _et al._ [3]. 
The value of K is 225 MeV which is in agreement with the value K = 240 \(\pm\) 20 MeV determined from isoscalar giant monopole reso Figure 3: (Color online) Relative error in the charge root mean square (\(\delta R_{ch}\)) plotted against the mass number (A) for the newly generated parameter set DBHP For comparison, the values obtained with parameters NL3, IOPB-1, FSUGarnet and Big Apple are also displayed. Figure 2: (Color online) Relative error in the total binding energy (\(\delta E\)) plotted against the mass number (A) for the newly generated parameter set DBHP. For comparison, the values of \(\delta E\) obtained with parameters NL3, IOPB-1, FSUGarnet and Big Apple are also displayed. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(NMPs\) & DBHP & NL3 & FSUGarnet & IOPB-1 & Big Apple \\ \hline \(\rho_{0}\) & 0.148\(\pm\)0.003 & 0.148 & 0.153 & 0.149 & 0.155 \\ **E/A** & -16.11\(\pm\)0.05 & -16.25 & -16.23 & -16.09 & -16.34 \\ **K** & 229.5\(\pm\)5.6 & 271.6 & 229.6 & 222.6 & 227.1 \\ **M\({}^{*}\)/M** & 0.615\(\pm\)0.007 & 0.595 & 0.578 & 0.595 & 0.608 \\ **J** & 34.7 \(\pm\)1.5 & 37.4 & 30.9 & 33.3 & 31.4 \\ **L** & 83.9\(\pm\)19.2 & 118.6 & 50.9 & 63.8 & 40.3 \\ **K\({}_{\rm sym}\)** & -33.2\(\pm\)64.1 & 100.7 & 57.9 & -38.4 & 88.8 \\ **\(\Delta\)r\({}_{\rm np}\)** (\({}^{208}\)Pb)** & 0.24\(\pm\)0.02 & 0.28 & 0.16 & 0.22 & 0.15 \\ **\(\Delta\)r\({}_{\rm np}\)** (\({}^{48}\)Ca)** & 0.21\(\pm\)0.02 & 0.23 & 0.17 & 0.17 & 0.17 \\ \hline \hline \end{tabular} \end{table} Table 3: The bulk nuclear matter properties (NMPs) at saturation density along with calculated theoretical errors for DBHP parameterization compared with that other parameter sets. \(\rho_{0}\), E/A, K, \(M^{*}\)/\(M\), J, L and \(K_{sym}\) denote the saturation density, Binding Energy per nucleon, Nuclear Matter incompressibility coefficient, the ratio of effective nucleon mass to the nucleon mass, Symmetry Energy, the slope of symmetry energy, and curvature of symmetry energy respectively. the value of \(\rho_{0}\) is in fm\({}^{-3}\) and rest all the quantities are in MeV. The values of neutron skin thickness \(\Delta r_{np}\) for \({}^{208}\)Pb and \({}^{48}\)Ca nuclei in units of fm are also listed. nance (ISGMR) for \({}^{90}\)Zr and \({}^{208}\)Pb nuclei [56; 57]. In Fig. 4, we plot the EOS i.e. pressure as a function of the baryon density for SNM (upper) and PNM (lower panel) using the DBHP parametrization that agrees reasonably well and lies in the allowed region with the EOS extracted from the analysis of the particle flow in heavy ion collision [27]. It is evident from the figure that the EOSs for SNM and PNM calculated with the NL3 parameterization are very stiff and ruled out by the heavy ion collision data. The EOS calculated by using the DBHP parameterization is relatively softer which is in requirement to constrain the recent astrophysical observations [58; 59; 60; 41]. In Fig. 5, we plot the symmetry energy as a function of baryon density for DBHP model. The results for other parametrizations are also shown for comparison. It can be observed that the symmetry energy increases with baryon density and it is found to be softer than NL3 but stiffer than IOPB-1, FSUGarnet and Big Apple models. ### Neutron star properties In Fig. 6 we display the variation of pressure with the energy density for the nucleonic matter in \(\beta\) equilibrium for the DBHP parameterization. The results are also compared with those obtained for parameter sets. 
The shaded region represents the observational constraints at \(r_{ph}\)=R with the \(2\sigma\) uncertainty [58]. Here \(r_{ph}\) and R are the photospheric and neutron star radius respectively. It is clear that the EOS computed with our DBHP parameter set is consistent with the EOS obtained by Steiner et al. [58]. The EOSs obtained by the DBHP and IOPB-1 parameterizations are softer and lie in the allowed shaded region which represents the observational constraints taken from Ref. [58]. The EOS obtained with NL3 parameter set is much stiffer than DBHP and IOPB-1 parameter sets and ruled out by the observational constraints [58]. The stiffness of EOS for NL3 may be attributed to its very high value of compressibility (K), symmetry energy coefficient (J), and slope of symmetry energy (L) as shown in Table (3). The mass and radius of a neutron star are obtained by solving the Tolman-Oppenheimer-Volkoff (TOV) equations [61; 62] given as: \[\frac{dP(r)}{dr}=-\frac{\{\epsilon(r)+P(r)\}\{4\pi r^{3}P(r)+m(r)\}}{r^{2}(1-2 m(r)/r)} \tag{16}\] \[\frac{dm}{dr}=4\pi r^{2}\epsilon(r), \tag{17}\] \[m(r)=4\pi\int_{0}^{r}drr^{2}\epsilon(r) \tag{18}\] where \(P(r)\) is the pressure at radial distance \(r\) and \(m(r)\) is the mass of neutron stars enclosed in the sphere of radius \(r\). The EOS for the crust region is taken from Ref. Figure 4: (color online) Variation of Pressure as a function of baryon density for SNM (upper panel) and PNM (lower panel) computed with DBHP parameterization along with NL3, IOPB-1, FSUGarnet and Big Apple models. The shaded region represents the experimental data taken from the reference [27]. Figure 5: (Color online) The density dependent symmetry energy plotted as a function of baryon density for DBHP model. The results are also displayed for NL3, IOPB-1, FSUGarnet and Big Apple parameter sets. [63]. In Fig. 7 we present our results for gravitational mass of static neutron star and its radius for DBHP and other parameterizations. It is observed that the maximum gravitational mass of the static neutron star for DBHP parameter set is 2.03 M\(\odot\) which is in good agreement with the mass constraints from GW170817 event, pulsars PSRJ1614-2230, PSRJ0348+0432, and PSRJ0740+6620 [16; 41; 59; 60; 64]. The radius (\(R_{1.4}\)) of canonical mass is 13.39 Km for DBHP parameterization which satisfies the radius constraints from NICER [59; 60; 65]. The value of \(R_{1.4}\)for NL3 parameterization is 14.61 Km which seems to rule out the constraints for \(R_{1.4}\) extracted from Ref. [66]. The tidal deformability \(\Lambda\) rendered by the companion stars on each other in a binary system can provide remarkable pieces of information on the EOS of neutron stars [67; 68]. The tidal influences of its companion in BNS system will deform neutron stars in the binary system and, the resulting change in the gravitational potential modifies the BNS orbital motion and its corresponding gravitational wave (GW) signal. This effect on GW phasing can be parameterized by the dimensionless tidal deformability parameter, \(\Lambda_{i}=\lambda_{i}/M_{i}^{5}\), i = 1, 2. For each neutron star, its quadrupole moment \(\mathcal{Q}_{j,k}\) must be related to the tidal field \(\mathcal{E}_{j,k}\) caused by its companion as, \(\mathcal{Q}_{j,k}=-\lambda\mathcal{E}_{j,k}\), where, \(j\) and \(k\) are spatial tensor indices. 
The dimensionless tidal deformability parameter \(\Lambda\) of a static, spherically symmetric compact star depends on the neutron star compactness parameter C and a dimensionless quadrupole Love number k\({}_{2}\) as, \(\Lambda=\frac{2}{3}k_{2}C^{-5}\). The \(\Lambda\) critically parameterizes the deformation of neutron stars under the given tidal field, therefore it should depend on the EOS of nuclear dense matter. To measure the Love number \(k_{2}\) along with the evaluation of the TOV equations we have to compute \(y_{2}=y(R)\) with initial boundary condition y(0) = 2 from the first-order differential equation [67; 68; 69; 70] simultaneously, \[y^{\prime} = \frac{1}{r}[-r^{2}Q-ye^{\lambda}\{1+4\pi Gr^{2}(P-\mathcal{E})\} -y^{2}], \tag{19}\] \[Q \equiv 4\pi Ge^{\lambda}(5\mathcal{E}+9P+\frac{\mathcal{E}+P}{c_{s}^ {2}})-6\frac{e^{\lambda}}{r^{2}}-{\nu^{{}^{\prime}}}^{2}\] (20) \[e^{\lambda} \equiv (1-\frac{2Gm}{r})^{-1}\] (21) \[\nu^{\prime} \equiv 2Ge^{\lambda}(\frac{m+4\pi Pr^{3}}{r^{2}}). \tag{22}\] First, we get the solutions of Eq.(19) with boundary condition, y\({}_{2}\) = y(R), then the electric tidal Love number k\({}_{2}\) is calculated from the expression as, \[k_{2} = \frac{8}{5}C^{5}(1-2C)^{2}[2C(y_{2}-1)-y_{2}+2]\{2C(4(y_{2}+1)C^{ 4}\] \[+ (6y_{2}-4)C^{3}+(26-22y_{2})C^{2}+3(5y_{2}-8)C-3y_{2}+6)\] \[- 3(1-2C)^{2}(2C(y_{2}-1)-y_{2}+2)\log(\frac{1}{1-2C})\}^{-1}.\] Fig. 8 shows the results of dimensionless tidal deformability \(\Lambda\) as a function of gravitational mass for neutron stars for DBHP and other parameterizations. The value of \(\Lambda\) decreases with an increase in the gravitational mass Figure 6: (color online) Variation of Pressure as a function of Energy Density for DBHP parameter set. EOS computed with NL3, IOPB-1, FSUGarnet and Big Apple models are also shown for comparison. The Shades region represents the observational constraints taken from reference [58]. Figure 7: (color online) Relationship between neutron star mass and its radius for DBHP parameterization. The results are compared with NL3, IOPB-1, FSUGarnet and Big Apple parameters. of the neutron star and reduces to a very small value at the maximum mass. The value of \(\Lambda_{1.4}\) obtained for canonical mass with DBHP parameters is 682 \(\pm\) 125 which satisfies the finding from the GW170817 event [71; 3; 72] for the EOS of dense nuclear matter. It is noteworthy that the our analysis of tidal deformability (\(\Lambda_{1.4}\)) lies within the constraint (\(\Lambda_{1.4}\leq 800\)) for GW170817 event [71]. But value of \(\Lambda_{1.4}\) obtained for DBHP model (682) has marginal overlap with revised limit \(\Lambda_{1.4}\leq 580\) within 1\(\sigma\) uncertainty [18]. This is attributed to the impact of inclusion of PREX-II data in our fit which produces stiff symmetry energy with density slope L = 83.9 MeV. We are looking forward that the new terrestrial experiments and astrophysical observations may impose tighter bounds. In Table 4, we present the results for the various properties of static stars with DBHP parameterization. The theoretical uncertainties calculated for the properties using Eqs. (13 and 14) are also listed. Results obtained with other parameter sets are also shown for comparison. We obtain a very small theoretical uncertainties for the maximum mass \(M_{max}\) (1.9 %), maximum mass radius \(R_{max}\) (2.5 %) and radius \(R_{1.4}\) (3 %) of neutron star. 
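As a rough illustration of how the mass-radius curves of Fig. 7 follow from Eqs. (16)-(18), the sketch below integrates the TOV equations outward from a chosen central pressure in geometrized units (G = c = 1). The polytropic EOS, the constants `K_poly` and `Gamma`, and the central pressure are placeholders standing in for the tabulated DBHP \(\beta\)-equilibrium EOS; obtaining \(\Lambda\) would additionally require integrating Eq. (19) for \(y(R)\) alongside, which is omitted here.

```python
# Minimal TOV mass-radius sketch in geometrized units (G = c = 1); illustrative only.
# A simple polytrope P = K_poly * eps^Gamma stands in for the DBHP EOS of Fig. 6.
import numpy as np

K_poly, Gamma = 100.0, 2.0                      # placeholder polytropic constants

def eps_of_P(P):
    return (P / K_poly) ** (1.0 / Gamma)        # invert P = K_poly * eps^Gamma

def tov_rhs(r, P, m):
    eps = eps_of_P(P)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))   # Eq. (16)
    dmdr = 4.0 * np.pi * r**2 * eps                                          # Eq. (17)
    return dPdr, dmdr

def mass_radius(P_c, dr=1e-3):
    r, P, m = dr, P_c, 0.0
    while P > 1e-12 * P_c:                      # stop where the pressure vanishes (the surface)
        dPdr, dmdr = tov_rhs(r, P, m)
        P, m, r = P + dr * dPdr, m + dr * dmdr, r + dr
    return r, m                                 # stellar radius and gravitational mass

print(mass_radius(P_c=1e-3))
```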
The small uncertainties might be attributed to the fact that the inclusion of \(M_{max}\) in the fit data constrains the high-density regime of the EOS. A relatively large uncertainty (\(\approx\) 18%) is obtained for \(\Lambda_{1.4}\). This is due to the fact that \(\Lambda\propto R^{5}\), which indicates that a precise measurement of the tidal deformability can constrain the NS radius within narrow bounds. Indeed, it is believed that no terrestrial experiment can reliably constrain the EOS of a neutron star [3].

Figure 8: (Color online) Variation of the dimensionless tidal deformability (\(\Lambda\)) with respect to the gravitational mass for the DBHP parameterization. The results for the NL3, IOPB-1, FSUGarnet and Big Apple parameters are also shown.

\begin{table} \begin{tabular}{c c c c c} \hline **EOS** & **M\({}_{max}\)** & **R\({}_{max}\)** & **R\({}_{1.4}\)** & \(\Lambda_{1.4}\) \\ & (M\({}_{\odot}\)) & (km) & (km) & \\ \hline DBHP & 2.03\(\pm\)0.04 & 11.68\(\pm\)0.29 & 13.39\(\pm\)0.41 & 682\(\pm\)125 \\ NL3 & 2.77 & 13.27 & 14.61 & 1254 \\ IOPB-1 & 2.15 & 11.95 & 13.28 & 694 \\ FSUGarnet & 2.06 & 11.70 & 12.86 & 624 \\ Big Apple & 2.6 & 12.41 & 12.96 & 717 \\ \hline \end{tabular} \end{table} Table 4: The properties of nonrotating neutron stars, along with the theoretical uncertainties, obtained for the DBHP parameter set. Results are also compared with those of the other parameter sets. \(M_{\rm max}\) and \(R_{\rm max}\) denote the maximum gravitational mass and the corresponding radius, respectively. \(R_{1.4}\) and \(\Lambda_{1.4}\) denote the radius and the dimensionless tidal deformability at \(1.4M_{\odot}\).

### Correlations of nuclear matter, neutron star properties and model parameters

We now discuss the correlation coefficients, shown in Fig. 9, between the model parameters and the nuclear matter properties, the neutron skin thickness of the \({}^{208}\)Pb nucleus, as well as the NS observables. The isoscalar nuclear matter properties such as E/A, K and M*/M show strong correlations with the isoscalar parameters \(g_{\sigma}\), \(g_{\omega}\) and \(\overline{\kappa}\). It can also be observed from the figure that the symmetry energy slope parameter (\(L\)) can be constrained by the coupling parameters \(a_{2}\), \(b_{1}\) and \(b_{2}\) along with the coupling parameter \(g_{\rho}\), as suggested by their correlations. The value of \(\Delta r_{np}\) is found to be well constrained by the parameters \(g_{\rho}\) and \(b_{2}\), as they have strong correlations. This study is quite consistent with the results reported in [33; 35]. Finally, we discuss the correlations between the neutron star observables and the Lagrangian model parameters, as shown in Fig. 9. A strong correlation between the maximum neutron star mass and the \(\omega\)-meson self-coupling parameter \(\zeta\) is missing in the case of the DBHP parameterization. \(M_{max}\) displays moderate correlations with the isovector coupling parameters \(c_{1}\) and \(b_{1}\). A large maximum mass may be generated either by having a stiff EOS for SNM or a stiff symmetry energy. If the symmetry energy is soft, then one must stiffen the EOS of SNM, which can be done by tuning the parameter \(\zeta\). But the symmetry energy of the DBHP model is stiff, as shown in Fig. 5; the symmetry energy slope parameter at saturation density is found to be 83.9 MeV. The stiff symmetry energy thereby weakens the correlation between \(\zeta\) and \(M_{max}\). This suggests that the maximum mass results from a competition between \(\zeta\) and \(L\). This further implies that the parameter
\(\zeta\) should be well correlated to \(c_{1}\) and \(b_{1}\) and this is what exactly reflected from the correlations shown in Fig.1. The values of \(L\) and \(K_{sym}\) are found to be constrained by the parameters \(c_{1}\) and \(b_{1}\). Finally, in Fig. 10 we display the correlation coefficients between the properties of nuclear matter, neutron star, and neutron skin thickness of \({}^{208}\)Pb. A strong correlation of neutron skin thickness of \({}^{208}\)Pb nucleus with J, L, \(R_{1.4}\) and \(\Lambda_{1.4}\) is observed. As per the expectation, radius \(R_{1.4}\) is found to have a strong correlation with J and L. These findings are quite in harmony with the results reported in Ref. [33; 35]. The curvature of the symmetry energy (\(K_{sym}\)) is also found to have a strong correlation with \(R_{1.4}\) and \(\Lambda_{1.4}\). Figure 10: (Color online) Correlation coefficients for bulk nuclear matter and neutron start properties and neutron skin of \({}^{208}\)Pb for DBHP parametrization. Figure 9: (Color online) Correlation coefficients between the model parameters and a set of neutron star observables as well as the bulk properties of nuclear matter at the saturation density for DBHP parametrization (see text for details). Summary The new relativistic interaction DBHP for the relativistic mean field model has been generated by keeping in view the PREX-II data for neutron-skin in \({}^{208}\)Pb nucleus, astrophysical constraints in addition to those usually employed, like, binding energy, charge radii for finite nuclei and empirical data on nuclear matter at the saturation density. We have included all possible self and mixed interactions between \(\sigma\), \(\omega\), and \(\rho\)-meson up to the quartic order so that the coupling parameters obey the naturalness behavior as imposed by the effective field theory [28]. The Covariance analysis enabled us to asses the statistical uncertainties in the estimation of the model parameters and observables of interest as well as the correlations among them. The DBHP parameter set is obtained such that it reproduces the ground state properties of the finite nuclei, bulk nuclear matter properties and also satisfies the constraints of mass and radius of the neutron star and its dimensionless deformability \(\Lambda\) from recent astrophysical observations [18; 19; 58; 66]. The root mean square errors in the total binding energies and charge rms radii for finite nuclei included in our fit for DBHP parameterization are 2.1 MeV and 0.02 fm respectively. The Bulk nuclear matter properties obtained are well consistent with the current empirical data [3; 57]. The maximum gravitational mass and radius (\(R_{1.4}\)) of the neutron star comes out to be 2.029 \(\pm\) 0.038 M\(\odot\) and 13.388 \(\pm\) 0.521 km respectively [41]. The value of \(\Lambda_{1.4}\) which is equal to 682.497\(\pm\)125.090 for DBHP parameterization also satisfies the constraints for GW170817 event [71] and reported in Refs. [3; 72]. The parametrization generated in consideration of PREX-II data produces stiff symmetry energy coefficient and its density depdenedence leading to the \(\Lambda_{1.4}=682\pm 125\) which has marginal overlap with the revised constraint [18]. We are looking forward that the new terrestrial experiments and astrophysical observations may put more stringent constraints on the density dependence of the symmetry energy. ###### Acknowledgements. V.T. 
is highly thankful to Himachal Pradesh University for providing the computational facility, and to the Department of Science & Technology (Govt. of India) for providing financial assistance (DST/INSPIRE Fellowship/2017/IF170302) under the Junior/Senior Research Fellowship scheme. C.M. acknowledges partial support from the IN2P3 Master Project "NewMAC".
2305.10005
DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning
In this paper, we introduce self-distillation and online clustering for self-supervised speech representation learning (DinoSR) which combines masked language modeling, self-distillation, and online clustering. We show that these concepts complement each other and result in a strong representation learning model for speech. DinoSR first extracts contextualized embeddings from the input audio with a teacher network, then runs an online clustering system on the embeddings to yield a machine-discovered phone inventory, and finally uses the discretized tokens to guide a student network. We show that DinoSR surpasses previous state-of-the-art performance in several downstream tasks, and provide a detailed analysis of the model and the learned discrete units.
Alexander H. Liu, Heng-Jui Chang, Michael Auli, Wei-Ning Hsu, James R. Glass
2023-05-17T07:23:46Z
http://arxiv.org/abs/2305.10005v2
# DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning

###### Abstract

In this paper, we introduce self-**d**istillation and **o**nline clustering for self-supervised speech **r**epresentation learning (DinoSR), which combines masked language modeling, self-distillation, and online clustering. We show that these concepts complement each other and result in a strong representation learning model for speech. DinoSR first extracts contextualized embeddings from the input audio with a teacher network, then runs an online clustering system on the embeddings to yield a machine-discovered phone inventory, and finally uses the discretized tokens to guide a student network. We show that DinoSR surpasses previous state-of-the-art performance in several downstream tasks, and provide a detailed analysis of the model and the learned discrete units.

## 1 Introduction

Self-supervised speech representation learning techniques have been a game changer in recent years. Learning from unlabeled data has been shown to be effective for many downstream tasks such as speech recognition, translation, and language modeling [1; 2]. Among the flourishing self-supervised learning techniques, we are particularly interested in three methods: masked language modeling, self-distillation, and clustering. Masked language modeling (MLM; [3]) predicts the masked part of a sentence based on the unmasked context and was first developed for training language models with bidirectional self-attention models [4]. The strong performance in various natural language processing tasks has enabled representation learning with MLM to quickly succeed in the field. Unsurprisingly, the MLM concept also applies to speech [5; 6], as it shares a similar structure to text in a more complex form. Self-distillation representation learning has recently come into the spotlight with outstanding results for computer vision [7; 8] and speech tasks [9]. In contrast to the conventional supervised knowledge distillation method [10], self-supervised distillation does not require labeled data to train a teacher model to guide the student model. Instead, both models are trained with unlabeled data using paired relations augmented by data augmentation [7] or masking [9]. Clustering algorithms like K-means have been well-known unsupervised techniques since long before deep learning methods arose. In the deep learning era, researchers have found clustering mechanisms beneficial to self-supervised models in a differentiable form known as vector quantization [11]. Driven by the nature of speech, which is a continuous signal containing a spoken form of discrete text, vector quantization is an ideal match for representation learning, as many studies [12; 13; 5] have discovered. Besides serving as an information bottleneck that filters out unnecessary content in high-dimensional spaces and improves performance, clustering also provides a glimpse of the characteristics of the latent embeddings produced by the model by categorizing them [14]. In this paper, we introduce self-**d**istillation and **o**nline clustering for self-supervised speech **r**epresentation learning (DinoSR), which leverages the positive aspects of the aforementioned methods. We show that these concepts complement each other and result in a strong representation learning model for speech.
In brief, DinoSR first extracts contextualized embeddings from the input audio with a teacher network, then runs an online clustering system on the embeddings to yield a machine-discovered phone inventory, and finally uses the discretized tokens to guide the student network. Quantitatively, DinoSR surpasses the state-of-the-art in speech recognition with limited resources on LibriSpeech [15] and unsupervised acoustic unit discovery [16]. Moreover, DinoSR demonstrates strong interpretability by discretizing the high-dimensional embedding space into clusters closely aligned to human-defined phonetic units. ## 2 Related Work Self-supervised speech representation learning with deep neural networks first emerged in the form of autoregressive models [11; 17; 18; 19] where the goal is to predict the future based on past observations. Subsequently, bidirectional models [5; 20; 21; 13; 22] relaxed the unidirectional limitation to achieve better results. A common learning paradigm for bidirectional models is Masked Language Modeling (MLM) - masking part of the input and training the model to recover the missing information using unmasked targets. These targets can be derived from the audio signal using different strategies, such as surface features [5] or contrastive learning [13]. Following the MLM training scheme, HuBERT [20] proposed targeting discrete units generated by vanilla acoustic unit discovery systems. Such a system can be as simple as K-means clustering over MFCC features, or even random linear projections over spectrograms [23]. Interestingly, HuBERT found that the acoustic unit discovery system can be iteratively refined by running offline K-Means clustering on the output of a specific layer of the pre-trained model. However, several important hyper-parameters are required to obtain the best performance, such as the number of updates, the layer whose output is to be clustered, and the number of clusters for each iteration. While the proposed method is conceptually similar to HuBERT - MLM with discovered acoustic units, our method can be trained end-to-end with fewer heuristics by leveraging the self-distillation framework and online clustering. Our method is also closely related to self-distillation methods for representation learning. These methods originated from image representation learning [7; 8], training a pair of identical models named student and teacher networks. The key to this framework is to provide different views of the same input by image augmentation to each model, and also to update them in different policies - gradient descent for the student model and exponential moving average for the teacher model. Following the self-distillation framework, Baevski et al. [9] generalized the method to speech processing by replacing image augmentation with the MLM masking strategy and found it effective. The key difference between this work and prior work is the online clustering mechanism that derives discrete targets instead of using continuous embeddings from the teacher model as targets. We also note that our method differs from studies in knowledge distillation from pre-trained speech representation models [24; 25; 26; 27] which focus on inference efficiency and model compression. ## 3 Method ### Self-distillation Paradigm As illustrated in Figure 1, our method shares the same framework as recent self-supervised learning methods with self-distillation such as DINO [8]. 
The goal is to train a student network \(\theta_{\text{student}}\) guided by a teacher network \(\theta_{\text{teacher}}\) where both models share the same architecture, which, in our work, is a \(K\)-layer transformer encoder [4]. The teacher network in the self-distillation framework is simply a copy of the randomly initialized student network at the beginning of training. To train the framework, we need to generate different _views_ of the same input data for each model to avoid a trivial solution (\(\theta_{\text{student}}=\theta_{\text{teacher}}\)). While this is often done by data augmentation in computer vision, we followed Baevski et al. [9] to use input masking as an alternative for speech. The input speech is partially masked for the student model to generate the masked representation \(\mathbf{z}_{t}^{K}\in\mathbb{R}^{D}\) where \(t=1,...,T\) is the sequence length. For the teacher model, the input is unmasked, and we denote the output representation \(\tilde{\mathbf{z}}_{t}^{K}\). Besides the different views of the same input, the parameter update policies of the two models are also different. While the student network is updated with gradient descent (with an objective function detailed later in SS3.2), the teacher network parameter is updated via tracking the student network parameter with an exponential moving average (EMA): \[\theta_{\text{teacher}}\longleftarrow\lambda\;\theta_{\text{ teacher}}+(1-\lambda)\;\theta_{\text{student}}, \tag{1}\] where \(\lambda\) is the decay rate of the teacher model in each training step. ### Self-supervised Learning with DinoSR **Acoustic Unit Discovery with Online Clustering.** Under the self-distillation framework, our key contribution is to derive a good target from the teacher network to guide the student network. Prior work on self-supervised speech representation investigated acoustic unit discovery by either performing offline clustering of contextualized representations [20] or online clustering of non-contextualized representations [13]. DinoSR uses an online acoustic unit discovery system on top of the teacher network, providing contextualized discrete units. Unlike prior work using K-means clustering over MFCC features or pre-trained representations, our model's unit discovery system cannot be fixed since the teacher model evolves with the student model. As a solution, we propose performing online clustering at multiple layers of the teacher network. For the \(k\)-th layer of the teacher model within the top \(N\) layers (i.e., \(k\in(K-N,K]\)), we introduce a codebook (set of centroids) \(\mathbf{E}^{k}=\{\mathbf{e}_{1}^{k},...,\mathbf{e}_{V}^{k}\}\) with \(V\) codewords (centroids) \(\mathbf{e}_{i}^{k}\in\mathbb{R}^{D}\). We update the codebook as follows: for each codebook entry \(v\), we first create a set \(\mathbf{\tilde{Z}}_{t}^{k}\) of the teacher output frames closest to the current representation of \(v\) as per the codebook \[\mathbf{\tilde{Z}}_{v}^{k}=\left\{\;\mathbf{\tilde{z}}_{t}^{k}\;\;\middle|\;v =\operatorname*{argmin}_{i\in V}\left\|\mathbf{\tilde{z}}_{t}^{k}-\mathbf{e} _{i}^{k}\right\|_{2}\right\}, \tag{2}\] where the set index \(v\) will be used as a pseudo label to train the student model. 
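A minimal sketch of the update policy described above: the student is updated by gradient descent on the objective of the next subsection, while the teacher only tracks the student through the EMA of Eq. (1). The function below assumes `student` and `teacher` are two copies of the same PyTorch module; the names are illustrative and not taken from the authors' code.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay):
    # theta_teacher <- decay * theta_teacher + (1 - decay) * theta_student   (Eq. 1)
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

# Per training step: the student receives the partially masked features, the teacher
# receives the unmasked ones; only the student receives gradients, then ema_update() is called.
```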
Each codeword is then updated using a weighted sum of the embeddings in this set using EMA: \[\mathbf{s}_{v}^{k} \longleftarrow\tau\;\mathbf{s}_{v}^{k}+(1-\tau)\;\sum\mathbf{ \tilde{Z}}_{v}^{k}, \tag{3}\] \[n_{v}^{k} \longleftarrow\tau\;n_{v}^{k}+(1-\tau)\;\left|\mathbf{\tilde{Z} }_{v}^{k}\right|,\] \[\mathbf{e}_{v}^{k} \longleftarrow\frac{\mathbf{s}_{v}^{k}}{n_{v}^{k}}.\] Figure 1: An overview of DinoSR: the teacher network is an exponential moving average of the student network and takes unmasked speech as input to extract target features. Online clustering is applied to multiple layers of the teacher, each with a separate codebook. The student network is trained to predict the corresponding clusters of masked input. Both teacher network and online clustering (shadowed regions) do not require gradients. For each codeword \(\mathbf{e}_{v}^{k}\), the first term \(\mathbf{s}_{v}^{k}\) tracks the sum of all neighboring teacher representations (i.e., \(\tilde{\mathbf{Z}}_{v}^{k}\) from Eq. 2), and the second term \(m_{v}^{k}\) tracks the amount of the neighbors. With both terms approximated by EMA using the decay rate \(\tau\), we have the codeword \(\mathbf{e}_{v}^{k}\) which is the moving average of its neighbor set. In practice, we found performing online clustering on the subset of the frames where \(t\in M\) is effective while reducing computation. More details and discussions on online clustering are available in SSA.2. Since we define codewords by their neighboring representations, we can treat codewords as acoustic units discovered from the teacher model in an unsupervised manner and use them for training the student network. The clustering process creates discrete labels for frames based on their context in an end-to-end fashion. In SS4.6, we show that these codewords possess similarities to human-defined acoustic units. **Online Clustering v.s. Vector Quantization.** Van Den Oord et al. [11] first introduced vector quantization (VQ) to speech representation learning, encoding input audio signals into a sequence of discrete units. Later studies [28; 18; 14; 5] found that discretizing embedding spaces not only reduced the dimensionality of the model but also lead to performance improvements in downstream tasks. Another benefit of VQ to speech representation learning is better model interpretability. Previous work [12; 29] showed that the discretized representation could be viewed as model-discovered acoustic units which often aligned with human-defined units such as phonemes. While there are similarities between VQ and the online clustering mechanism introduced here, they are also conceptually different. Prior works [18; 12; 13; 29] adopted VQ layer to serve as an efficacious discrete information bottleneck in the forward pass of the model; DinoSR leverages online clustering on gradient-free embedding space of the teacher model to mine acoustic units that can be treated as pseudo-labels. The most significant advantages of our method are 1) reducing computational costs; 2) bypassing estimations that are required by the non-differentiable nature of VQ, e.g., approximating the gradient with straight-through gradient estimator [30]; 3) mitigating problems in practice such as code collapse as shown in SS4.6. 
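The gradient-free codebook update of Eqs. (2)-(3) can be sketched as follows; `z_teacher` is a (T, D) matrix of teacher outputs at one target layer, `E` is the (V, D) codebook, and `s`, `n` are its EMA statistics. The tensor names and the use of all frames (rather than only the masked timesteps mentioned above) are my simplifications, not the authors' implementation.

```python
import torch

@torch.no_grad()
def online_cluster_update(z_teacher, E, s, n, tau):
    # Eq. (2): assign every teacher frame to its nearest codeword
    labels = torch.cdist(z_teacher, E).argmin(dim=1)                    # (T,) pseudo-labels v
    one_hot = torch.nn.functional.one_hot(labels, E.size(0)).float()    # (T, V)
    # Eq. (3): EMA of the per-codeword sum and count, then re-normalize the codewords
    s.mul_(tau).add_(one_hot.t() @ z_teacher, alpha=1.0 - tau)          # sum of assigned frames
    n.mul_(tau).add_(one_hot.sum(dim=0), alpha=1.0 - tau)               # number of assigned frames
    E.copy_(s / n.clamp(min=1e-6).unsqueeze(1))
    return labels                                                       # targets for the student loss
```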
**Self-supervised Learning via Cluster Prediction** For each output frame of the student model \(\mathbf{z}_{t}^{K}\), the training objective is to predict the codeword index \(v\) of the corresponding frame from the teacher model (i.e., \(\tilde{\mathbf{z}}_{t}^{k}\in\tilde{\mathbf{Z}}_{v}^{k}\)) across all targeted layers, \[\sum_{t\in M}\sum_{k\in(K-N,K]}\log p_{\phi_{k}}(v|\mathbf{z}_{t}^{K}), \tag{4}\] where \(M\) denotes the set of all masked timesteps and \(\phi_{k}\) is the prediction head composed of a linear projection \(\mathbb{R}^{D\times V}\) followed by a softmax activation for each target layer \(k\). Note that the prediction head is at the last layer \(K\) of the student model regardless of the target layer \(k\). In SSA.3, we summarize the pre-training of DinoSR with pseudo-code to provide a complete view of our method. ## 4 Experiments ### Pre-training Following Hsu et al. [20] and Baevski et al. [9], we use 960 hours of speech from the LibriSpeech [15] corpus to pre-train our model. We focus on the Base sized transformer [4] with \(K=12\) layers and embedding dimension \(D=768\) due to resource constraints, with the batch size of 63 minutes of audio in total across 16 GPUs. The 16 kHz input waveform is first downsampled to 50Hz with a convolutional feature encoder [13]. For the student model, we randomly masked \(M=80\%\) of the 50Hz input features before feeding them into the transformer, with each masked span no shorter than 10 frames. For the teacher model, the input feature is not masked, and online clustering is performed at the top \(N=8\) layers (i.e., \(k\in[5,12]\)), each with a codebook with \(V=256\) codewords. The codebook decay rate \(\tau\) is fixed at 0.9. The student model is trained for 400k steps with the Adam optimizer [31] with a learning rate ramped up linearly to 0.0005 within the first 12k steps, held for the following 188k steps, and exponentially decayed to 0.00005 for the final 200k steps. The teacher model decay rate \(\lambda\) increases linearly from 0.999 to 0.9999 within the first 30k updates, held for the next 200k steps, and increased to 1.0 for the remaining steps. Pre-training the model takes about 180 hours on 16 Nvidia V100 GPUs. After pre-training, the student model is evaluated on different downstream tasks. ### Acoustic Unit Discovery To examine the effectiveness of the online clustering mechanism used in DinoSR, we consider the acoustic unit discovery benchmark introduced in the Zero Resource Speech Challenge 2021 [16]. In this task, the speech representation extracted from a frozen pre-trained model is used for unit discovery. The task is an ABX discrimination test: given a pair of spoken triphones (A and B, e.g., 'aba' and 'apa'), the model must decide which triphone a new input (X, e.g., 'apa') corresponds to. The new triphone can be spoken by the same speaker as A and B in the same-speaker setup, or by a different speaker in a more challenging cross-speaker setup. The evaluation metric is the decision error rate on the dev set. To measure the similarity between two sequences of a speech representation, the task introduced a pseudo-distance defined as the average framewise distance over the dynamic time warping path. A common choice of framewise distance is the cosine distance between two embedding vectors. Different from cosine similarity, we define the framewise distance as the JS-divergence between framewise probability over the codebook as defined in Eq. 4 to take advantage of the learned discrete units. 
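A small helper for the framewise pseudo-distance just described: the JS divergence between two frames' posterior distributions over the V codewords (the softmax outputs of Eq. 4). The ABX pseudo-distance then averages this quantity over the DTW alignment path; the function name is mine.

```python
import numpy as np

def js_divergence(p, q, eps=1e-10):
    # p, q: length-V probability vectors over the codebook for two frames
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```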
Results are shown in Table 1 with three important observations. First, it can be shown that previous self-supervised methods do not surpass methods specialized for acoustic unit discovery [33; 32]. DinoSR, however, outperforms all other methods by a margin except in the easiest same-speaker clean-speech setup. Second, DinoSR performs better than HuBERT, which also leverages representation clustering for training. Finally, in this task, the continuous self-distillation method data2vec lags both DinoSR and HuBERT. With these observations, we conclude that the codebook design in DinoSR is effective for audio clustering, leading to its superior performance in acoustic unit discovery. ### Fine-tuning DinoSR for Speech Recognition Following the protocol proposed by Baevski et al. [13] and adopted by prior work [20; 22; 9], we fine-tune the student model using CTC [35] using labeled speech data under four different setups, using 10 minutes / 1 hour / 10 hours from LibriLight [36] or 100 hours from LibriSpeech [15]. After fine-tuning, we measure the word error rate (WER) on LibriSpeech by decoding test sets using the official 4-gram language model. The decoding hyper-parameter is searched with Ax 2 following the prior works. Footnote 2: [https://github.com/facebook/Ax](https://github.com/facebook/Ax) We compare DinoSR to four recent works that all adopted MLM with the Base sized transformer, and followed the same fine-tuning regime: 1) wav2vec 2.0 [13], a method relying on contrastive learning with VQ over local target representations; 2) HuBERT [20], an iterative method with offline clustering over global target representations; 3) WavLM [22], an iterative method guided by 1st iteration HuBERT and an auxiliary denoising task; and 4) data2vec [9], a self-distillation method with regression loss over contextualized target representations. In Table 2, we summarize the results \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & Target & \multicolumn{2}{c}{same-speaker} & \multicolumn{2}{c}{cross-speaker} & \multirow{2}{*}{Average} \\ \cline{2-2} \cline{4-6} & layer & clean & other & clean & other \\ \hline \hline \multicolumn{6}{l}{**Best challenge participants1**} \\ Nguyen et al. [32] & - & 3.26 & 3.81 & 4.00 & 5.91 & 4.25 \\ Chorowski et al. [33] & - & **2.95** & 3.54 & 4.50 & 7.05 & 4.51 \\ \hline \multicolumn{6}{l}{**Self-supervised speech representation models2**} \\ wav2vec 2.0 [13] & 6 & 4.15 & 5.22 & 4.82 & 7.38 & 5.39 \\ HuBERT [20] & 11 & 3.07 & 3.90 & 3.71 & 6.19 & 4.22 \\ data2vec [9] & 4 & 4.03 & 5.09 & 4.72 & 6.97 & 5.20 \\ ContentVec [34] & 12 & 2.98 & 3.70 & 3.44 & 5.17 & 3.82 \\ \hline DinoSR & 5 & 3.08 & **3.43** & **3.42** & **4.42** & **3.59** \\ \hline \hline \end{tabular} * Results from [https://zerospeech.com/tasks/task_1/results/](https://zerospeech.com/tasks/task_1/results/) * Evaluating official model released by the authors. \end{table} Table 1: Acoustic unit discovery results on ZeroSpeech 2021 challenge [16] in ABX error rate. and compare them to prior work using the same setup. We also list the total pre-training steps and batch size used for each method to indicate the computation needed. Compared to other methods that rely on discrete units, our method is significantly stronger while reducing the batch size (vs. contrastive method wav2vec 2.0) and the training steps (vs. iterative offline clustering methods HuBERT and WavLM). 
This demonstrates the advantage of learning discrete units with online clustering instead of contrastive learning or offline clustering. An improvement over data2vec, the previous state-of-the-art method, is observed in most setups. This result shows that using discrete units as a learning target benefits speech representation learning. Despite being on par or slightly worse in a few setups, this benchmark has been thoroughly studied; thus, progress is not easily attained. Moreover, we show that DinoSR consistently outperforms data2vec in other benchmarks later in this section. Beyond recognition performance, we examined the data efficiency of each method, as shown in Figure 4.3. We introduce the metric _hours of speech processed_ that reflects the amount of speech one model needs to "hear" during pre-training. The metric is defined as the number of updates required to train the model \(\times\) batch size in hours, using attributes available in Table 2. By comparing DinoSR against prior work, we see the advantage of being more data efficient, requiring less training yet performing better. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Model} & Pre-training & Batch size & \multicolumn{2}{c}{dev} & \multicolumn{2}{c}{test} \\ \cline{3-7} & steps & (minutes) & clean & other & clean & other \\ \hline \hline \multicolumn{7}{l}{**10 minutes labeled data**} \\ wav2vec 2.0 [13] & 400k & 96 & 8.9 & 15.7 & 9.1 & 15.6 \\ HuBERT [20] & 250k + 400k & 47 & 9.1 & 15.0 & 9.7 & 15.3 \\ data2vec [9] & 400k & 63 & 7.3 & 11.6 & 7.9 & 12.3 \\ \hline DinoSR & 400k & 63 & **6.6** & **10.8** & **7.3** & **11.8** \\ \hline \hline \multicolumn{7}{l}{**1 hr labeled data**} \\ wav2vec 2.0 [13] & 400k & 96 & 5.0 & 10.8 & 5.5 & 11.3 \\ HuBERT [20] & 250k + 400k & 47 & 5.6 & 10.9 & 6.1 & 11.3 \\ WavLM [22] & 250k + 400k & 187 & - & - & 5.7 & 10.8 \\ data2vec [9] & 400k & 63 & **4.0** & 8.5 & **4.6** & 9.1 \\ \hline DinoSR & 400k & 63 & 4.1 & **8.1** & **4.6** & **8.7** \\ \hline \hline \multicolumn{7}{l}{**10 hr labeled data**} \\ wav2vec 2.0 [13] & 400k & 96 & 3.8 & 9.1 & 4.3 & 9.5 \\ HuBERT [20] & 250k + 400k & 47 & 3.9 & 9.0 & 4.3 & 9.4 \\ WavLM [22] & 250k + 400k & 187 & - & - & 4.3 & 9.2 \\ data2vec [9] & 400k & 63 & 3.3 & 7.5 & 3.9 & 8.1 \\ \hline DinoSR & 400k & 63 & **3.1** & **7.0** & **3.6** & **7.6** \\ \hline \hline \multicolumn{7}{l}{**100 hr labeled data**} \\ wav2vec 2.0 [13] & 400k & 96 & 2.7 & 7.9 & 3.4 & 8.0 \\ HuBERT [20] & 250k + 400k & 47 & 2.7 & 7.8 & 3.4 & 8.1 \\ WavLM [22] & 250k + 400k & 187 & - & - & 3.4 & 7.7 \\ data2vec [9] & 400k & 63 & **2.2** & **6.4** & **2.8** & 6.8 \\ \hline DinoSR & 400k & 63 & 2.3 & **6.4** & 2.9 & **6.7** \\ \hline \hline \end{tabular} \end{table} Table 2: Word Error Rate (WER) on LibriSpeech standard dev/test sets. All models are Base size (12-layer) transformer encoders pre-trained on the full LibriSpeech dataset (960 hours) and decoded with 4-gram language model. The best result in each setup is **bolded** and the second best is underlined. Figure 2: The trade-off between performance (WER on LibriSpeech dev-other) and data efficiency (hours of speech the model processed in total during pre-training) for different methods. ### Downstream Evaluation We further evaluate the effectiveness of DinoSR representations using the Speech Processing Universal PERformance Benchmark (SUPERB) [37; 39]. SUPERB is a benchmark consisting of ten speech-processing tasks spanning content, semantics, speaker, and paralinguistics tasks. 
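The "hours of speech processed" figures behind the data-efficiency comparison above follow directly from the update counts and batch sizes quoted in Table 2 (illustrative bookkeeping only):

```python
# pre-training updates x batch size (minutes of audio) / 60 = hours of speech processed
configs = {
    "wav2vec 2.0": (400_000, 96),
    "HuBERT":      (250_000 + 400_000, 47),
    "WavLM":       (250_000 + 400_000, 187),
    "data2vec":    (400_000, 63),
    "DinoSR":      (400_000, 63),
}
for name, (steps, batch_minutes) in configs.items():
    print(f"{name}: {steps * batch_minutes / 60:,.0f} hours")
```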
To better understand the capabilities of modeling content and semantics, we report the results of our model on phoneme recognition (PR), automatic speech recognition (ASR), keyword spotting (KS), intent classification (IC), slot filling (SF), and speech translation (ST). In SUPERB, each pre-trained SSL model is frozen and serves as a feature extractor. In each task, a set of learnable weights are used for weighted-summing all layers' features. Then, the weighted-summed features are fed into a lightweight prediction head to generate outputs. Thus, only the learnable weights and the prediction head are fine-tuned with labeled data. The SUPERB results are shown in Table 3. In content tasks, the DinoSR surpasses prior art on PR and ASR, showing its capability of capturing better phonetic information. For semantic tasks like IC and SF, DinoSR has similar performance as WavLM [22] and HuBERT. Though DinoSR falls slightly behind the state-of-the-art model WavLM on SUPERB, it is worth pointing out that WavLM is a second iteration model based on HuBERT with a large batch size, requiring significantly more computational resources for pre-training. Moreover, WavLM has done a hyper-parameter search for each task in SUPERB (see Appendix A in Chen et al. [22]) whereas DinoSR is tested with no more than five runs in each downstream task due to resource limitations. ### Impact of Codebook Hyper-parameters To study the impact of several hyper-parameters used by DinoSR, we vary different options, including the codebook size \(V\) (default 8), the top \(N\) layers to apply online clustering (default 8), and the codebook decay rate of \(\tau\) (default 0.9). To reduce computation, we use the 10-hour subset to fine-tune the teacher network after 200k steps of pre-training. WERs are reported by decoding the dev-other subset with a fixed language model weight of 2, and word insertion penalty of \(-1\), following Baevski et al. [13]. Results are presented in Figure 3 and Table 4. Surprisingly, varying the codebook size \(V\) from 64 to 2048 only changed the resulting WER by a small margin. Compared to codebook size \(V\), the choice of the top \(N\) layers to cluster has a larger impact on the results, with the best choices ranging from 6 to 10. For the codebook decay rate \(\tau\), we found values between 0.5 to 0.99 worked well in general. Since the teacher network decay \(\lambda\) anneals \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{Content} & \multicolumn{3}{c}{Semantic} \\ \cline{2-9} Model1 & PR & ASR & KS & IC & SF & ST \\ & PER\(\downarrow\) & WER\(\downarrow\) & Acc\(\uparrow\) & Acc\(\uparrow\) & F1\(\uparrow\) & CER\(\downarrow\) & BLEU\(\uparrow\) \\ \hline wav2vec 2.0 [13] & 5.74 & 6.43 & 96.23 & 92.35 & 88.30 & 24.77 & 14.81 \\ CCC-wav2vec 2.0 [38] & 5.95 & 6.30 & 96.72 & 96.47 & 88.08 & 24.34 & 16.20 \\ HuBERT2 & 5.41 & 6.42 & 96.30 & 98.34 & 88.53 & 25.20 & 15.53 \\ WavLM23 [22] & 4.84 & 6.31 & **96.79** & **98.63** & **89.38** & **22.86** & **20.74** \\ data2vec [9] & 4.69 & 4.94 & 96.56 & 97.63 & 88.59 & 25.27 & 17.42 \\ \hline DinoSR & **3.21** & **4.71** & 96.69 & 98.02 & 88.83 & 23.57 & 17.68 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on Speech Processing Universal PERformance Benchmark [37] (SUPERB). The tasks include phoneme recognition (PR), automatic speech recognition (ASR), keyword spotting (KS), intent classification (IC), slot filling (SF), and speech translation (ST). 
Metrics include accuracy (Acc%), phoneme error rate (PER%), word error rate (WER%), F1 score (F1%), concept error rate (CER%), and bilingual evaluation understudy score (BLEU). The best result in each task is **bolded** and the second best is **underlined**. throughout the training, we also tested and found annealing the codebook decay \(\tau\) to 0.99 or 0.999 is unnecessary. We suspect the stability originates from the slow-changing property of the teacher network updated via EMA. ### Analysis In this section, we took a closer look at the properties of the discrete units. We focused on the fifth layer of DinoSR and leave more analysis and comparisons against prior works in the appendix SSA.4. Cluster quality.To measure the quality of the discrete units learned by DinoSR, we adopt the three metrics proposed in HuBERT[20] as well as codebook perplexity [40]: * _Cluster purity_ (Cls Pur.) measures the purity of the set of associated codewords of each phone. * _Phone purity_ (Phn Pur.) measures the purity of the set of associated phones of each codeword. * _Phone-normalized mutual information_ (PNMI) measures the uncertainty reduction for the underlying phone when observing the codeword of a frame. * _Codebook perplexity_ (Code Ppl.) \(2^{-\sum_{V}p(v)\log_{2}p(v)}\) measures the diversity of codewords being used by the model with \(p(v)\) being the frequency distribution over the dataset. For example, code ppl.\(=\) codebook size indicates all codewords are being used equally. To compute these metrics, forced alignment is used to acquire the ground truth phone of each feature frame on LibriSpeech dev-clean and dev-other sets. The maximum cluster size for all methods is fixed to 256 for a fair comparison except VQ-APC [12]. Note that for online clustering methods, the number of active clusters might be lower due to the defect of vector quantization, and we report the number of active clusters. VQ-APC suffers from code collapse with vanilla VQ which leads to lower code usage and code ppl., so we use the model with a larger 512 codewords instead. Co-training APC [29] can be viewed as an improved version of VQ-APC which solved the problem by penalizing low codebook perplexity during training. Wav2vec 2.0 [13] is not applicable to this test since it used multiple codebooks that partitioned feature dimensions into 16 groups. Results are listed in Table 5. The MFCC clusters, which are used to train the first iteration HuBERT, provided a baseline for purity and PNMI. The first and second iterations of HuBERT, which served as the teacher in HuBERT's \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method & Active & Code & Cls & Phn & PNMI \\ & cluster & Ppl. & Pur. & Pur. & \\ \hline \hline \multicolumn{6}{l}{K-means (offline clustering)} \\ \hline MFCC & 256 & 228.2 & 0.06 & 0.30 & 0.28 \\ HuBERT-iter1 L6 & 256 & 231.8 & 0.15 & 0.60 & 0.60 \\ HuBERT-iter2 L9 & 256 & 228.6 & 0.15 & 0.61 & 0.61 \\ DinoSR L5 & 256 & 242.4 & **0.17** & **0.63** & **0.62** \\ \hline \multicolumn{6}{l}{Codebook (online clustering)} \\ \hline VQ-APC & 98 & 72.1 & 0.08 & 0.24 & 0.19 \\ Co-training APC & 164 & 135.0 & 0.09 & 0.31 & 0.29 \\ DinoSR L5 & 217 & 179.2 & **0.19** & **0.58** & **0.57** \\ \hline \hline \end{tabular} \end{table} Table 4: Varying codebook decay \(\tau\). Figure 3: Varying codebook size \(V\) and the number of codebooks \(N\). 
\begin{table} \begin{tabular}{l c c c} \hline \hline \(\tau\) & WER \\ \hline 0.5 & 8.57 \\ 0.6 & 8.30 \\ 0.7 & 8.54 \\ 0.8 & 8.88 \\ 0.9 & 8.40 \\ 0.99 & 8.73 \\ 0.99 & 9.43 \\ 0.9 \(\rightarrow\) 0.99 & 8.71 \\ 0.9 \(\rightarrow\) 0.999 & 8.60 \\ \hline \hline \end{tabular} \end{table} Table 5: Discrete unit quality on LibriSpeech dev set measured by Codebook Perplexity (Code Ppl.), Cluster purity (Cls Pur.), Phone purity (Phn Pur.), and Phone-normalized mutual information (PNMI). Results are compared to HuBERT [20], VQ-APC [12], and co-training APC [29] using code and models released by the authors. iterative pre-training procedure, show a significant improvement over MFCCs. The results show performing K-means clustering on DinoSR, which does not require an iterative process, produces slightly better quality clusters. DinoSR makes better use of codewords compared to prior VQ works, having 217 active clusters out of 256 despite running online clustering. Better codebook usage results in a notable improvement in cluster quality since each cluster can be finer-grained. DinoSR achieved a comparable phone purity and PNMI compared to offline methods while being more efficient. Interestingly, the codebook's cluster purity surpasses offline clustering methods, which further supports the effectiveness of the proposed method. Mapping phones to codewords.To demonstrate the quality of the learned codebook, we visualize the conditional probability \(P(\text{phone}|\text{code})\) accumulated over the LibriSpeech dev sets in Figure 4. We highlight two interesting findings: 1) Each codeword is typically concentrated on one phone, reflecting the high phone purity obtained in the quality test. In the case where two phones shared high usage of the same codeword, we observed the sharing phones are acoustically similar such as /sp/ (short pause) and /sil/ (silence) in the upper left corner. 2) The overall usage of codewords captures the long-tail nature of phone distribution. The more frequent phones (upper part in figure) occupied significantly more codewords. The top 10 most frequent phones (/sp/ to /L/) held over 50% of the active codewords. This phenomenon, again, supports our claim that the proposed online clustering method is a good acoustic unit discovery system. Besides quantitative evaluations, we provide a qualitative result in Figure 5 by using t-SNE [41] to visualize the codebook in 2-dimensional space. By labeling each codeword using articulation manner classes in English, we revealed the fact that some of the acoustic attributes are embedded in the high-dimensional space. For example, vowels and silences demonstrated a high degree of concentration. ## 5 Conclusion In this paper, we introduced DinoSR - a new self-supervised method motivated by the continuous-to-discrete nature of speech understanding, leveraging recent advances in representation learning. The key innovation of DinoSR is to introduce a gradient-free online clustering method that leads to meaningful acoustic units. Our main contributions include advancing the state-of-the-art in different benchmarks with end-to-end training and providing a closer look at embeddings from speech transformers via the discrete unit. Future work includes structural learning with the codebook, scaling to larger models, and extending the model to different modalities. Figure 4: The conditional probability \(P(\text{phone}|\text{code})\) on LibriSpeech dev set visualized. 
The y-axis is the phone set sorted by the number of occurrences, the x-axis is the 217 active codewords sorted by the most correlated phone. A larger figure for clarity is provided in SSA.4. Figure 5: Visualizing codebook using t-SNE [41]. Each codeword is categorized into an articulation manner class by the most correlated phone.
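For completeness, the cluster-quality metrics used in Table 5 can be computed from a (codeword x phone) co-occurrence matrix obtained with forced alignment. The sketch below follows the definitions cited from HuBERT [20]; the array name and interface are mine.

```python
import numpy as np

def cluster_quality(counts, eps=1e-12):
    # counts[v, p]: number of frames assigned to codeword v whose aligned phone is p
    joint = counts / counts.sum()                        # P(v, p)
    p_code, p_phone = joint.sum(axis=1), joint.sum(axis=0)
    phone_purity = joint.max(axis=1).sum()               # how pure each codeword is in phones
    cluster_purity = joint.max(axis=0).sum()             # how pure each phone is in codewords
    mi = np.sum(joint * np.log((joint + eps) / (np.outer(p_code, p_phone) + eps)))
    pnmi = mi / -np.sum(p_phone * np.log(p_phone + eps))
    code_ppl = 2.0 ** (-np.sum(p_code * np.log2(p_code + eps)))
    return cluster_purity, phone_purity, pnmi, code_ppl
```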
2306.09004
Annotator Consensus Prediction for Medical Image Segmentation with Diffusion Models
A major challenge in the segmentation of medical images is the large inter- and intra-observer variability in annotations provided by multiple experts. To address this challenge, we propose a novel method for multi-expert prediction using diffusion models. Our method leverages the diffusion-based approach to incorporate information from multiple annotations and fuse it into a unified segmentation map that reflects the consensus of multiple experts. We evaluate the performance of our method on several datasets of medical segmentation annotated by multiple experts and compare it with state-of-the-art methods. Our results demonstrate the effectiveness and robustness of the proposed method. Our code is publicly available at https://github.com/tomeramit/Annotator-Consensus-Prediction.
Tomer Amit, Shmuel Shichrur, Tal Shaharabany, Lior Wolf
2023-06-15T10:01:05Z
http://arxiv.org/abs/2306.09004v1
# Annotator Consensus Prediction for Medical Image Segmentation with Diffusion Models ###### Abstract A major challenge in the segmentation of medical images is the large inter- and intra-observer variability in annotations provided by multiple experts. To address this challenge, we propose a novel method for multi-expert prediction using diffusion models. Our method leverages the diffusion-based approach to incorporate information from multiple annotations and fuse it into a unified segmentation map that reflects the consensus of multiple experts. We evaluate the performance of our method on several datasets of medical segmentation annotated by multiple experts and compare it with the state-of-the-art methods. Our results demonstrate the effectiveness and robustness of the proposed method. Our code is publicly available at [https://github.com/tomeramit/Annotator-Consensus-Prediction](https://github.com/tomeramit/Annotator-Consensus-Prediction) Keywords:Multi annotator Image segmentation Diffusion Model. ## 1 Introduction Medical image segmentation is a challenging task that requires accurate delineation of structures and regions of interest in complex and noisy images. Multiple expert annotators are often employed to address this challenge, to provide binary segmentation annotations for the same image. However, due to differences in experience, expertise, and subjective judgments, annotations can vary significantly, leading to inter- and intra-observer variability. In addition, manual annotation is a time-consuming and costly process, which limits the scalability and applicability of segmentation methods. To overcome these limitations, automated methods for multi-annotator prediction have been proposed, which aim to fuse the annotations from multiple annotators and generate an accurate and consistent segmentation result. Existing approaches for multi-annotator prediction include majority voting [7], label fusion [3], and label sampling [12]. In recent years, diffusion models have emerged as a promising approach for image segmentation, for example by using learned semantic features [2]. By modeling the diffusion of image intensity values over the iterations, diffusion models capture the underlying structure and texture of the images and can separate regions of interest from the background. Moreover, diffusion models can handle noise and image artifacts, and adapt to different image modalities and resolutions. In this work, we propose a novel method for multi-annotator prediction, using diffusion models for medical binary segmentation. The goal of multi-annotator prediction is to fuse multiple annotations of the same image from different annotators and obtain a more accurate and reliable segmentation result. In practice, we leverage the diffusion-based approach to create one map for each level of consensus. To obtain the final prediction, we average the obtained maps and obtain one soft map. We evaluate the performance of the proposed method on a dataset of medical images annotated by multiple annotators. Our results demonstrate the effectiveness and robustness of the proposed method in handling inter- and intra-observer variability and achieving higher segmentation accuracy than the state-of-the-art methods. The proposed method could improve the efficiency and quality of medical image segmentation and facilitate the clinical decision-making process. 
## 2 Related work Multi-annotator strategies Research attention has recently been directed towards the issues of multi-annotator labels [7, 12]. During training, Jensen et al. [12] randomly sampled different labels per image. This method produced a more calibrated model. Guan et al. [7] predicted the gradings of each annotator individually and acquired the corresponding weights for the final prediction. Kohl et al. [15] used the same sampling strategy to train a probabilistic model, based on a U-Net combined with a conditional variational autoencoder. Another recent probabilistic approach [20] combines a diffusion model with KL divergence to capture the variability between the different annotators. In our work, we use consensus maps as the ground truth and compare them to other strategies. Diffusion Probabilistic Models (DPM) [23] are a class of generative models based on a Markov chain, which can transform a simple distribution (e.g. Gaussian) to data sampled from a complex distribution. Diffusion models are capable of generating high-quality images that can compete with and even outperform the latest GAN methods [23, 9, 19, 5]. A variational framework for the likelihood estimation of diffusion models was introduced by Huang et al. [11]. Subsequently, Kingma et al. [14] proposed a Variational Diffusion Model that produces state-of-the-art results in likelihood estimation for image density. Conditional Diffusion Probabilistic Models In our work, we use diffusion models to solve the image segmentation problem as conditional generation, given the image. Conditional generation with diffusion models includes methods for class-conditioned generation, which is obtained by adding a class embedding to the timestep embedding [19]. In [4], a method for guiding the generative process in DDPM is present. This method allows the generation of images based on a given reference image without any additional learning. In the domain of super-resolution, the lower-resolution image is upsampled and then concatenated, channelwise, to the generated image at each iteration [21, 10]. A similar approach passes the low-resolution images through a convolutional block [16] prior to the concatenation. A previous study directly applied a diffusion model to generate a segmentation mask based on a conditioned input image [1]. Baranchuk et al. [2] extract features from a pretrained diffusion model for training a segmentation network, while our diffusion model generates the output mask. Compared to the diffusion-based image segmentation method of Wolleb et al. [26], our architecture differs in two main aspects: (i) the concatenation method of the condition signal, and (ii) an encoder that processes the conditioning signal. We also use a lower value of T, which reduces the running time. ## 3 Method Our approach for binary segmentation with multi-annotators employs a diffusion model that is conditioned on the input image \(I\in R^{W\times H}\), the step estimation \(t\), and the consensus index \(c\). The diffusion model updates its current estimate \(x_{t}\) iteratively, using the step estimation function \(\epsilon_{\theta}\). See Fig. 1 for an illustration. 
Given a set of C annotations \(\{A_{k}^{i}\}_{i=1}^{C}\) associated with input sample \(I_{k}\), we define the ground truth consensus map at level \(c\) to be \[M_{k}^{c}[x,y]=\begin{cases}1&\sum_{i=1}^{C}A_{k}^{i}[x,y]\geq c,\\ 0&\text{otherwise},\end{cases} \tag{1}\] During training, our algorithm iteratively samples a random level of the consensus \(c\sim U[1,2,...,C]\) and an input-output pair \((I_{k},M_{k}^{c})\). The iteration number \(1\leq t\leq T\) is sampled from a uniform distribution and \(X_{T}\) is sampled from a normal distribution. We then compute \(x_{t}\) from \(X_{T}\), \(M_{k}^{c}\) and \(t\) according to: \[x_{t}=\sqrt{\bar{\alpha}_{t}}M_{k}^{c}+\sqrt{(1-\bar{\alpha}_{t})}X_{T},X_{T} \thicksim N(0,I_{n\times n}). \tag{2}\] where \(\bar{\alpha}\) is a constant that defines the schedule of added noise. Figure 1: The figure below illustrates our proposed method for multi-annotator segmentation. The input \(I_{k}\) image with the noisy segmentation map \(x_{t}\) is passed through our network iteratively \(T\) times in order to obtain an output segmentation map \(x_{0}\). Each network receives the consensus level \(c\) as an embedding \(z_{c}\) as well as the time step data. The current step index \(t\), and the consensus index \(c\) are integers that are translated to \(z_{t}\in R^{d}\) and \(z_{c}\in R^{d}\), respectively with a pair of lookup tables. The embeddings are passed to the different networks \(F\), \(D\) and \(E\). In the next step, our algorithm encodes the input signal \(x_{t}\) with network \(F\) and encodes the condition image \(I_{k}\) with network \(G\). We compute the conditioned signal \(u_{t}=F(x_{t},z_{c},z_{t})+G(I_{k})\), and apply it to the networks \(E\) and \(D\), where the output is the estimation of \(x_{t-1}\). \[\epsilon_{\theta}(x_{t},I_{k},z_{t},z_{c})=D(E(F(x_{t},z_{t},z_{c})+G(I_{k}),z _{t},z_{c}),z_{t},z_{c})\,. \tag{3}\] The loss function being minimized is: \[E_{x_{0},\epsilon,x_{e},t,c}[||\epsilon-\epsilon_{\theta}(x_{t},I_{k},z_{t},z_{ c})||^{2}]. \tag{4}\] The training procedure is depicted in Alg. 1. The total number of diffusion steps \(T\) is set by the user, and C is the number of different annotators in the dataset. Our model is trained using binary consensus maps (\(M_{k}^{c}\)) as the ground truth, where \(k\) is the sample id, and \(c\) is the consensus index. ``` Input \(T\), \(I\) for\(c=1,...,C\)do sample \(x_{T_{c}}\thicksim N(\mathbf{0},\mathbf{I}_{\mathbf{n}\times\mathbf{n}})\) for\(t=T,T-1,...,1\)do sample \(z\thicksim N(\mathbf{0},\mathbf{I}_{\mathbf{n}\times\mathbf{n}})\) \(z_{c}=LUT_{c}(c)\), \(z_{t}=LUT_{t}(t)\) \(\beta_{t}=\frac{10^{-4}(T-t)+2\ast 10^{-2}(t-1)}{T-1}\) \(\alpha_{t}=1-\beta_{t}\). \(\tilde{\alpha}_{t}=\prod_{s=0}^{t}\alpha_{s}\) \(\tilde{\beta}_{t}=\frac{1-\alpha_{t-1}}{1-\alpha_{t}}\beta_{t}\) \(\epsilon_{t}^{\prime}=\frac{1-\alpha_{t}}{\sqrt{1-\alpha_{t}}}\epsilon_{\theta }(\tilde{x}_{t},I,z_{t},z_{c})\) \(\bar{x}_{t-1_{c}}=\alpha_{t}-\frac{1}{2}(x_{t}-\epsilon_{t}^{\prime})\) \(x_{t-1_{c}}=\bar{x}_{t-1_{c}}+\mathbb{1}_{\left\lfloor t>1\right\rfloor}\tilde {\beta}_{t}^{\frac{1}{2}}z\) return\((\sum_{i=1}^{C}x_{0i})/C\) ``` **Algorithm 2** Inference Algorithm The inference process is described in Alg. 2. We sample our model for each consensus index, and then calculate the mean of all results to obtain our target, which is a soft-label map representing the annotator agreement. 
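A small sketch of the data side of the method: building the consensus targets of Eq. (1) from a stack of C binary annotations, and recovering a soft label by averaging the levels, as done at inference with the generated maps. Array names are illustrative.

```python
import numpy as np

def consensus_maps(annotations):
    # annotations: (C, H, W) binary masks from C annotators for one image
    votes = annotations.sum(axis=0)                       # annotators marking each pixel
    C = annotations.shape[0]
    return np.stack([(votes >= c).astype(np.float32)      # M^c of Eq. (1), c = 1..C
                     for c in range(1, C + 1)])

def soft_label(annotations):
    # averaging the C consensus maps yields the per-pixel fraction of agreeing annotators
    return consensus_maps(annotations).mean(axis=0)

# In the method itself, each M^c is *generated* by the diffusion model conditioned on the
# image and the consensus index c, and the generated maps are averaged exactly as above.
```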
Mathematically, if the consensus maps are perfect, this averaging is equivalent to assigning each image location with the fraction of annotations that consider this location to be part of the mask (if \(c\) annotators mark a pixel, it would appear in levels \(1..c\)). In Section 4, we compare our method with other variants and show that estimating the fraction map directly, using an identical diffusion model, is far inferior to estimating each consensus level separately and then averaging.

**Employing multiple generations** Since calculating \(x_{t-1}\) during inference includes the addition of \(\mathbb{1}_{\{t>1\}}\tilde{\beta}_{t}^{\frac{1}{2}}z\), where \(z\) is drawn from a standard normal distribution, there is significant variability between different runs of the inference method on the same inputs, see Fig. 2(b). In order to exploit this phenomenon, we run the inference algorithm multiple times and then average the results. This way, we stabilize the results of segmentation and improve performance, as demonstrated in Fig. 2(c). We use twenty-five generated instances in all experiments. In the ablation study, we quantify the gain of this averaging procedure.

**Architecture** In this architecture, the U-Net's decoder \(D\) is conventional and its encoder is broken down into three networks: \(E\), \(F\), and \(G\). The last encodes the input image, while \(F\) encodes the segmentation map of the current step \(x_{t}\). The two processed inputs have the same spatial dimensionality and number of channels. Based on the success of residual connections [8], we sum these signals: \(F(x_{t},z_{t},z_{c})+G(I)\). This sum then passes to the rest of the U-Net encoder \(E\). The input image encoder \(G\) is built from Residual in Residual Dense Blocks [24] (RRDBs), which combine multi-level residual connections without batch normalization layers. \(G\) has an input 2D-convolutional layer, an RRDB with a residual connection around it, followed by another 2D-convolutional layer, leaky RELU activation and a final 2D-convolutional output layer. \(F\) is a 2D-convolutional layer with a single-channel input and an output of \(L\) channels. The encoder-decoder part of \(\epsilon_{\theta}\), i.e., \(D\) and \(E\), is based on U-Net, similarly to [19]. Each level is composed of residual blocks, and at resolutions 16x16 and 8x8 each residual block is followed by an attention layer. The bottleneck contains two residual blocks with an attention layer in between. Each attention layer contains multiple attention heads. The residual block is composed of two convolutional blocks, where each convolutional block contains group-norm, SiLU activation, and a 2D-convolutional layer. The residual block receives the time embedding through a linear layer, SiLU activation, and another linear layer. The result is then added to the output of the first 2D-convolutional block. Additionally, the residual block has a residual connection that passes all its content. On the encoder side (network \(E\)), there is a downsample block after the residual blocks of the same depth, which is a 2D-convolutional layer with a stride of two. On the decoder side (network \(D\)), there is an upsample block after the residual blocks of the same depth, which is composed of the nearest interpolation that doubles the spatial size, followed by a 2D-convolutional layer. Each layer in the encoder has a skip connection to the decoder side.

## 4 Experiments

We conducted a series of experiments to evaluate the performance of our proposed method for multi-annotator prediction.
Our experiments were carried out on datasets of the QUBIQ benchmark1. We compared the performance of our proposed method with several state-of-the-art methods. Footnote 1: Quantification of Uncertainties in Biomedical Image Quantification Challenge in MICCAI20’- link **Datasets** The Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ), is a recently available challenge dataset specifically for the evaluation of inter-rater variability. QUBIQ comprises four different segmentation datasets with CT and MRI modalities, including brain growth (one task, MRI, seven raters, 34 cases for training and 5 cases for testing), brain tumor (one task, MRI, three raters, 28 cases for training and 4 cases for testing), prostate (two subtasks, MRI, six raters, 33 cases for training and 15 cases for testing), and kidney (one task, CT, three raters, 20 cases for training and 4 cases for testing). Figure 2: Multiple segmentation results on all datasets of the QUBIQ benchmark. (a) dataset, (b) input image, (c) a subset of the obtained consensus maps for multiple runs with different consensus index on the same input, (d) average result, visualized by the ’bwr’ color scale between 0 (blue) and 1 (red), and (e) ground truth. Following [13], the evaluation is performed using the soft Dice coefficient with five threshold levels, set as (0.1, 0.3, 0.5, 0.7, 0.9). **Implementation details** The number of diffusion steps in previous works was 1000 [9] and even 4000 [19]. The literature suggests that more is better [22]. In our experiments, we employ 100 diffusion steps, to reduce inference time. The AdamW [18] optimizer is used in all our experiments. Based on the intuition that the more RRDB blocks, the better the results, we used as many blocks as we could fit on the GPU without overly reducing batch size. Following [13], for all datasets of the QUBIQ benchmark the input image resolution, as well as the test image resolution, was \(256\times 256\). The experiments were performed with a batch size of four images and eight RRDB blocks. The network depth was seven, and the number of channels in each depth was \([L,L,L,2L,2L,4L,4L]\), with \(L=128\). The augmentations used were: random scaling by a factor sampled uniformly in the range \([0.9,1.1]\), a rotation between 0 and 15 degrees, translation between \([0,0.1]\) in both axes, and horizontal and vertical flips, each applied with a probability of 0.5. **Results** We compare our method with FCN [17], MCD [6], FPM [27], DAF [25], MV-UNet [13], LS-UNet [12], MH-UNet [7], and MRNet [13]. We also compare with models that we train ourselves, using public code AMIS [20], and DMISE [26]. The first is trained in a scenario where each annotator is a different sample ("No annotator" variant of our ablation results below), and the second is trained on the consensus setting, similar to our method. As can be seen in Tab. 1, our method outperforms all other methods across all datasets of QUBIQ benchmark. **Ablation Study** We evaluate alternative training variants as an ablation study in Tab 2. The "Annotator" variant, in which our model learns to produce each annotator binary segmentation map and then averages all the results to obtain the required soft-label map, achieves lower scores compared to the "Consensus" variant, which is our full method. The "No annotator" variant, where images were paired with random annotators without utilizing the annotator IDs, achieves a slightly lower average score compared to the "Annotator" variant. 
\begin{table} \begin{tabular}{l c c c c c} \hline Method & Kidney & Brain & Tumor & Prostate 1 & Prostate 2 \\ \hline FCN & 70.03 & 80.99 & 83.12 & 84.55 & 67.81 \\ MCD & 72.93 & 82.91 & 86.17 & 86.40 & 70.95 \\ FPM & 72.17 & - & - & - & - \\ DAF & - & - & - & 85.98 & 72.87 \\ MV-UNet & 70.65 & 81.77 & 84.03 & 85.18 & 68.39 \\ LS-UNet & 72.31 & 82.79 & 85.85 & 86.23 & 69.05 \\ MH-UNet & 73.44 & 83.54 & 86.74 & 87.03 & 75.61 \\ MRNet & 74.97 & 84.31 & 88.40 & 87.27 & 76.01 \\ AMIS & 68.53 & 74.09 & 92.95 & 91.64 & 21.91 \\ DMISE & 74.50 & 92.80 & 87.80 & 94.70 & 80.20 \\ Ours & **96.58** & **93.81** & **93.16** & **95.21** & **84.62** \\ \hline \end{tabular} \end{table} Table 1: QUBIQ soft Dice results.

Figure 3: Soft Dice vs. #generated images.

We also note that our "No annotator" variant outperforms the analog AMIS model in four out of five datasets, indicating that our architecture is somewhat preferable. In a third variant, our model learns to predict the soft-label map that denotes the fraction of annotators that mark each image location directly. Since this results in fewer generated images, we generate \(C\) times as many images per test sample. The score of this variant is also much lower than that of our method.

Next, we study the effect of the number of generated images on performance. The results can be seen in Fig. 3. In general, increasing the number of generated instances tends to improve performance. However, the number of runs required to reach optimal performance varies between classes. For example, for the Brain and the Prostate 1 datasets, optimal performance is achieved using 5 generated images, while on Prostate 2 the optimal performance is achieved using 25 generated images. Fig. 4 depicts samples from multiple datasets and presents the progression as the number of generated images increases. As can be seen, as the number of generated images increases, the outcome becomes more and more similar to the target segmentation.

\begin{table} \begin{tabular}{l c c c c c} \hline Method & Kidney & Brain & Tumor & Prostate 1 & Prostate 2 \\ \hline Annotator & 96.13 & 89.88 & 92.51 & 93.89 & 76.89 \\ No annotator & 94.46 & 89.78 & 91.78 & 92.58 & 78.61 \\ Soft-label & 65.41 & 79.56 & 75.60 & 73.23 & 65.24 \\ Consensus (our method) & **96.58** & **93.81** & **93.16** & **95.21** & **84.62** \\ \hline \end{tabular} \end{table} Table 2: Ablation study showing soft Dice results for various alternative methods of training similar diffusion models.

\begin{table} \begin{tabular}{l c} \hline \hline Dataset & Mean score between pairs \\ \hline Kidney & 94.95 \\ Brain & 85.74 \\ Tumor & 90.65 \\ Prostate 1 & 94.64 \\ Prostate 2 & 89.91 \\ \hline \hline \end{tabular} \end{table} Table 3: Pairwise Dice scores per dataset.

## 5 Discussion

In order to investigate the relationship between the annotator agreement and the performance of our model, we conducted an analysis by calculating the average Dice score between each pair of annotators across the entire dataset. The results of this pairwise Dice analysis can be found in Tab. 3, where higher mean scores indicate a greater consensus among the annotators. We observed that our proposed method demonstrated improved performance on datasets with higher agreement among annotators, specifically the kidney and prostate 1 datasets. Conversely, the performance of the other methods significantly deteriorated on the kidney dataset, leading to a lower correlation between the Dice score and the overall performance.
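For reference, the evaluation quantities used above can be read as in the following sketch: the soft Dice with the five thresholds and the mean pairwise annotator Dice of Tab. 3. This is a plausible reading of the metrics described in the text, not the benchmark's official scoring code.

```python
import numpy as np

def dice(a, b, eps=1e-7):
    # Binary Dice overlap between two boolean masks.
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def soft_dice(pred_soft, target_soft, thresholds=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # Threshold both soft maps at the five levels and average the binary Dice.
    scores = [dice(pred_soft >= t, target_soft >= t) for t in thresholds]
    return float(np.mean(scores))

def mean_pairwise_dice(annotations):
    # annotations: list of binary masks from the raters of one case (Tab. 3).
    C = len(annotations)
    pairs = [dice(annotations[i], annotations[j])
             for i in range(C) for j in range(i + 1, C)]
    return float(np.mean(pairs))
```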
Additionally, we examined the relationship between the number of annotators and the performance of our model. Surprisingly, we found no significant correlation between the number of annotators and the performance of our model. ## 6 Conclusions Shifting the level of consensus required to mark a region from very high to as low as one annotator, can be seen as creating a dynamic shift from a very conservative segmentation mask to a very liberal one. As it turns out, this dynamic is well-captured by diffusion models, which can be readily conditioned on the level of consensus. Another interesting observation that we make is that the mean (over the consensus level) of the obtained consensus masks is an effective soft Figure 4: Multiple segmentation results per number of generated images. (a) dataset, (b) input image, (c) results for 1, 5, 10, 25 generated images, and (d) ground truth. mask. Applying these two elements together, we obtain state-of-the-art results on multiple binary segmentation tasks.
2304.07650
Understanding Developers Privacy Concerns Through Reddit Thread Analysis
With the growing global emphasis on regulating the protection of personal information and increasing user expectation of the same, developing with privacy in mind is becoming ever more important. In this paper, we study the concerns, questions, and solutions developers discuss on Reddit forums to enhance our understanding of their perceptions and challenges while developing applications in the current privacy-focused world. We perform various forms of Natural Language Processing (NLP) on 437,317 threads from subreddits such as r/webdev, r/androiddev, and r/iOSProgramming to identify both common points of discussion and how these points change over time as new regulations are passed around the globe. Our results show that there are common trends in privacy topics among the different subreddits while the frequency of those topics differs between web and mobile applications.
Jonathan Parsons, Michael Schrider, Oyebanjo Ogunlela, Sepideh Ghanavati
2023-04-15T22:35:01Z
http://arxiv.org/abs/2304.07650v1
# Understanding Developers Privacy Concerns Through Reddit Thread Analysis

###### Abstract

With the growing global emphasis on regulating the protection of personal information and increasing user expectation of the same, developing with privacy in mind is becoming ever more important. In this paper, we study the concerns, questions, and solutions developers discuss on Reddit forums to enhance our understanding of their perceptions and challenges while developing applications in the current privacy-focused world. We perform various forms of Natural Language Processing (NLP) on 437,317 threads from subreddits such as r/webdev, r/androiddev, and r/iOSProgramming to identify both common points of discussion and how these points change over time as new regulations are passed around the globe. Our results show that there are common trends in privacy topics among the different subreddits while the frequency of those topics differs between web and mobile applications.

Reddit, Developers, Privacy, Application development, Privacy policy, Natural language processing

## 1 Introduction

As the world is continuously advancing in ever-growing connected ways, software developers and requirements analysts are required to implement privacy-preserving solutions to protect users' privacy in their applications. On the other hand, users are becoming more conscious about how their information is collected and used by various organizations. These parallel growths resulted in the creation of several privacy-focused regulations, such as the General Data Protection Regulation (GDPR) [1] and the California Consumer Privacy Act (CCPA) [2]. To ensure compliance with these regulations, systematic privacy-by-design approaches and tools need to become the norm. Without proper privacy education, tools, and guidelines, developers look into forums such as Stack Overflow [3] or Reddit [4] to find solutions to their privacy-related questions. Understanding the types of privacy-focused questions asked on these forums and developers' challenges helps better tailor the tools and approaches to their needs. In addition to surveys [5], in recent years, some work focused on evaluating privacy-related questions on Stack Overflow (SO) and Reddit [6, 7, 8]. Li et al. [8] conducted an analysis of Android developers' privacy concerns on the r/androiddev subreddit and discovered that privacy appears to be underrepresented. Tahaei et al. [6, 7] evaluated SO and showed that conversations are mostly focused on compliance with regulations, often citing official documents. Li et al. [9] used Reddit [10] to identify the narratives driving users' privacy concerns rather than developers' and showed that understanding the users' concerns could drive developers' concerns, as well. In this paper, we will extend Li et al.'s [8] approach by evaluating other similar subreddits, /r/iOSprogramming and /r/webdev, and conduct a comparative analysis on the types of privacy questions asked based on privacy requirements imposed by these three frameworks (i.e., Android, iOS, and Web) and the role of privacy regulations on the questions. We also examine the sentiment regarding privacy, comparing it amongst various subreddit communities to get a better idea about how developers actually view privacy, not just how they comply with it. Our analysis of 437,317 threads shows that there are differences in the most frequent topics based on mobile and web app development.
We also observe that regulations such as GDPR and CCPA impacted the topics' trends; however, we could not conclude that there was a change in the overall sentiment. Our research questions, which we will answer in this paper, are as follows:

* **RQ1:** What are the major privacy concerns on developer forums?
* **RQ2:** What is the overall sentiment of developing privacy requirements in the studied communities? How does it differ by subreddits?
* **RQ3:** How do regulations such as GDPR and CCPA influence RQ1 and RQ2?

## 2 Related Works

In recent years, several works focused on evaluating and understanding the privacy challenges of developers and their approach to implementing privacy requirements. Some studies evaluate developers' privacy behaviors through questionnaires or surveys [11, 12, 13] and show that developers generally see privacy as a burden and an afterthought and are not familiar with basic privacy concepts [14, 5]. Other research [6, 7, 8, 15] evaluates popular developer forums such as Stack Overflow (SO) [3] and Reddit [4] to identify the topics of developers' questions. Proferes et al. [10] analyzed 727 publications between 2010 and 2020 and identified a substantial increase in the usage of Reddit as a data collection medium by disciplines ranging from computer science to social sciences. Iqbal et al. [16] evaluated the data from the top 10 mobile and desktop apps' subreddits and identified that 54% of the posts included useful information such as bug reports or feature requests and could be used for requirements elicitation. Analysis of 4,957 Reddit comments in 180 security- and privacy-related discussions from /r/homeautomation shows that users' concerns are context-dependent and their attitude towards privacy and security can change during the different phases of adoption of smart home devices [17]. Li et al. [8] analyzed 207 discussions on the r/androiddev subreddit to identify how developers discuss personal data protection. Their findings indicate that privacy concerns are not discussed often on developers' forums and developers shy away from discussions relating to privacy concerning plan and execution trials. However, they posit that developers seem externally motivated by new demands for privacy emerging from privacy-focused regulations. Tahaei et al. [6, 7, 18] studied 315 privacy questions on SO and identified that the introduction of Google and Apple privacy labels resulted in an increase in the number of privacy-related questions on SO [6]. In another study [7], they explored the types of advice given by developers on privacy issues and compared them with Hoepman's approach [19]. They identified 148 pieces of advice focused on regulations and confidentiality, including 'inform,' 'hide,' 'control,' and 'minimize'. In this paper, we extend the current approaches [7, 8, 11, 18] in two ways: (a) the number of subreddits to review and (b) the scope at which they are analyzed. We extend Li et al. [8] by adding r/iOSprogramming and r/webdev to r/androiddev to get a more general idea of developers' privacy concerns. These subreddits cover the majority of current software development. We also conduct _sentiment_ analysis on the discussions, similar to [20, 21, 22], and evaluate how it may change between the different communities to gain insight into developers' emotions when it comes to privacy and the surrounding policy.
## 3 Methodologies

In this section, we describe our methodology for collecting and creating the Reddit dataset (i.e., Section 3.1) and then explain the process of analyzing the data (i.e., Section 3.2).

### 3.1 Gathering Reddit Data

To gather data, we utilize the Pushshift Multithread API Wrapper [23, 24], which allows for granular information and provides the ability to multithread, compared to other work [9, 20, 25]. The current related work provides insights regarding the _what_ and _how_ of extracting the information. Li et al. [8] focus on /r/androiddev, which includes 203k members (top 1% of subreddits by size) [26] and is an active community with discussions limited to high-level Android app design. We extend their effort to two other similar subreddits, /r/iOSprogramming (120k members) [27] and /r/webdev (1.4M members) [28], to broaden the scope of the analysis of developers' privacy concerns across various development platforms. In total, we pulled 100,040 submissions from r/androiddev, 55,553 submissions from r/iOSprogramming, and 281,724 submissions from r/webdev along with all associated comments, from January 2014 through November 2022. We chose these subreddits specifically due to their relative similarity in being platform-specific, developer-centric forums.

### 3.2 Processing and Analyzing Reddit Data

After collecting the data, we need to identify privacy-related submissions. A common way to do this is by filtering posts and discussions that contain the term "privacy" [6, 7], or other similar terms that are often used in discussions around privacy, such as "GDPR", "CCPA", "mac address", "location", etc. [8, 9, 21]. In Table 1, we augmented the terms identified by Li et al. [8] (i.e., Unique Identifier, Photo and Video, Audio, and Location) with general privacy and privacy regulation terms (i.e., the Privacy category in Table 1). We then used these keywords to create a privacy-related dataset before performing our analysis. We also preprocessed the data by removing stopwords, stemming, and lemmatizing text to prepare for our analysis [29]. To answer RQ1, we conducted simple phrase frequency analysis and then identified the topics of each submission by leveraging Latent Dirichlet Allocation (LDA) [30], similar to other work [29]. We first narrowed down our dataset to privacy questions via an Adaptive Boosting (AdaBoost) model to classify posts in our dataset as questions. The AdaBoost classifier contained 25 decision stumps and was trained on a combination of subsets of the SQuAD and SPAADIA datasets, which include phrases and sentences labeled as either statements or questions [31, 32, 33]. We transformed the training data into matrices of token counts and validated the classifier against 200 randomly sampled posts. We manually classified posts from AndroidDev containing 93 (46.5%) questions and 107 (53.5%) statements. The classification was performed by a single team member and verified by another; the agreed-upon method was to read both the title and body, and if either had a question from the redditor to the community, then it was classified as a question; framing and rhetorical questions were not considered questions. Against the validation set, our classification achieved 71% accuracy. Through LDA analysis, we generated ten topics of four words each for posts pre- and post-GDPR and pre- and post-CCPA. To address RQ2, we use qualitative metrics to evaluate both the discussions and posts [10].
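Before turning to the sentiment analysis, the RQ1 pipeline described above (keyword filtering, question classification with AdaBoost over token counts, and LDA topic extraction) can be sketched as follows. This is an illustrative outline, not the released code: post dictionaries with `title` and `body` fields and the shortened keyword list are assumptions, and the stemming/lemmatization and SQuAD/SPAADIA training-data preparation are omitted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.decomposition import LatentDirichletAllocation

PRIVACY_KEYWORDS = ["privacy", "gdpr", "ccpa", "location", "camera",
                    "microphone", "ip address", "email address"]  # subset of Table 1

def filter_privacy_posts(posts):
    # Keep posts whose title or body mentions any privacy-related keyword.
    return [p for p in posts
            if any(k in (p["title"] + " " + p["body"]).lower()
                   for k in PRIVACY_KEYWORDS)]

def train_question_classifier(train_texts, train_labels):
    # Token-count features + AdaBoost (25 estimators); the default base
    # estimator of AdaBoostClassifier is a depth-1 decision stump.
    vec = CountVectorizer()
    X = vec.fit_transform(train_texts)
    clf = AdaBoostClassifier(n_estimators=25)
    clf.fit(X, train_labels)          # labels: 1 = question, 0 = statement
    return vec, clf

def top_lda_topics(texts, n_topics=10, n_words=4):
    # Ten topics of four words each, as described for the pre/post analyses.
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in comp.argsort()[-n_words:][::-1]]
            for comp in lda.components_]
```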
Qualitatively, we perform sentiment analysis leveraging Natural Language Toolkit (NLTK) approaches [20, 21, 34]. We answer RQ3 by applying the same term frequency analysis and LDA topic analysis used for RQ1 against datasets filtered to pre- and post-regulations and trending the RQ2 sentiment analysis to examine if there is any change due to the introduction of GDPR (April 2016) and CCPA (June 2018).2 Footnote 2: The data, models, and analysis are available at: [https://github.com/mschrider/PEP_Privacy_Dev_Forum_Analysis/](https://github.com/mschrider/PEP_Privacy_Dev_Forum_Analysis/).

## 4 Results

In the first step, we conducted a simple word counting analysis to identify how often privacy-related words appear in the initial posts of the r/androiddev, r/iOSProgramming, and r/webdev forums.

\begin{table} \begin{tabular}{l l} \hline \hline Data Category & Privacy Keywords \\ \hline Privacy & private, privacy, gdpr, general data protection regulation, ccpa, ccpr, california consumer privacy act, california consumer privacy regulation \\ Unique Identifier & first name, last name, real name, identification number, id number, social security number, ssn, license number, passport number, screen name, user name, account name, user id, username, userid, online identifier, imei, device serial number, advertising id, android id, ssaid, mac address, imsi, instance id, guid, internet protocol address, ip address, email address, telephone number, phone number, line1 \\ Photo and Video & video recording, camera, gallery \\ Audio & audio recording, microphone, voice \\ Location & location, physical street address, home address, street name, city name, postal address \\ \hline \hline \end{tabular} \end{table} Table 1: A List of the Privacy Terms

We noticed that typical permission requests, such as "location", "camera", "gallery", "microphone" and "voice", are the most prevalent words, though they still make up a small portion of the overall posts. Additionally, we observed that the most common word, _location_, appears nearly twice as often as the second most common one, _camera_. With more detailed analysis, we noticed that while _location_ and _camera_ are privacy-related words, they are more often referred to for non-privacy reasons, such as the best methods for having an app interact with a camera. After this high-level evaluation, we delve into our research questions.

### RQ1 - What are the major privacy concerns on developer forums?

The initial phrase frequency analysis indicates that "location" and "camera" are the main privacy-related terms referenced in the titles and bodies of posts. Meanwhile, /r/webdev prioritizes "location" but replaces "camera" with "gallery", while "email address" and "username" are much higher up in the list (see Fig. 1 and Fig. 2). This data shows the inherent difference between the primary functions of web-based and mobile-based applications. Based on the trends above, we narrowed down the questions to those containing "location" or "camera" for further analysis. We observe that most questions are not in fact privacy-related; though for _location_, one of the most upvoted questions is "GDPR - What all do I need to do?". In general, we identified that the questions that were actually privacy-related were centered around asking for and granting permission to apps for capabilities that have impacts on privacy. Our LDA analysis shows common topics among privacy questions. Fig. 3 and Fig.
4 include the top four webdev topics for pre- and post-GDPR/CCPA.

Figure 1: Comparison of Title Text Frequency.

Figure 2: Comparison of Body Text Frequency.

We observed that 'gdpr' is one of the top trending topics from its expected non-existence pre-GDPR. Topics with variations of the word policy, consent, or cookie increased in frequency post-GDPR/CCPA, but privacy-related topics appear more frequently post-CCPA. There is an overlap between the post-CCPA and post-GDPR data; thus, it may be hard to distinguish the effects of each regulation in isolation.

### RQ2 - What is the overall sentiment of developing privacy in the studied communities? Does it differ by subreddit?

Overall, all subreddits show similar sentiment profiles, with titles generally tending toward neutral and the main bodies of text tending toward positive. The results in Fig. 5 show the trends over time, with markers for where GDPR and CCPA were introduced. While all subreddits show undulations, there is no concrete evidence of trends in sentiment shifting one way or another.

### RQ3 - How do regulations such as GDPR and CCPA influence RQ1 and RQ2?

Topics and terms show a significant change due to GDPR, but to a lesser extent due to CCPA. As shown in Fig. 3 and Fig. 4, the overall distribution of topics/terms changed noticeably after both regulations were enacted. However, despite the changing topics, we did not observe conclusive evidence of a change in sentiment post-GDPR/CCPA. All three subreddits showed very steady sentiment, with roughly similar noise values, across the entire timeline represented in the data.

Figure 3: webdev Top Topics Pre (left) and Post (right) GDPR.

Figure 4: webdev Top Topics Pre (left) and Post (right) CCPA.

## 5 Discussion

Similarities between /r/iOSProgramming and /r/androiddev are expected since both deal with mobile development. Mobile privacy concerns such as camera, microphone, and location permissions dominate the discussions, while on /r/webdev the main focus is on cookies and websites. Understanding the trends and commonalities regarding developers' privacy concerns is important to platform owners (e.g., Apple, Google, etc.), regulators, and requirements analysts. Platform owners should provide support to mitigate and resolve developers' privacy questions. Regulations drive developers' concerns; therefore, regulators should study the impact of their regulations on developer behaviors. With this information, requirements analysts could focus on developing approaches to elicit and model privacy requirements from regulations and automate the compliance process. This study was done solely on Reddit, which may introduce an inherent selection bias; for example, due to focusing on issues brought up by developers active in their online communities. While we extended previous research [8], it still leaves out significant areas of research such as conducting interviews, surveys, or analysis of other forums to gain insights. There are limitations to our NLP analysis. The AdaBoost question classifier does not consider relations between tokens, parts of speech utilized in Reddit posts, or other potential features, which leads to improper classification of rhetorical questions. For example, a post with: "Heard about a cool job posting? Let people know!" was misclassified. The sentiment analysis is also generic since the corpus used to generate the polarity scores, nltk.sentiment.vader, is specifically designed to be used with social media posts.
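A minimal sketch of this VADER-based scoring is shown below; per-post `title` and `body` strings are assumed, and the aggregation into the monthly trends of Fig. 5 is omitted.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def post_sentiment(title, body):
    # Compound polarity in [-1, 1], computed separately for title and body.
    return {"title": sia.polarity_scores(title)["compound"],
            "body": sia.polarity_scores(body)["compound"]}
```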
While Reddit is a social media platform, a more directed corpus around software development would most likely provide more accurate results. In the future, we will explore approaches proposed in [22, 35] to conduct a more detailed sentiment analysis. Lastly, our data collection for privacy-related questions is based on keyword search, which may result in missing a large amount of privacy-related content. We propose to use Natural Language Inference (NLI) approaches to extract a larger pool of data, similar to [36].

Figure 5: Sentiment over time for androiddev, iOSProgramming, and webdev subreddits.

## 6 Conclusion and Future Work

In this paper, we looked into developers' concerns and questions on Reddit forums to gain a better understanding of their attitudes toward privacy and the challenges they face when developing applications. We examined 437,317 threads from the subreddits r/webdev, r/androiddev, and r/iOSProgramming to determine the most frequently discussed topics as well as how the sentiments around these topics have evolved in response to GDPR and CCPA. Through a combination of word frequency, topic clustering, and question classification, we observed that a large number of questions are related to requesting permissions from users, such as camera and location, or complying with the various regulations, particularly the GDPR. Additionally, we explored the emotions around dealing with privacy and found that there is a general neutral-to-positive sentiment around it across all types of development reviewed. In the future, we plan to extend our effort to other developers' forums and improve our classification task. We will also leverage NLI to extract more privacy-related concepts.
2306.06337
Tailoring Exciton Dynamics in TMDC Heterobilayers in the Quantum Plasmonic Regime
Control of excitons in transition metal dichalcogenides (TMDCs) and their heterostructures is fundamentally interesting for tailoring light-matter interactions and exploring their potential applications in high-efficiency optoelectronic and nonlinear photonic devices. While both intra- and interlayer excitons in TMDCs have been heavily studied, their behavior in the quantum tunneling regime, in which the TMDC or its heterostructure is optically excited and concurrently serves as a tunnel junction barrier, remains unexplored. Here, using the degree of freedom of a metallic probe in an atomic force microscope, we investigated both intralayer and interlayer excitons dynamics in TMDC heterobilayers via locally controlled junction current in a finely tuned sub-nanometer tip-sample cavity. Our tip-enhanced photoluminescence measurements reveal a significantly different exciton-quantum plasmon coupling for intralayer and interlayer excitons due to different orientation of the dipoles of the respective e-h pairs. Using a steady-state rate equation fit, we extracted field gradients, radiative and nonradiative relaxation rates for excitons in the quantum tunneling regime with and without junction current. Our results show that tip-induced radiative (nonradiative) relaxation of intralayer (interlayer) excitons becomes dominant in the quantum tunneling regime due to the Purcell effect. These findings have important implications for near-field probing of excitonic materials in the strong-coupling regime.
Mahfujur Rahaman, Gwangwoo Kim, Kyung Yeol Ma, Seunguk Song, Hyeon Suk Shin, Deep Jariwala
2023-06-10T03:19:42Z
http://arxiv.org/abs/2306.06337v1
# Tailoring Exciton Dynamics in TMDC Heterobilayers in the Quantum Plasmonic Regime ###### Abstract Control of excitons in transition metal dichalcogenides (TMDCs) and their heterostructures is fundamentally interesting for tailoring light-matter interactions and exploring their potential applications in high-efficiency optoelectronic and nonlinear photonic devices. While both intra- and interlayer excitons in TMDCs have been heavily studied, their behavior in the quantum tunneling regime, in which the TMDC or its heterostructure is optically excited and concurrently serves as a tunnel junction barrier, remains unexplored. Here, using the degree of freedom of a metallic probe in an atomic force microscope, we investigated both intralayer and interlayer excitons dynamics in TMDC heterobilayers via locally controlled junction current in a finely tuned sub-nanometer tip-sample cavity. Our tip-enhanced photoluminescence measurements reveal a significantly different exciton-quantum plasmon coupling for intralayer and interlayer excitons due to different orientation of the dipoles of the respective _e-h_ pairs. Using a steady-state rate equation fit, we extracted field gradients, radiative and nonradiative relaxation rates for excitons in the quantum tunneling regime with and without junction current. Our results show that tip-induced radiative (nonradiative) relaxation of intralayer (interlayer) excitons becomes dominant in the quantum tunneling regime due to the Purcell effect. These findings have important implications for near-field probing of excitonic materials in the strong-coupling regime. ## Introduction Coulomb bound electron-hole (_e-h_) pairs, commonly known as excitons, govern the optical properties of monolayer transition metal dichalcogenides (TMDCs) due to their large binding energies (on the scale of 0.5 eV) and oscillator strengths[1]. As a result, the fundamental optical properties of these materials are dominated by many body excitonic resonances, even at room temperature (RT). Furthermore, in a homo/hetero-bilayer (HBs) sample made from TMDCs, ultrafast interlayer charge transfer can also facilitate the formation of interlayer excitons (ILXs) with long lifetimes and large exciton binding energies observed at RT in prior work[2]. Therefore, TMDCs have attracted significant attention for both fundamental studies of novel quantum optical phenomena and photonic/optoelectronic applications in recent times[3, 4, 5, 6, 7, 8, 9]. TMDCs possess strong light-matter coupling at excitonic resonances in the visible part of the spectrum, with almost ideal two-dimensional (2D) confinement, making it easier to control the excitonic parameters such as resonance energies, oscillator strength, radiative and nonradiative lifetimes on demand[10, 11, 12, 13, 14]. Therefore, one way of controlling excitons is to manipulate them via plasmonic coupling, using plasmonic resonances in noble metal nanostructures which are also in the visible spectrum. In particular, the use of metallic nanostructures in the proximity of TMDC monolayers can create both weak and strong coupling regimes for excitons, and thus, control the emission energies, decay rates, radiative, and nonradiative lifetimes [15, 16, 17]. In general, excitons in TMDCs in proximity to a plasmonic system can be treated as dipole emitters, whose emission can be expanded into multipoles centered around the plasmonic energy [18]. 
Hence, the strength of coupling between the exciton and plasmon, and the associated manipulation of excitonic parameters, depends on the individual field polarizability. Therefore, the manipulation of intralayer (in-plane polarization) and interlayer (out-of-plane polarization) excitonic parameters in TMDC HBs via a plasmonic cavity can be different due to their different polarization states. ILXs, in particular, show great tunability in a plasmonic cavity as a function of cavity size in the \(z\)-direction (_i.e._, coupling efficiently to the ILX polarization), resulting in the amplification of both exciton decay rate and radiative lifetime [19, 20]. However, as the cavity size is further tuned from nanometer to sub-nm gap (in the quantum plasmonic regime), a strong interaction between the plasmonic field and ILX results in more nonradiative loss. In contrast, in-plane polarized intralayer excitons show the opposite trend as the size of the cavity decreases further into the sub-nm scale, due to the Purcell effect [21]. The ultrafast exciton-plasmon interaction dynamics are generally probed via pump-probe and time-resolved PL measurements in a conventional optical configuration in the form of overall PL lifetimes of the excitonic species [22, 23]. However, it is not feasible to finely control cavity size at the sub-nm scale and simultaneously probe the exciton-plasmon interaction dynamics, let alone discern radiative and nonradiative contributions, using a conventional setup. Recently, a qualitative approach for determining the individual contribution of radiative and nonradiative decays and the Purcell effect on intralayer and interlayer excitons in TMDC HBs has been proposed using a tip-enhanced photoluminescence (TEPL) configuration in a finely tuned sub-nm cavity [21]. In this approach, the PL measured from both intralayer and interlayer excitonic emissions in a sub-nm cavity can be fitted using a rate equation model to determine the contributions of Purcell enhancement/quenching and of the radiative and nonradiative lifetimes. Although the model effectively deconvoluted all the contributing parameters, an important question remained unanswered: how do these parameters evolve in the quantum plasmonic regime (sub-nm cavity) when a junction current flows through the channel? This is particularly relevant since previous works have predicted that tip-induced tunneling through the TMDC monolayers to the metal substrate can reduce the strength of the plasmonic field in the sub-nm cavity and hence decrease the intralayer excitonic emission [24, 25]. Here, we conduct a systematic investigation into the effect of junction current on the dynamics of intralayer and interlayer exciton-plasmon interactions in TMDC HBs within the quantum plasmonic regime, using a finely tuned tip-sample cavity in a TEPL configuration. We utilize MoS\({}_{2}\)/WSe\({}_{2}\) HBs as a test bench on a hBN/Au substrate. Our findings indicate that as the tip-sample distance decreases below 1 nm, in the absence of junction current, the intralayer exciton emission is amplified while the ILX emission decreases drastically, due to the Purcell effect and stronger nonradiative coupling to the plasmon field, respectively. Moreover, once a channel is established for current to flow through the HB/hBN to the Au substrate, a reverse trend is observed. Using a rate equation model, we qualitatively determined all the coupling parameters, including the dynamics of exciton-plasmon interactions.
To the best of our knowledge, our results present the first experimental demonstration of the dynamics of exciton-plasmon interactions in the presence of junction current in the quantum plasmonic regime. ### Results and Discussions Fig. 1a,b show an optical image and atomic force microscope (AFM) topography, respectively, of one of the MoS\({}_{2}\)/WSe\({}_{2}\) HB samples prepared on hBN/Au substrate. Details of the sample preparation can be found in the experimental section and supplementary information (S1). Three far-field PL maps are created: two for intralayer excitons X\({}_{\text{M}}\) (monolayer MoS\({}_{2}\)) and X\({}_{\text{W}}\) (monolayer WSe\({}_{2}\)), and one for interlayer exciton X\({}_{\text{LL}}\) (HB) across the MoS\({}_{2}\)/WSe\({}_{2}\) interface, respectively, and presented in Fig. 1c-e. Three corresponding far-field PL spectra are displayed in Fig. 1f. As shown in the AFM topography and PL maps, areas marked by the red dashed lines only produce strong ILXs, suggesting better interfacial coupling. We also perform complementary surface potential mapping with/without illumination to further validate our hypothesis. Details of the Kelvin probe force microscope (KPFM) measurements can be found in the supplementary information (S2). We observe a strong ILX emission followed by heavily quenched intralayer X\({}_{\text{M}}\) and X\({}_{\text{W}}\) emission on the areas marked by red dashed lines, a hallmark of the ILX formation process. After initial far-field characterization, we perform TEPL measurements on areas of strong interfacial coupling. Fig. 2a shows a schematic of the TEPL measurements. We use an Au tip for the TEPL measurements under 633 nm excitation. The introduction of an Au substrate creates a plasmonic dimer cavity, the polarization of which is perpendicular to the basal plane of the HB (as shown by \(E\) in the scheme). We also tune the tip-sample distance (\(d\)) via AFM piezo actuator to investigate exciton dynamics in the HBs. We use 3 nm hBN grown by chemical vapor deposition (CVD) as the insulating barrier between HBs and the substrate. Fig. 2b displays an AFM topography image taken across the boundary of HB and WSe\({}_{2}\). The white dashed line is drawn as a guide to the eye along the border line. A TEPL hyperspectral map is acquired across the Figure 1: **Far-field optical characterization of HB.** (a) Optical image and (b) AFM topography of one of the MoS\({}_{2}\)/WSe\({}_{2}\) HB sample prepared on 3 nm hBN/Au substrate. (c) – (e) PL maps of intralayer exciton MoS\({}_{2}\) (X\({}_{\text{M}}\)), WSe\({}_{2}\) (X\({}_{\text{W}}\)) and interlayer exciton (ILX) across MoS\({}_{2}\)/WSe\({}_{2}\) interface respectively. Areas where strong interfacial coupling is established ILX have strong emission followed by quenching of intralayer X\({}_{\text{M}}\) and X\({}_{\text{W}}\). (f) Three representative PL spectra of MoS\({}_{2}\), WSe\({}_{2}\), and HB regions displaying characteristic PL spectra of intralayer and interlayer excitons. boundary and superimposed on the corresponding topography area within the AFM image in Fig. 2b. The TEPL map is created for X\({}_{\text{IL}}\) spectral range. Two representative TEPL spectra of the two regions (red and blue circles on the TEPL map) are shown in Fig. 2c. The orange rectangular shade is the spectral area for which the X\({}_{\text{IL}}\) map is created in Fig. 2b. 
As can be seen, TEPL spectra of HB is dominated by ILX emission, with both intralayer X\({}_{\text{M}}\) and X\({}_{\text{W}}\) strongly quenched. Additionally, the TEPL map also exhibits a spatially homogeneous distribution of the X\({}_{\text{IL}}\) intensity in the HB region. In order to investigate sub-nm tip-sample gap dynamics of exciton-plasmon interaction for both intra- and interlayer excitons in HB, we acquire TEPL spectra as a function of tip-sample distance at each point. In addition, we simultaneously record the junction current profile (current flowing from the tip to the substrate through the HB) as a function of tip-sample distances. The current profile is recorded in the short circuit configuration (_i.e._ tip and substrate are electrically connected and the bias voltage, V = 0 V as shown in the inset of Fig. 2a). Therefore, the driving force for the current flow in the sub-nm gap (quantum plasmonic regime) can be a combination of the tunneling of tip hot electrons through HB to the Au substrate and the photovoltage created at the HB interface under 633 nm excitation [26, 27]. Important to note that, we consistently observe junction current at random points on the HB/hBN/Au sample. As mentioned earlier, we use a CVD-grown 3 nm thick hBN film as the insulating barrier between the HB and Au substrate. During the transfer process of the CVD-grown hBN film onto the Au substrate using the PMMA-assisted wet transfer method from the sapphire substrate (see experimental section), it is possible that the quality of the film is compromised, and random channels are opened for the current flow between the tip and substrate. To support our hypothesis, we also perform conductive AFM mapping on hBN/Au areas adjacent to the HB. Results of the conductive AFM mapping of hBN film are presented in the supplementary information (S3). Figure 2: **TEPL study of HB.** (a) Schematic illustration of TEPL measurements. In-plane intralayer X\({}_{\text{M}}\) and X\({}_{\text{W}}\) and out-of-plane interlayer X\({}_{\text{IL}}\) were excited/amplified by the plasmonic field created at the tip apex under 633 nm excitation. Introduction of a Au substrate created a dimer cavity with the polarization direction perpendicular to the basal plane of the HB. Tip-sample distance was tuned via AFM piezo actuator from few nm to sub-nm gap and TEPL signal was collected. Inset: the schematic of the electrical configuration of the tip-sample junction. (b) AFM topography image at the boundary of the HB and WSe\({}_{2}\). Inset: a TEPL map acquired for X\({}_{\text{IL}}\) across the boundary superimposed on the corresponding topography area. (c) Two representative TEPL spectra on the map taken from red and blue circles marked on the TEPL map image. Orange shade on the TEPL spectra is the spectral region for which the TEPL map was created. Inset: zoomed in spectral range covered by the green box highlighting X\({}_{\text{M}}\). Fig. 3a,b show two sets of TEPL evolution as a function of tip-sample distance with junction current off and on respectively. These two data sets are recorded on two different points in the same TEPL map shown in Fig. 2b. The corresponding current vs tip-sample distance graphs are presented in Fig. 3c. Since electrons are flowing from the tip to the substrate (as shown in a schematic in Fig. 3d), and the substrate is grounded, we observe a negative current as a function of tip-sample distance. 
For the tip-sample distance-dependent study, we vary the AFM piezo actuator and record the corresponding force curves, from which the actual tip-sample distances are calculated. Details of the tip-sample distance determination procedure can be found in the supplementary information (S4). The PL evolution without junction current reveals two distinct tip-sample gap regimes: (i) in the nm gap (> 1 nm) regime all exciton intensities are increasing, and (ii) in the sub-nm gap regime the intralayer (interlayer) exciton intensity is increasing (decreasing). Moreover, we can also observe that in the sub-nm gap the X\({}_{\text{M}}\) intensity gradually decreases with gap size. Additionally, the contribution from the dark exciton (X\({}_{\text{D}}\)) of WSe\({}_{2}\) becomes apparent as the gap shrinks. Two representative TEPL spectra, one in the nm gap and the other in the sub-nm gap regime, are plotted in the supplementary information (S5) for the PL evolution map shown in Fig. 3a. Observation of dark excitons in WSe\({}_{2}\) in the TEPL configuration is a well-known phenomenon, which originates from the radiative exchange between the exciton dipole and the tip plasmon[28, 29]. However, the origin of the decreasing trend of X\({}_{\text{M}}\) may lie in the exciton population and interfacial charge transfer processes as the tip-sample gap shrinks. A schematic illustration of the exciton population and relaxation process in the HB in the tip-sample gap is shown in Fig. 3d.

Figure 3: **Exciton tuning in the quantum tunneling regime.** Spectral evolution of the TEPL signal as a function of tip-sample distance (a) when no current flows through the HB and (b) when current flows through the HB. (c) Junction current profile as a function of tip-sample distance for the cases of (a) and (b). The electrical configuration of the tip-sample junction is shown in Fig. 2a. Current was measured simultaneously in the short circuit configuration (V = 0 V). (d) Schematic of the HB band alignment showing trion formation in MoS2 and the direction of current flow when the tip is in the sub-nm gap. (e) Comparison of MoS2 TEPL spectra at two different tip-sample distances (white dashed lines in (a)) for the case of no junction current. In addition to the X\({}_{\text{M}}\), we could also observe the trion, X\({}_{\text{T}}\), in MoS\({}_{2}\). (f) X\({}_{\text{IL}}\) evolution as a function of tip-sample distance for the two cases (current off and on).

Excitons are populated in both monolayers by gap plasmon excitation. Ultrafast interfacial charge transfer allows electrons in WSe\({}_{2}\) to cross the interface and jump to the conduction band of MoS\({}_{2}\). Since hBN acts as the barrier for electrons to move to the Au substrate, the overall electron concentration may increase momentarily in MoS\({}_{2}\). This results in the radiative relaxation of ILX across the interface and the formation of trions in MoS\({}_{2}\). Hence, we observe a gradual decrease in X\({}_{\text{M}}\) intensity as the gap shrinks. Fig. 3e displays two TEPL spectra in the MoS\({}_{2}\) spectral regime taken along the white dashed lines in Fig. 3a. As can be seen, the PL spectrum at 0.23 nm tip-sample distance clearly shows an overall broad spectrum with a trion peak 35 meV [1] below the main excitonic peak in MoS\({}_{2}\), which supports our hypothesis. An interesting phenomenon is observed when the junction current flows (Fig. 3b) between the tip and the sample, especially in the sub-nm gap (quantum plasmonic regime).
Both the Purcell enhancement of the intralayer excitons and the decreasing trend of the ILX slow down as the current starts flowing in the shrinking gap. The evolution of the ILX intensity as a function of gap size for both current off and on is shown in Fig. 3f for comparison. PL enhancement in the tip-sample cavity (tip-sample gap plus the HB/hBN thickness) involves a competition between the Purcell effect and the tip-induced nonradiative quenching[18]. Both the Purcell enhancement and the tip-induced nonradiative damping scale as a power law of the cavity size. In particular, nonradiative relaxation becomes significant in the sub-nm tip-sample gap via dipole coupling to the tip-sample cavity plasmon due to the ultrafast ohmic Drude loss[30]. Therefore, in the sub-nm gap, we observed a sharp rise of the X\({}_{\text{W}}\) emission due to the Purcell effect, and ILX quenching, since the tip-induced nonradiative damping of intralayer excitons becomes faster than the interlayer charge transfer. However, as soon as the current starts flowing between the tip and the sample, the strength of the cavity plasmon weakens. This results in reduced Purcell enhancement and slower nonradiative damping of intralayer excitons, which boosts the ILX emission in the sub-nm gap. To the best of our knowledge, our results are the first experimental demonstration of exciton-plasmon coupling in the presence of junction current recorded in a DC-biased near-field spectroscopy experiment. To understand the tip-sample gap-induced contribution of radiative and nonradiative damping as well as the near-field enhancement, we fit the PL evolutions using a steady-state rate equation model described in previous work[21]. Details of our model and fitting procedure are discussed in the supplementary information (S6). The evolution of each excitonic population is the product of various excitation and relaxation rates inside the cavity. The cavity-induced field enhancement (excitation) can be scaled as \(F\propto(R/z)^{m}\), with \(R\) being the tip radius, \(z\) being the distance between the tip and the Au substrate, and \(m\) being a geometrical factor. In contrast, the ILX population depends on the interlayer charge transfer following the intralayer exciton population. We divide the model into two regions, one for the nm gap and the other for the sub-nm gap, with the only adjustable parameter being the scaling factor. The total decay rate of each excitonic species is the sum of three terms: (i) cavity-controlled (Purcell effect) radiative decay scaled as \(\Gamma^{rad}\propto(z+d)^{-n}+\Gamma_{0}^{rad}\), with \(d\) being the minimum tip-sample distance, \(n\) the scaling factor, and \(\Gamma_{0}^{rad}\) the free-space radiative decay; (ii) cavity-induced nonradiative recombination described by \(\Gamma^{nrad}\propto(R/(z+d))^{l}\), with \(l\) being the scaling factor; and (iii) the first-order intrinsic nonradiative relaxation rate \(\Gamma_{0}^{nrad}\). The value of \(\Gamma_{0}=2h/\tau_{0}\), with \(\tau_{0}^{rad}\) and \(\tau_{0}^{nrad}\) assumed to be 0.7 ns and 1.5 ps, respectively (taken from refs.[21, 31]). Using these assumptions in the steady-state limit of the exciton population, we fit the PL evolution of X\({}_{\text{W}}\) for both sets of results shown in Fig. 3a,b. We are not able to extract the X\({}_{\text{M}}\) intensity profile reliably due to its very low quantum yield. Therefore, we do not fit the tip-substrate cavity-dependent X\({}_{\text{M}}\) evolution in the present study.
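For illustration, the steady-state model described above reduces to the following numerical sketch. All prefactors, exponents and geometric values in the code are placeholder assumptions rather than the fitted parameters reported here; only the functional forms of the scaling relations follow the text.

```python
import numpy as np

# Distances in nm, rates in ps^-1; the intrinsic rates follow the assumed
# free-space lifetimes (tau0_rad = 0.7 ns, tau0_nrad = 1.5 ps).
GAMMA0_RAD = 1.0 / 700.0   # ps^-1
GAMMA0_NRAD = 1.0 / 1.5    # ps^-1

def field_enhancement(z, R=10.0, m=5.0, F0=1.0):
    # Cavity excitation enhancement F ~ F0 * (R/z)^m (R = tip radius).
    return F0 * (R / z) ** m

def gamma_rad(z, d, A=0.05, n=4.0):
    # Purcell-modified radiative rate ~ A*(z+d)^-n plus the intrinsic rate.
    return A * (z + d) ** (-n) + GAMMA0_RAD

def gamma_nrad(z, d, R=10.0, B=0.5, l=3.0):
    # Tip-induced nonradiative rate ~ B*(R/(z+d))^l plus the intrinsic rate.
    return B * (R / (z + d)) ** l + GAMMA0_NRAD

def tepl_intensity(z, d=0.2):
    # Steady state: population n = F / (gamma_rad + gamma_nrad) (generation
    # absorbed into F); the detected PL is the radiative fraction n * gamma_rad.
    F = field_enhancement(z)
    g_r, g_nr = gamma_rad(z, d), gamma_nrad(z, d)
    return F * g_r / (g_r + g_nr)

gaps = np.linspace(0.3, 5.0, 200)     # tip-sample distances from 0.3 nm to 5 nm
pl_curve = tepl_intensity(gaps)       # qualitative shape of the X_W PL evolution
```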
Moreover, since the ILX population requires contributions from both \(\mathrm{X_{M}}\) and \(\mathrm{X_{W}}\), we need fit parameters from both intralayer excitonic species for a reliable fit. Hence, we also avoid any qualitative analytical discussion of the ILX parameters. Fig. 4a,b present the fitted \(\mathrm{X_{W}}\) evolution as a function of tip-sample distance for the PL evolution graphs shown in Fig. 3a,b, respectively. The PL evolution reveals two distinct distance-dependent regimes, which our model fits well. Since the model requires analytical expressions of both the radiative and nonradiative relaxation rates, it is possible to extract the radiative and nonradiative lifetimes of the fitted excitons in the varying tip-sample cavity. A qualitative discussion of the radiative and nonradiative relaxation of \(\mathrm{X_{W}}\) in the tip-sample cavity is presented in the supplementary information (S6). Here, we discuss the evolution of the Purcell factor in the tip-substrate cavity. Fig. 4c shows the evolution of the Purcell factor in the tip-substrate cavity extracted from the fits presented in Fig. 4a,b. Our model provided a similar scaling exponent to the model described in ref. [21] for the Purcell enhancement in the absence of junction current (see Table I in the supplementary information). However, a more dramatic change can be seen in the case of current on. The cavity-dependent field enhancement initially increases with a scaling exponent of 5.6. However, as soon as the current starts flowing, the field strength is suppressed by an exponential factor of 0.5. The maximum Purcell factor, extracted to be F \(\approx\) 6 x 10\({}^{3}\) for the case of current off, is consistent with previous TEPL measurements [32, 33, 34]. Additionally, the exponent factor m \(\approx\) 5 indicates that our near-field geometry is more like a point dipole on a plane, for which the Purcell factor is expected to grow as \(1/z^{6}\)[35]. This is most likely due to the fact that the 3 nm hBN film on top of the Au substrate results in reduced coupling between the tip and the Au substrate.

Figure 4: **Rate equation fit to the PL evolution.** PL evolution of the intralayer \(\mathrm{X_{W}}\) together with the fitted curve as a function of the tip-sample cavity for (a) no current in the junction and (b) current flowing in the junction. (c) Calculated Purcell factor in the tip-sample cavity with/without junction current.

## Conclusion

In summary, this work reports on tailoring exciton dynamics in the near field from the classical plasmonic regime (few nm) to the quantum plasmonic regime (sub-nm) with/without junction current in TMDC HBs using an Au tip + Au substrate-induced plasmonic cavity. We show that in the absence of a junction current, intralayer and interlayer excitons show opposite trends as a function of gap size in the sub-nm cavity. We explain this behavior by two competing phenomena. While the cavity field amplifies intralayer excitons dramatically in the quantum plasmonic regime, it also enhances nonradiative damping via coupling between the exciton dipole and the tip plasmon, from which the interlayer exciton suffers the most. In contrast, when current flows in the junction, it quenches the Purcell factor of the sub-nm cavity dramatically, and at the same time boosts the ILX by reducing the nonradiative relaxation of excitons.
Our work provides a solid understanding of exciton dynamics in the quantum plasmonic regime with and without junction current, and demonstrates a clear pathway for boosting exciton densities to enable new optoelectronic applications and to induce room-temperature exciton condensates by tuning the plasmonic cavity in the quantum tunneling regime. ## Experimental Section MoS\({}_{2}\)/WSe\({}_{2}\) HBs were prepared using the PDMS-assisted deterministic dry transfer method. Since interface contamination is one of the major challenges for ILX formation, we used PDMS-to-PDMS pick-up for the creation of the HBs. Details of the HB creation are schematically presented in the supplementary information, section S1. HBs prepared in this way show strong ILX emission, as shown in Fig. 1. We used a CVD-grown 3 nm thick hBN film on top of a 100 nm thick Au film as the substrate. The 3 nm thick hBN film was prepared by a low-pressure CVD system on a c-plane sapphire substrate using ammonia borane as a precursor. The details of the CVD growth of hBN and the PMMA-assisted wet transfer of hBN onto arbitrary substrates can be found in the literature[36]. After the preparation of the individual HBs, the bilayer stacks were transferred onto the hBN/Au substrate. Far-field optical measurements were conducted using a Horiba LabRam HR Evolution confocal microscope coupled with an electron-multiplying charge-coupled detector, with the signal dispersed by a 100 l/gr grating. A 633 nm solid-state laser was used for excitation, with a laser power of 17 \(\upmu\)W focused onto the sample surface via a 100x, 0.9 NA objective. TEPL measurements were performed using a Horiba NanoRaman platform in the side illumination/collection configuration, which consists of an atomic force microscope from AIST-NT and a LabRam Evolution spectrometer. The Au tips used in the experiments were purchased from Horiba and are suited for near-field measurements under 633 nm excitation. The laser power was kept at 17 \(\upmu\)W and focused onto the tip via a 100x, 0.7 NA long-working-distance objective. The exposure time was 0.2 s. TEPL hyperspectral maps were acquired in the spectop mode (a contact/noncontact hybrid mode developed by Horiba), in which, for half of the time (t1), the tip is in contact with the sample to acquire the near-field signal, and for the other half (t2) the tip operates in the intermittent-contact mode to acquire the far-field signal and the AFM topography at each pixel, with the total time defined by t = t1 + t2 and t1 = t2 = exposure time. For tip-sample distance-dependent TEPL measurements, a varying DC voltage was applied gradually to the piezo-actuator connected to the sample stage for fine-tuning of the tip-sample cavity from a few nm to sub-nm. A total of 50 data points were collected over a large piezo-actuator displacement (150 nm - 200 nm). At each point, a TEPL spectrum was acquired using an exposure time of 0.2 s. In addition to the TEPL spectra, force-distance curves and the junction current were also monitored over the mentioned piezo-actuator tuning range for each experimental data set. The actual tip-sample distance, d, was calculated from the acquired force-distance curve. Details of the calculation can be found in the supplementary information, section S2. KPFM measurements were performed using the AIST-NT AFM and commercially available Cr/Au probes, with and without illumination under 633 nm excitation. ### Competing Interests The authors declare no competing financial or non-financial interests.
### Data Availability The data that support the findings of this study are available on request from the corresponding author. ### Authors Contributions M.R. and D.J. conceived the idea and designed the research. M.R. implemented the project by performing the experiments and simulations with the help of G.K. and S.S. G.K. assisted in sample preparation. K.Y.M. and H.S.S. prepared the hBN film. M.R. and D.J. wrote the manuscript; all authors revised and commented on the manuscript and contributed to its writing and to the interpretation of the data. Corresponding author: Deep Jariwala: [email protected] ### Acknowledgement D.J. acknowledges primary support for this work by the Air Force Office of Scientific Research (AFOSR) FA2386-20-1-4074. M.R. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for a Walter Benjamin Fellowship (award no. RA 3646/1-1). G. Kim acknowledges primary support for this work by the Asian Office of Aerospace Research and Development (AOARD) of the Air Force Office of Scientific Research (AFOSR) FA2386-20-1-4074. The sample fabrication, assembly and characterization were carried out at the Singh Center for Nanotechnology at the University of Pennsylvania, which is supported by the National Science Foundation (NSF) National Nanotechnology Coordinated Infrastructure Program grant NNCI-1542153. K.Y.M. and H.S.S. acknowledge support from the National Research Foundation, Republic of Korea, via the research fund (NRF-2021R1A3B1077184). S.S. acknowledges support from the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number 2021R1A6A3A14038492).
2306.01458
Extremely Large-scale Array Systems: Near-Field Codebook Design and Performance Analysis
Extremely Large-scale Array (ELAA) promises to deliver ultra-high data rates with increased antenna elements. However, increasing antenna elements leads to a wider realm of near-field, which challenges the traditional design of codebooks. In this paper, we propose novel near-field codebook schemes based on the fitting formula of codewords' quantization performance. First, we analyze the quantization performance properties of uniform linear array (ULA) and uniform planar array (UPA) codewords. Our findings reveal an intriguing property: the correlation formula for ULA codewords can be represented by the elliptic formula, while the correlation formula for UPA codewords can be approximated using the ellipsoid formula. Building on this insight, we propose a ULA uniform codebook that maximizes the minimum correlation based on the derived formula. Moreover, we introduce a ULA dislocation codebook to further reduce quantization overhead. Continuing our exploration, we propose UPA uniform and dislocation codebook schemes. Our investigation demonstrates that oversampling in the angular domain offers distinct advantages, achieving heightened accuracy while minimizing overhead in quantifying near-field channels. Numerical results demonstrate the appealing advantages of the proposed codebook over existing methods in decreasing quantization overhead and increasing quantization accuracy.
Feng Zheng, Hongkang Yu, Chenchen Wang, Luyang Sun, Qingqing Wu, Yijian Chen
2023-06-02T11:36:02Z
http://arxiv.org/abs/2306.01458v2
# Extremely Large-scale Array Systems: Near-Field Codebook Design and Performance Analysis ###### Abstract Extremely Large-scale Array (ELAA) promises to deliver ultra-high data rates with increased antenna elements. However, increasing antenna elements leads to a wider realm of near-field, which challenges the traditional design of codebooks. In this paper, we propose novel near-field codebook schemes based on the fitting formula of codewords' quantization performance. First, we analyze the quantization performance properties of uniform linear array (ULA) and uniform planar array (UPA) codewords. Our findings reveal an intriguing property: the correlation formula for ULA codewords can be represented by the elliptic formula, while the correlation formula for UPA codewords can be approximated using the ellipsoid formula. Building on this insight, we propose a ULA uniform codebook that maximizes the minimum correlation based on the derived formula. Moreover, we introduce a ULA dislocation codebook to further reduce quantization overhead. Continuing our exploration, we propose UPA uniform and dislocation codebook schemes. Our investigation demonstrates that oversampling in the angular domain offers distinct advantages, achieving heightened accuracy while minimizing overhead in quantifying near-field channels. Numerical results demonstrate the appealing advantages of the proposed codebook over existing methods in decreasing quantization overhead and increasing quantization accuracy. ELAA, codebook, fitting correlation formula, near-field ## I Introduction Massive multiple-input multiple-output (MIMO) technology is vital to fifth-generation (5G) mobile communication networks. Massive MIMO involves the utilization of multiple antennas to concentrate signal power within a limited area, contributing to enhanced energy efficiency and spectral efficiency [1, 2]. However, with the explosive demand increase on data rates in the forthcoming sixth-generation (6G) mobile communication networks, massive MIMO cannot meet the requirement because achieving a Tbps data rate with limited antennas is difficult. To address this issue, ELAA technology, comprising hundreds or thousands of antennas, is considered a crucial enabling technology for next-generation communication [3]. ELAA enables efficient multiplexing of multiple user equipment (UE) on the same time-frequency resource, thereby improving spectral efficiency and data rates. Additionally, the deployment of ELAA's high beamforming gain facilitates enhanced spatial resolution and compensates for significant path loss experienced in the terahertz frequency bands [4]. The high beamforming gain of the ELAA system heavily relies on accurate channel state information (CSI) at the transmitter [5]. In the time division duplex (TDD) system, the uplink and downlink exhibit reciprocity, allowing downlink CSI to be obtained through uplink channel estimation. However, in the frequency division duplex (FDD) system, the uplink and downlink operate on different frequencies, weakening the channel reciprocity. As a result, deducing downlink CSI from uplink CSI becomes challenging [6]. In other words, CSI can only be acquired through dedicated feedback provided by the UE over signaling channels with limited capacity [7]. Currently, there are two typical categories of CSI acquisition methods, the explicit CSI acquisition and the implicit CSI acquisition [8]. Explicit feedback schemes directly report an element-wise quantized channel vector. 
They allow for more flexible transmission or reception methods, which can achieve a higher scheduling gain. Compared to explicit feedback, implicit feedback requires less overhead. Therefore, implicit feedback can enable more accurate link adaptation [9]. A mainstream technique in implicit feedback is the codebook-based approach, which feeds back an index of a quantized CSI in a predesign codebook to the transmitter. For codebook-based feedback, the quantization accuracy of CSI depends on the codebook structure and the allowed number of feedback bits [10]. The existence of massive antennas in ELAA leads to an unexpected increase in pilot overhead. Therefore, it is crucial to design a codebook to achieve accurate quantization of CSI with limited feedback overhead of the ELAA system. ### _Related Works_ Extensive research has focused on the design of far-field codebooks. The far-field electromagnetic wave can be considered a plane wave, so the phase changes linearly with the antenna index. The 5G new radio (NR) standard adopted a discrete Fourier transform (DFT) codebook for the ULA system, and the two-dimensional DFT (2D-DFT) codebook was introduced for the UPA system [11]. Moreover, to enable more accurate CSI acquisition, the NR standard supported codebook oversampling and the linear combination of multiple codewords for feedback [12]. IEEE 802.16 m standard adopted an adaptive codebook structure, such as the skewed codebook, and a differential codebook structure, such as a Polar-Cap codebook [13]. Besides, in the case of low pilot overhead, the hierarchical codebook [14], angle-of-departure (AoD) adaptive subspace codebook [15], and compressed sensing (CS) [16] methods could also be utilized to quantify CSI accurately. Among them, the codebook feedback schemes based on CS utilize the sparsity of the channel in the angle domain to achieve the goal of low feedback overhead. Although the increased antenna numbers of ELAA offer advantages in terms of spectral efficiency and data rates, side effects on wireless channel characteristics brought by it also demand attention [17]. The electromagnetic field is generally divided into far-field and near-field, and their boundaries can be determined by the Rayleigh distance \(2D^{2}/\lambda\), where \(D\) represents the array size and \(\lambda\) represents the wavelength [18]. Due to the extensive array size and the utilization of high-frequency carriers in the ELAA system, the Rayleigh distance extends to tens or even hundreds of meters. Consequently, User Equipment (UE) is more likely to be positioned within the near-field [19]. Unlike the plane wave model of the far-field model, the near-field is usually modeled as a spherical wave [20, 21]. The distance between UE and BS cannot be ignored in the spherical wave model. The far-field codebook has a significant loss when it is used for near-field beamforming. Therefore, the additional distance factor needs to be quantized in the near-field channel, which poses a significant challenge to the design of the near-field codebook. Only a few studies have focused on codebook design for large near-field ELAA systems. [8] designed a codebook for the near-field UPA channel, which uniformly samples in the cartesian coordinates. However, the range of this codebook applicable to near-field channels is limited, so significant quantization errors still exist. To address this issue, [22] proposed the sparse polar codebook, sampling in the sparse domain of the polar domain. 
Furthermore, the codebook was uniformly sampled in the angular domain and non-uniformly in the distance. Besides, [23] derived a near-field codebook scheme designing the optimal quantization points based on the Lloyd-Max algorithm. In [24], a hierarchical codebook was designed by projecting the near-field channel into the angle and slope domains, considering the incomplete coverage and overlap of spatial chirp beams, further designing a hierarchical codebook via manifold optimization and alternative minimization. Unfair sampling methods can render codewords redundant and reduce quantization accuracy. It has been known that the quantization performance of the codebook provides a good indication for the design of codeword sampling. The minimum quantization correlation achieved within the quantization area of the codeword is the pivotal factor determining codeword performance. Remarkably, the quantization performance of the codebook showcases distinct behaviors in the near-field and far-field scenarios. While a sine function characterizes the quantization performance of the codebook for channels in the far-field, research remains absent in evaluating the quantization performance of codewords for near-field channels. Hence, a thorough analysis of near-field codebook quantization performance becomes urgent, given its significance in the design of codebooks. Furthermore, many antennas and the non-negligible distances within the near-field significantly amplify the demand for feedback bits. Consequently, designing a low-quantization bit codebook capable of accurately quantifying the near-field channel is crucial. ### _Contributions_ To fill in this gap, in this paper, we analyze the quantization performance in theory and propose codebook design schemes for the ULA channel and the UPA channel in the ELAA system. Our main contributions are summarized as follows: * Firstly, we provide a theoretical analysis of the quantization performance of codeword to channels in the near-field ULA and UPA systems. The quantization performance of the ULA codeword exhibits symmetry and stationary. However, the UPA codeword is non-stationary and asymmetric. Further, we derive a fitting polynomial form of the correlation formula between the codeword and the channel vector. The correlation formula for the ULA codeword can be expressed as an elliptic function, and the fitting correlation formula for each codeword remains constant. In contrast, the correlation formula for the UPA codeword can be represented as an ellipsoid formula, with varying fitting correlation formulas for different codewords. * Secondly, we propose near-field codebook schemes building upon the fitting correlation formula. We present a ULA codebook scheme with uniform sampling in the transform domain that maximizes minimum quantization correlation. Additionally, we introduce an improved dislocation sampling codebook scheme, effectively reducing the overhead of quantized codewords. Recognizing the non-stationarity of UPA channels, we establish the reference ellipsoid as the minimum achievable ellipsoidal shape encompassing the entire quantization area of codewords. Similar to ULA codebook schemes, we develop uniform and dislocation UPA codebook schemes based on the reference ellipsoid. Our analytical results highlight the advantages of oversampling in the angle domain for designing near-field codebooks with high minimum quantization correlation. * Lastly, we conduct simulations to compare our proposed codebook with other codebook schemes. 
The simulation results show that our proposed codebook consistently achieves superior performance compared to other codebook schemes under the same quantization overhead. ### _Organization and Notation_ The remainder of the paper is organized as follows. Section II presents the spherical wave models and the CSI quantization feedback model. Section III analyzes the characteristic of correlation, and describes the fitting polynomial formula in both ULA and UPA model. In Section IV, the near-field uniform codebook and dislocation codebook of ULA channel are proposed. Section V presents UPA near-field uniform codebook and dislocation codebook. Simulation results are provided in Section VI, and conclusions are drawn in Section VII. _Notations_: Vectors are denoted by lowercase bold letters, while matrices are denoted by uppercase bold letters. denotes the Kronecker product; \((\cdot)^{*}\) and \((\cdot)^{T}\) denotes the conjugate and transpose operations, respectively. \(\left|\cdot\right|\) denotes the absolute operator. \(\mathrm{diag}(\mathbf{D})\) denotes diagonal matrix from \(\mathbf{D}\). \(\left\|\mathbf{v}\right\|\) denotes the Frobenius norm of a vector \(\mathbf{v}\). \(\left|x\right|\) is the rounding symbol. ## II System Model In this section, we first introduce the ULA and UPA spherical wave models of the ELAA system, respectively. Next, we present a CSI quantization feedback model and formulate the design of the codebook as an optimization problem. ### _ULA Near-field Channel Model_ As shown in Fig. 1, we consider a downlink narrow-band ELAA system, where the BS is equipped with a ULA to serve a single-antenna UE distributed in the near-field region. The \(N\)-antenna array is placed along the \(y\)-axis. The antenna spacing is \(d=\frac{\lambda}{2}\), where \(\lambda\) is the electromagnetic wavelength. The coordinate of the \(n\)-th antenna is given by \(\mathbf{t}_{n}=(0,y_{n})\), where \(y_{n}=\left(n-\frac{N+1}{2}\right)d\) with \(n=1,2,\ldots,N\). Meanwhile, the UE is located at \(\mathbf{u}=(r{\rm cos}\theta,r{\rm sin}\theta)\), where \(r\) and \(\theta\) represent the distance and angle between UE and array center, respectively. The line-of-sight (LoS) channel is considered because this paper only focuses on the quantization feedback problem of the near-field codebook. According to the spherical wave model [22], both the angle and distance of UE determine the signal phase, and the near-field channel vector \(\mathbf{h}\) can be expressed as \[\mathbf{h}=\sqrt{N}g\mathbf{b}\left(r,\theta\right), \tag{1}\] where \(k=\frac{2\pi}{\lambda}\) denotes the wavenumber at carrier frequency \(f\). \(g=\frac{\sqrt{\pi}e^{-j\lambda_{r}}}{r}\) represents the complex channel gain, where \(\eta\) represents the reference channel gain at a distance of \(1\) m. \(\mathbf{b}\left(r,\theta\right)\) denotes the near-field beam focusing vector, which is given by \[\mathbf{b}\left(r,\theta\right)=\frac{1}{\sqrt{N}}\left[e^{-jk(r_{1}-r)},e^{- jk(r_{2}-r)},\ldots,e^{-jk(r_{N}-r)}\right]^{T}, \tag{2}\] where \(r_{n}=\left\|\mathbf{t}_{\mathbf{n}}-\mathbf{u}\right\|\) represents the distance between the \(n\)-th antenna at the BS and the UE. 
Furthermore, according to the second order Taylor series expansion \(\sqrt{1+x}=1+\frac{x}{2}-\frac{x^{2}}{8}+\mathcal{O}\left(x^{3}\right)\), \(r_{n}\) can be approximated as \[\begin{split} r_{n}&=\sqrt{\left(r\sin\theta-y_{n} \right)^{2}+\left(r\cos\theta\right)^{2}}\\ &\approx r-\sin\theta y_{n}+\ \frac{{\rm cos}^{2}\theta}{2r}y_{n}^{2}.\end{split} \tag{3}\] **Remark 1**: _When the \(r\) is sufficiently large, the \(\frac{{\rm cos}^{2}\,\theta}{2r}\) term can be omitted, and \(\mathbf{b}\left(r,\theta\right)\) is simplified as_ \[\mathbf{a}\left(\theta\right)=\frac{1}{\sqrt{N}}\Big{[}1,e^{j\pi\sin\theta}, \ldots,e^{j\pi(N-1)\sin\theta}\Big{]}^{T}, \tag{4}\] _which is equivalent to the conventional far-field beam steering vector for the ULA. In this case, the DFT codebook is adopted to quantify the far-field channel vector. Therefore, to be more precise, the concept of "near-field" in this paper does not exclude far-field as well._ ### _UPA Near-field Channel Model_ As shown in Fig. 2, the BS employs a UPA, which is located on the \({\rm xOy}\) plane and the center of the array is located at the coordinate origin. \(N\times N\) uniformly spaced antenna elements are placed in both horizontal and vertical directions, with a spacing of \(d=\frac{\lambda}{2}\). The Cartesian coordinate of the \((m,n)\)-th antenna element of the UPA can be expressed as \(\mathbf{t}_{(m,n)}=\left(x_{m},y_{n},0\right)\) with \(x_{m}=\left(m-\frac{N+1}{2}\right)d\), \(y_{n}=\left(n-\frac{N+1}{2}\right)d\), \(m=1,...,N\), \(n=1,\ldots,N\). Meanwhile, we assume the coordination of UE is \(\mathbf{u}=\left(r\sin\theta\cos\phi,r\sin\theta\sin\phi,r\cos\theta\right)\), where \(r\), \(\theta\) and \(\phi\) represent the distance, elevation angle and azimuth angle of UE relative to the UPA center, respectively. Therefore, the beam focusing vector for UPA can be obtained based on the spherical wave propagation model as \[\mathbf{b}\left(r,\theta,\phi\right)=\frac{1}{N}\Big{[}e^{-jk\left(r_{(1,1)}-r \right)},\ldots,e^{-jk\left(r_{(N,N)}-r\right)}\Big{]}^{T}, \tag{5}\] where \(r_{(m,n)}=\left\|\mathbf{t}_{(m,n)}-\mathbf{u}\right\|\) represents the distance between the \((m,n)\)-th antenna at the BS and the UE, which can be approximated as \[\begin{split} r_{(m,n)}\approx& r-\sin\theta\cos \phi x_{m}-\sin\theta\sin\phi y_{n}\\ &+\frac{1-\sin^{2}\!\theta{\rm cos}^{2}\phi}{2r}x_{m}^{2}+\frac{1- \sin^{2}\!\theta{\rm sin}^{2}\phi}{2r}y_{n}^{2}\\ &-\frac{\sin^{2}\!\theta\cos\phi\sin\phi}{r}x_{m}y_{n}.\end{split} \tag{6}\] **Remark 2**: _When the \(r\) is sufficiently large, the last 3 terms in (6) can be omitted, and \(\mathbf{b}\left(r,\theta,\phi\right)\) is simplified as_ \[\begin{split}\mathbf{a}\left(\theta,\phi\right)=&\frac {1}{N}\left[1,\ldots,e^{j\pi\left(m\sin\theta\cos\phi+n\sin\theta\sin\phi \right)},\ldots,\right.\\ &\left.e^{j\pi\left(\left(N-1\right)\sin\theta\cos\phi+\left(N-1 \right)\sin\theta\sin\phi\right)}\right]^{T},\end{split} \tag{7}\] _which is equivalent to the conventional far-field beam steering vector for the UPA, and the 2D-DFT codebook is adopted for the CSI feedback. Since the phase of (7) can be decoupled into two parts in terms of \(x\) and \(y\), the 2D-DFT codebook can be expressed in the form of the Kronecker product of the DFT vectors, that is_ \[\mathbf{a}=\mathbf{a}_{x}\otimes\mathbf{a}_{y}, \tag{8}\] Fig. 1: Near-field channel model for ULA communication system. 
where \(\mathbf{a}_{x}=\frac{1}{N}\left[1,e^{j\pi\sin\theta\cos\phi},\ldots,e^{j\pi(N-1) \sin\theta\cos\phi}\right]\) and \(\mathbf{a}_{y}=\frac{1}{N}\left[1,e^{j\pi\sin\theta\sin\phi},\ldots,e^{j\pi(N-1 )\sin\theta\sin\phi}\right]\). However, the cross-term \(\frac{\sin^{2}\theta\cos\sin\phi}{r}x_{m}y_{n}\) in (6) prevents it from being decoupled as \(\mathbf{a}\left(\theta,\phi\right)\). ### _CSI Quantization Feedback Model_ For FDD communication systems, the BS can obtain the CSI through the UE's feedback. Specifically, the pilot signal \(\mathbf{X}=\text{diag}\big{(}x_{1},\ldots,x_{\widetilde{N}}\big{)}\) is transmitted at first, where \(x_{n}\) denotes the \(n\)-th pilot symbol, and \(\widetilde{N}\) represents the size of either ULA or UPA. At the UE side, the received signal is given by \[\mathbf{y}=\mathbf{X}\mathbf{h}+\mathbf{n}, \tag{9}\] where \(\mathbf{n}\) is the additive Gaussian white noise (AWGN) with variance \(\sigma^{2}\). Based on this, the estimation of channel vector, denoted as \(\tilde{\mathbf{h}}\), can be obtained by methods such as least squares (LS) [25]. Since this paper mainly focuses on the codebook design for the near-field communication, the perfect CSI estimation is assumed, i.e., \(\tilde{\mathbf{h}}=\mathbf{h}\). To inform the BS of CSI with limited feedback, the channel vector is quantized based on a predefined codebook \(\mathbf{W}=[\mathbf{w}_{1},\ldots,\mathbf{w}_{S}]\), which contains \(\mathrm{S}\) codewords and satisfies \(\left\|\mathbf{w}_{s}\right\|=1\). The UE selects the optimal codeword from the codebook and feeds back its index \(s^{\star}=\arg\max\limits_{s^{\star}}\left|\mathbf{h}^{\mathrm{T}}\mathbf{w} _{s}\right|^{2}\) to the BS. Subsequently, the BS can determine the transmission scheme based on the CSI feedback from the UE. For instance, the codeword \(\mathbf{w}_{s^{\star}}\) can be utilized as the beamforming weight. In the above quantization feedback model, the codebook design influences the accuracy of channel quantization, which in turn affects the performance of the communication system. Obviously, all the channel vectors in the area of interest have different correlations with the codewords. This paper adopts the max-min correlation criterion, and assuming \(\mathcal{W}=\{\mathbf{w}_{1},\ldots,\mathbf{w}_{S}\}\), the problem of near-field codebook design can be formulated as follows \[\begin{split}\max\limits_{\mathbf{W}}&\min \limits_{\mathbf{h}\in\mathcal{H}}\;\max\limits_{s}\left|\mathbf{h}^{\mathrm{ T}}\mathbf{w}_{s}\right|\\ &\mathrm{s.t.}\left|\mathcal{W}\right|=S.\end{split} \tag{10}\] or \[\begin{split}\min\limits_{\mathcal{W}}&\left|\mathcal{W }\right|\\ &\mathrm{s.t.}\;\min\limits_{\mathbf{h}\in\mathcal{H}}\;\max \limits_{s}\left|\mathbf{h}^{\mathrm{T}}\mathbf{w}_{s}\right|>c.\end{split} \tag{11}\] \(\mathcal{H}\) represents the set of LoS channel vectors within the area of interest, and \(c\in(0,1)\) represents the correlation between codewords and channel vectors. For the codebook design issue, existing solutions include Grassmannian codebooks [26], random vector quantization (RVQ) codebooks [27], and generalized Lloyd codebooks [28]. Nevertheless, these methods cannot fully utilize the characteristics of near-field channels while guaranteeing compatibility with the existing far-field codebooks. Therefore, we consider select codewords from beam focusing vectors \(\mathbf{b}(r,\theta)\), and the existing DFT codebook for the far-field is based on this idea. 
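To make the channel model and the feedback step concrete, the following Python sketch builds the near-field beam focusing vector of Eq. (2) and performs the codeword selection \(s^{\star}=\arg\max_{s}|\mathbf{h}^{T}\mathbf{w}_{s}|\) against a toy codebook. The carrier frequency and array size follow the simulation settings used later in the paper, while the coarse \((r,\theta)\) grid and the UE position are illustrative choices, not the sampling rules derived in Sections IV and V.

```python
import numpy as np

# Sketch of the ULA near-field beam focusing vector of Eq. (2) and the feedback
# step s* = argmax_s |h^T w_s|.  The (r, theta) codebook grid and UE position
# below are illustrative placeholders.

freq = 100e9                                     # carrier frequency (Hz)
lam = 3e8 / freq
k = 2 * np.pi / lam
N = 512                                          # ULA antennas
d = lam / 2
y = (np.arange(1, N + 1) - (N + 1) / 2) * d      # antenna y-coordinates (x = 0)

def b(r, theta):
    """Near-field beam focusing vector for a UE at polar position (r, theta)."""
    ue = np.array([r * np.cos(theta), r * np.sin(theta)])
    r_n = np.sqrt(ue[0] ** 2 + (ue[1] - y) ** 2)   # exact antenna-to-UE distances
    return np.exp(-1j * k * (r_n - r)) / np.sqrt(N)

D = N * d
print(f"aperture D = {D:.3f} m, Rayleigh distance 2D^2/lambda = {2 * D**2 / lam:.1f} m")

# Toy codebook: codewords w_s = b*(r_s, theta_s) on a coarse (r, theta) grid.
codebook = np.array([np.conj(b(r, t))
                     for r in np.linspace(5, 200, 40)
                     for t in np.linspace(-np.pi / 3, np.pi / 3, 60)])

h = b(30.0, 0.2)                                 # normalized LoS channel direction
corr = np.abs(codebook @ h)                      # |h^T w_s| for every codeword
s_star = int(np.argmax(corr))
print("fed-back codeword index:", s_star, " correlation:", round(float(corr[s_star]), 3))
```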
## III Codeword Quantization Performance Analysis This section first investigates the correlation between the near-field codewords and the channel vectors. The transform domain perspective for analyzing correlation function is proposed, demonstrating many desired mathematical properties. Secondly, the section provides a fitting formula for the correlation quantization performance of the ULA and UPA codewords, which serves as inspiration for codebook design. ### _Correlation Function for ULA systems_ Codewords can be selected from the beam focusing vectors, i.e., \(\mathbf{w}_{s}=\mathbf{b}^{\star}(r_{s},\theta_{s})\), they can be viewed as LoS channel vectors at specific positions. In the ULA systems, the correlation between the codeword \(\mathbf{w}_{s}\) and the normalized channel vector pointing to \((r_{q},\theta_{q})\) can be calculated as \[\begin{split}\tau\big{(}r_{s},\theta_{s};r_{q},\theta_{q}\big{)}=& \big{|}\mathbf{b}\big{(}r_{q},\theta_{q}\big{)}\mathbf{b}^{\star}(r_{s}, \theta_{s}\big{)}\big{|}\\ =&\frac{1}{N}\Bigg{|}\sum\limits_{n=1}^{N}\exp\Bigg{(}-j \frac{2\pi}{\lambda}\bigg{(}\big{(}\sin\theta_{q}-\sin\theta_{s}\big{)}y_{n} \\ &+\Big{(}\frac{\cos^{2}\theta_{s}}{2r_{s}}-\frac{\cos^{2}\theta _{q}}{2r_{q}}\big{)}y_{n}^{2}\bigg{)}\Bigg{)}\Bigg{|}.\end{split} \tag{12}\] Let \(\alpha_{i}=\frac{\lambda\cos^{2}\theta_{s}}{4r_{i}}\) and \(\beta_{i}=\sin\theta_{i}\) with \(i=s,q\). \(\delta_{\alpha}\) and \(\delta_{\beta}\) respectively represent as the position difference between \(\mathbf{h}=\mathbf{b}(r_{q},\theta_{q})\) and \(\mathbf{w}_{s}\), which can be expressed as \[\delta_{\alpha}=\alpha_{q}-\alpha_{s},\quad\delta_{\beta}=\beta_{q}-\beta_{s}. \tag{13}\] Then, (12) can be simplified as \[\begin{split} f&\left(\delta_{\alpha},\delta_{\beta} \right)\\ =&\frac{1}{N}\Bigg{|}\sum\limits_{n=1}^{N}\exp\Bigg{(} -j\pi\bigg{(}-\delta_{\alpha}n^{2}+\Big{(}\delta_{\beta}+\delta_{\alpha} \big{(}N+1\big{)}\big{)}n\bigg{)}\Bigg{)}\Bigg{|}.\end{split} \tag{14}\] Without loss of generality, the correlation between the codeword and the channel vector always satisfies \(f\left(\delta_{\alpha},\delta_{\beta}\right)\leq 1\). The condition \(f\left(\delta_{\alpha},\delta_{\beta}\right)=1\) holds if and only if \(\delta_{\alpha}=0\) and \(\delta_{\beta}=0\). Consequently, a quantization error is always present when a codeword quantizes a channel other than itself. Fig. 2: Near-field channel model for UPA communication system. To further explore the characteristics of quantization performance, we perform a normalization substitution for the variables in the above equation. We set \(\tilde{\delta}_{\alpha}\!=\!\delta_{\alpha}N^{2}\), \(\tilde{\delta}_{\beta}\!=\!\delta_{\beta}N\) and \(t\!=\!\frac{n}{N-1}\!-\!\frac{1}{2}\). For the antenna with sufficiently large of \(N\), the above formula can be approximated as \[\tilde{f}\left(\tilde{\delta}_{\alpha},\tilde{\delta}_{\beta}\right)\approx \frac{1}{N}\left|\int_{-1/2}^{1/2}\exp\left(-j\pi\left(\tilde{\delta}_{\beta}t -\tilde{\delta}_{\alpha}t^{2}\right)\right)dt\right|. \tag{15}\] We plot the graph of \(\tilde{f}\left(\tilde{\delta}_{\alpha},\tilde{\delta}_{\beta}\right)=c\) in Fig. 3, where \(c\in(0,1)\). Interestingly, the boundaries of the codeword quantization areas resemble ellipses. The quantization performance remains independent of the frequency. The codeword quantization regions are solely determined by the number of antennas and the minimum correlation \(c\). 
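As a quick sanity check of this normalization, the short sketch below evaluates the exact sum of Eq. (14) at fixed normalized offsets \((\delta_{\alpha}N^{2},\delta_{\beta}N)\) for several array sizes; the test offsets are arbitrary illustrative values.

```python
import numpy as np

# Numerical check of the normalization: evaluating the exact sum of Eq. (14)
# at fixed normalized offsets (delta_alpha*N^2, delta_beta*N) yields nearly the
# same correlation for different array sizes N.  Test offsets are arbitrary.

def corr(N, d_alpha, d_beta):
    """Exact codeword/channel correlation of Eq. (14) for an N-element ULA."""
    n = np.arange(1, N + 1)
    phase = -np.pi * (-d_alpha * n**2 + (d_beta + d_alpha * (N + 1)) * n)
    return np.abs(np.exp(1j * phase).sum()) / N

d_alpha_norm, d_beta_norm = 0.8, 0.6      # normalized offsets
for N in (128, 256, 512, 1024):
    val = corr(N, d_alpha_norm / N**2, d_beta_norm / N)
    print(f"N = {N:5d}:  f = {val:.4f}")
```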
When \(c\) is constant, the coverage area of codewords is inversely proportional to \(N^{2}\) in the \(\alpha\) domain and to \(N\) in the \(\beta\) domain. Unfortunately, the exponential term in (14) is relatively complicated. Most existing literature treats the exponential term via the Fresnel integral, which still makes it difficult to analyze the quantization performance of codewords directly [22]. To solve this problem, we will give an approximate fitting correlation formula. Before that, we first explore the correlation properties of codewords. **Property 1** (Stationary): _The correlation between the codeword and the channel vector is only related to \(\delta_{\alpha}\) and \(\delta_{\beta}\) and is independent of the codeword. For two different codewords \(\mathbf{w}_{s}\) and \(\mathbf{w}_{s^{\prime}}\), the quantization performance for the channel vectors within their quantization areas is always the same. The stationarity in the ULA channel can be formulated as_ \[f(\alpha_{s},\beta_{s};\alpha_{s}+\delta_{\alpha},\beta_{s}+\delta_{\beta})=f\left(\alpha_{s^{\prime}},\beta_{s^{\prime}};\alpha_{s^{\prime}}+\delta_{\alpha},\beta_{s^{\prime}}+\delta_{\beta}\right). \tag{16}\] **Proof 1**: _See Appendix A. \(\blacksquare\)_ **Property 2** (Symmetric): _The correlation distribution of the near-field channel is symmetric. Within the quantization area of the codeword, the correlation between the codeword and the channel vectors is symmetric about the codeword in the \(\alpha\) and \(\beta\) domains, which can be expressed as_ \[f\left(\delta_{\alpha},\delta_{\beta}\right)=f\left(\delta_{\alpha},-\delta_{\beta}\right)=f\left(-\delta_{\alpha},\delta_{\beta}\right)=f\left(-\delta_{\alpha},-\delta_{\beta}\right). \tag{17}\] **Proof 2**: _See Appendix B. \(\blacksquare\)_ Inspired by (14) and the above properties, we use a polynomial function to approximate the correlation function in Corollary 1. **Corollary 1**: _For any ULA codeword \(\mathbf{w}_{s}=\mathbf{b}^{*}(\alpha_{s},\beta_{s})\), the fitting polynomial formula of the quantization performance \(f(\delta_{\alpha},\delta_{\beta})\) can always be expressed as_ \[f\left(\delta_{\alpha},\delta_{\beta}\right)\approx p_{\alpha}\delta_{\alpha}^{2}N^{4}+p_{\beta}\delta_{\beta}^{2}N^{2}+1, \tag{18}\] _where_ \[\begin{split} p_{\alpha}&=-0.025983670363830,\\ p_{\beta}&=-0.391749735984250.\end{split} \tag{19}\] _To conclude, for the ULA codeword \(\mathbf{w}_{s}\) with a minimum quantization correlation of \(c\), the distribution of \(\delta_{\alpha}\) and \(\delta_{\beta}\) that satisfies \(f(\delta_{\alpha},\delta_{\beta})=c\) can be considered equivalent to_ \[p_{\alpha}\delta_{\alpha}^{2}N^{4}+p_{\beta}\delta_{\beta}^{2}N^{2}=c-1. \tag{20}\] _This formula can be further simplified into the following form_ \[\frac{p_{\alpha}\delta_{\alpha}^{2}N^{4}}{c-1}+\frac{p_{\beta}\delta_{\beta}^{2}N^{2}}{c-1}=1. \tag{21}\] Evidently, the correlation fitting formula in Corollary 1 is an elliptic function. The ellipse is always centered at \((\alpha_{s},\beta_{s})\), and it is the quantization boundary of \(\mathbf{w}_{s}\) when the minimum quantization correlation is \(c\). The minimum quantization correlation determines the major and minor axes of the ellipse: the axis lengths and the quantization area of the codeword decrease as \(c\) increases. Further, the axis lengths of the ellipse are also determined by \(N\).
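The fitting coefficients in (19) can be checked directly against the exact correlation of Eq. (14). The sketch below compares the two at a few illustrative normalized offsets; as expected of a second-order fit, the agreement is approximate rather than exact.

```python
import numpy as np

# Quick numerical comparison of the exact ULA correlation of Eq. (14) with the
# elliptic fit 1 + p_alpha*delta_alpha^2*N^4 + p_beta*delta_beta^2*N^2 of
# Corollary 1.  The test offsets below are arbitrary illustrative values.

N = 512
p_alpha = -0.025983670363830
p_beta = -0.391749735984250

def corr_exact(d_alpha, d_beta):
    """Exact correlation of Eq. (14) as a function of the offsets only."""
    n = np.arange(1, N + 1)
    phase = -np.pi * (-d_alpha * n**2 + (d_beta + d_alpha * (N + 1)) * n)
    return np.abs(np.exp(1j * phase).sum()) / N

def corr_fit(d_alpha, d_beta):
    return 1 + p_alpha * d_alpha**2 * N**4 + p_beta * d_beta**2 * N**2

for d_alpha_t, d_beta_t in [(0.0, 0.5), (1.0, 0.0), (0.7, 0.3)]:   # normalized offsets
    d_alpha, d_beta = d_alpha_t / N**2, d_beta_t / N                # undo the N-scaling
    print(f"exact {corr_exact(d_alpha, d_beta):.4f}   fit {corr_fit(d_alpha, d_beta):.4f}")
```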
The axis lengths of the ellipse in the \(\alpha\) and \(\beta\) domains are inversely proportional to \(N^{2}\) and \(N\), respectively. The quantization performance of the ULA codeword \(\hat{\mathbf{w}}=\mathbf{b}^{*}(0,0)\) can be written as \(f(\alpha_{q},\beta_{q})\). The channel vectors quantized with a minimum correlation of \(c\) are always distributed within the ellipse interior, which can be formulated as \[\Omega=\left\{\mathbf{b}\left(\alpha_{q},\beta_{q}\right)\,\middle|\,\frac{p_{\alpha}N^{4}\alpha_{q}^{2}}{c-1}+\frac{p_{\beta}N^{2}\beta_{q}^{2}}{c-1}\leq 1\right\}. \tag{22}\] With stationarity in the ULA system, the quantization area of any codeword can be represented using ellipses with the same axis lengths as those of \(\hat{\mathbf{w}}\) but different centers. The formula in Corollary 1 is concise and provides strong theoretical support for the codebook design schemes outlined in this paper.

Fig. 3: Contour distribution between codeword quantization correlation and position difference with \(f=100\,\mathrm{GHz}\) and \(N=512\).

### _Correlation Function for UPA systems_

In the UPA system, the correlation between the codeword \(\mathbf{w}_{s}=\mathbf{b}^{*}(r_{s},\theta_{s},\phi_{s})\) and the channel vector pointing to \((r_{q},\theta_{q},\phi_{q})\) can be calculated as \[\tau\left(r_{q},\theta_{q},\phi_{q};r_{s},\theta_{s},\phi_{s}\right)=\left|\mathbf{b}\left(r_{s},\theta_{s},\phi_{s}\right)\mathbf{b}^{*}\left(r_{q},\theta_{q},\phi_{q}\right)\right|. \tag{23}\] Let \(\psi_{i}=\sin\theta_{i}\cos\phi_{i}\), \(\varphi_{i}=\sin\theta_{i}\sin\phi_{i}\) and \(\rho_{i}=\frac{\lambda}{r_{i}}\) with \(i=s,q\). We denote the position differences between the codeword and the channel vector as \[\begin{split}\delta_{\psi_{s}}&=\sin\theta_{q}\cos\phi_{q}-\sin\theta_{s}\cos\phi_{s},\\ \delta_{\varphi_{s}}&=\sin\theta_{q}\sin\phi_{q}-\sin\theta_{s}\sin\phi_{s},\\ \delta_{\rho_{s}}&=\frac{\lambda}{r_{q}}-\frac{\lambda}{r_{s}}.\end{split} \tag{24}\] Then, (23) can be rewritten as (25). The cross-term \(x_{m}y_{n}\) contained in the UPA channel vector prevents the use of the Kronecker product to decouple the channel vector, and ignoring this cross-term would result in significant performance errors due to the loss of crucial CSI. Before giving the method to solve this problem, we discuss the characteristics of the UPA channel correlation in Property 3 and Property 4. **Property 3** (Non-stationary): _The quantization areas of different UPA codewords are not consistent under the same minimum correlation \(c\). For two UPA codewords \(\mathbf{w}_{s}=\mathbf{b}^{*}(\psi_{s},\varphi_{s},\rho_{s})\) and \(\mathbf{w}_{s^{\prime}}=\mathbf{b}^{*}(\psi_{s^{\prime}},\varphi_{s^{\prime}},\rho_{s^{\prime}})\), the non-stationary feature can be expressed as_ \[f\left(\psi_{s},\varphi_{s},\rho_{s};\delta_{\psi},\delta_{\varphi},\delta_{\rho}\right)\neq f\left(\psi_{s^{\prime}},\varphi_{s^{\prime}},\rho_{s^{\prime}};\delta_{\psi},\delta_{\varphi},\delta_{\rho}\right). \tag{26}\] **Proof 3**: _See Appendix C. \(\blacksquare\)_ **Property 4** (Asymmetrical): _For the UPA codeword \(\mathbf{w}_{s}=\mathbf{b}^{*}(\psi_{s},\varphi_{s},\rho_{s})\), the quantization performance of the codeword is asymmetrical: unlike (17), the correlation is in general not invariant under sign changes of the position differences \(\delta_{\psi}\), \(\delta_{\varphi}\) and \(\delta_{\rho}\)._
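The non-stationarity stated in Property 3 can be verified numerically with the exact beam focusing vector of Eq. (5). In the sketch below, two codewords at different angular positions are probed with the same offset in the \(\rho\) domain and return different correlations; the array size, carrier frequency and test positions are illustrative values, not taken from the paper.

```python
import numpy as np

# Numerical illustration of Property 3 (non-stationarity) in the UPA model:
# two codewords at different angular positions, probed with the SAME offset in
# the rho = lambda/r domain, yield different correlations.  Array size, carrier
# frequency and test positions are illustrative.

freq = 100e9
lam = 3e8 / freq
k = 2 * np.pi / lam
N = 16                                          # N x N UPA
d = lam / 2
idx = (np.arange(1, N + 1) - (N + 1) / 2) * d
xm, yn = np.meshgrid(idx, idx, indexing="ij")

def b_upa(psi, varphi, rho):
    """Exact UPA beam focusing vector of Eq. (5) in the (psi, varphi, rho) coordinates."""
    r = lam / rho
    ue = r * np.array([psi, varphi, np.sqrt(1.0 - psi**2 - varphi**2)])
    r_mn = np.sqrt((xm - ue[0])**2 + (yn - ue[1])**2 + ue[2]**2)
    return np.exp(-1j * k * (r_mn - r)).ravel() / N

def corr(pos_s, pos_q):
    """Correlation of Eq. (23) between codeword b*(pos_s) and channel b(pos_q)."""
    return float(np.abs(np.sum(b_upa(*pos_q) * np.conj(b_upa(*pos_s)))))

d_rho = -0.03                                   # identical rho offset for both codewords
for psi, varphi in [(0.0, 0.0), (0.6, 0.6)]:
    pos_s = (psi, varphi, lam / 0.05)           # codeword focused at r = 5 cm
    pos_q = (psi, varphi, pos_s[2] + d_rho)     # channel at the same angle, offset in rho
    print(f"codeword at psi={psi}, varphi={varphi}: correlation = {corr(pos_s, pos_q):.3f}")
```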
## IV Near-field ULA Codebook Design

### _Uniform Codebook Quantization Scheme_

In this section, we propose a ULA codebook that samples the \(\alpha\)-\(\beta\) domain uniformly, as shown in Fig. 5(a). Without loss of generality, we illustrate the scheme with the codeword \(\hat{\mathbf{w}}=\mathbf{b}^{*}(0,0)\). The quantization area of the codeword is rectangular. A red square at the center of the rectangular quantization area represents the codeword, and each blue triangle represents the intersection of the quantization intervals of adjacent codewords. Multiple layouts of the rectangular vertices on the quantization boundary constitute multiple quantization schemes. To improve the quantization accuracy, it is always desirable that the codeword has the largest quantization area under the minimum quantization correlation \(c\). Therefore, the goal can be cast as maximizing the area of a rectangle inscribed in a given ellipse. The quantization performance of the codeword \(\hat{\mathbf{w}}\) can always be represented by (21). Consider a vertex \((\alpha^{\star},\beta^{\star})\) of the rectangle in Fig. 5(b), where \(\beta^{\star}>0\) and \(\alpha^{\star}>0\). According to the Cauchy-Schwarz inequality, the area of a rectangle inscribed in the ellipse is largest when the vertex is located at \(\left(\frac{1}{N^{2}}\sqrt{\frac{(c-1)}{2p_{\alpha}}},\frac{1}{N}\sqrt{\frac{(c-1)}{2p_{\beta}}}\right)\). Therefore, when the channel correlation is \(c\), the sampling intervals achieving the maximum quantization area in the \(\alpha\)-\(\beta\) domain can be calculated as \[\Delta\alpha=2\alpha^{\star}=\frac{1}{N^{2}}\sqrt{\frac{2\left(c-1\right)}{p_{\alpha}}},\quad\Delta\beta=2\beta^{\star}=\frac{1}{N}\sqrt{\frac{2\left(c-1\right)}{p_{\beta}}}. \tag{31}\] Consider that the UE is distributed within a range of distances \(r\in\left[0.62\sqrt{\frac{D^{3}}{\lambda}},\infty\right)\) and angles \(\theta\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\). The maximum quantization ranges in the \(\alpha\) and \(\beta\) domains can be calculated as \[Q_{\alpha}=\sqrt{\frac{\lambda}{2.48D^{3}}}\approx\frac{1}{N\sqrt{N}},\quad Q_{\beta}=2. \tag{32}\] Then, the numbers of codewords in the \(\alpha\) domain and the \(\beta\) domain are given by \[S_{\alpha}=\frac{Q_{\alpha}}{\Delta\alpha}=\sqrt{\frac{Np_{\alpha}}{2\left(c-1\right)}},\quad S_{\beta}=\frac{Q_{\beta}}{\Delta\beta}=N\sqrt{\frac{2p_{\beta}}{\left(c-1\right)}}. \tag{33}\] Thus, the total number of codewords required to achieve the minimum number of feedback bits can be calculated as \[S_{ULA}=S_{\alpha}S_{\beta}=\frac{N\sqrt{Np_{\alpha}p_{\beta}}}{\left(1-c\right)}. \tag{34}\] The \(s_{\alpha}\)-th sampling point in the \(\alpha\) domain can be expressed as \[\alpha_{s_{\alpha}}=\left(s_{\alpha}-\frac{1}{2}\right)\Delta\alpha,\ \ s_{\alpha}=1,\ldots,\left\lfloor S_{\alpha}\right\rfloor. \tag{35}\] And the \(s_{\beta}\)-th sampling point in the \(\beta\) domain can be expressed as \[\beta_{s_{\beta}}=-1+\left(s_{\beta}-\frac{1}{2}\right)\Delta\beta,\ \ s_{\beta}=1,\ldots,\left\lfloor S_{\beta}\right\rfloor.
\tag{36}\] (34) shows that the number of quantized bits of the codeword is only related to the channel correlation \(c\) and the number \(N\) of antennas but is independent of the frequency. The number of codewords in the \(\alpha\) domain is proportional to \(\sqrt{N}\), and the number of codewords in the \(\beta\) domain is proportional to \(N\). Moreover, if the number of antennas \(N\) remains unchanged, an increase in the channel correlation \(c\) can result in a greater number of codebook quantization vectors. ### _Dislocation Quantization Codebook Scheme_ In this section, we propose a dislocation ULA codebook to further improve the quantized accuracy of codewords. The dislocation codebook can be viewed as a combination of two sets of uniform codebooks, as shown in Fig. 6(a). \(\overline{\Delta\alpha}\) and \(\overline{\Delta\beta}\) are the sampling steps of the dislocation codebook in the \(\alpha\) and \(\beta\) domains. Notably, in comparison to uniform sampling, the dislocated sampling approach introduces a distinct characteristic in the \(\alpha\) domain: the \(\beta\) value of two adjoining columns of sampling points consistently differs by \(\frac{\overline{\Delta\beta}}{2}\). Fig. 5: ULA uniform codebook: (a) Resource division in the \(\alpha\)-\(\beta\) domain. (b) Quantization area of a codeword with minimum correlation \(c\). The quantization area of each codeword is distributed in a regular hexagon. Three non-adjacent points within the hexagon can form a triangle, as shown in Fig. 6(b). The area of the triangle is always half the area of the hexagon. Therefore, the problem of maximizing the quantization area of a dislocation codeword can be transformed into the problem of finding the max inscribed triangle of an ellipse. Consider a vertex \((\alpha^{\star},\beta^{\star})\) of the triangle in the Fig. 6(b), where \(\alpha^{\star}\!>\!0\) and \(\beta^{\star}\!>\!0\). The area of the inscribed triangle is largest evenly when the point locates at \(\alpha^{\star}\!=\!\frac{1}{2N^{2}}\sqrt{\frac{(c-1)}{p_{\alpha}}}\) and \(\beta^{\star}\!=\!\frac{1}{2N}\sqrt{\frac{3(c-1)}{p_{\beta}}}\). Therefore, the optimal sampling steps of dislocation ULA codebook in the \(\alpha\) domain and the \(\beta\) domain can be calculated as \[\overline{\Delta\alpha}\!=\!6\alpha^{\star}\!=\!\frac{3}{N^{2}}\sqrt{\frac{(c- 1)}{p_{\alpha}}},\quad\overline{\Delta\beta}\!=\!2\beta^{\star}\!=\!\frac{1}{N }\sqrt{\frac{3\left(c-1\right)}{p_{\beta}}}. \tag{37}\] The number of the sampling points in the \(\alpha\) and \(\beta\) domains is \[\overline{S}_{\alpha}=\frac{1}{3}\sqrt{\frac{Np_{\alpha}}{(c-1)}}\,\quad \overline{S}_{\beta}=2N\sqrt{\frac{p_{\beta}}{3\left(c-1\right)}}. \tag{38}\] Therefore, the total number of sampling points can be calculated as \[\overline{S}_{ULA}=2\overline{S}_{\alpha}\overline{S}_{\beta}=\frac{4N}{3(1-c )}\sqrt{\frac{Np_{\alpha}p_{\beta}}{3}}. \tag{39}\] The \(\overline{s}_{\alpha}\)-th sampling point in the \(\alpha\) domain is \[\alpha_{\overline{s}_{\alpha}}=\begin{cases}\frac{2\overline{\Delta\alpha}}{3 }+\left(\overline{s}_{\alpha}-1\right)\overline{\Delta\alpha},&\overline{s}_ {\beta}\sim odd\\ \frac{\overline{\Delta\alpha}}{6}+\left(\overline{s}_{\alpha}-1\right) \overline{\Delta\alpha},&\overline{s}_{\beta}\sim even\end{cases} \tag{40}\] where \(\overline{s}_{\alpha}=1\ldots\left[\overline{S}_{\alpha}\right]\). 
And the \(s_{\beta}\)-th sampling point in the \(\beta\) domain is \[\beta_{\overline{s}_{\beta}}=\begin{cases}-1+\left(\overline{s}_{\beta}-1 \right)\overline{\Delta\beta},&\overline{s}_{\alpha}\sim odd\\ -1+\left(\overline{s}_{\beta}-\frac{1}{2}\right)\overline{\Delta\beta},& \overline{s}_{\alpha}\sim even\end{cases} \tag{41}\] (34) and (39) provide the number of codewords under uniform and dislocation sampling. The number of codewords in the dislocation sampling scheme is only \(75\%\) of the number of uniform sampling codewords. Therefore, in the same space, the dislocation quantization scheme achieves the goal of low codebook quantization overhead. We compare the number of sampling points in the \(\alpha\) and \(\beta\) domains of the two schemes in Example 1. **Example 1**: _In the case of the same channel correlation \(c=0.95\) and different antenna numbers \(N\), the optimal codeword numbers for the \(\alpha\) and \(\beta\) domains are summarized in Table I, respectively._ It should be noted that, the number of sampling points in the \(\alpha\) domain is significantly higher than that in the \(\beta\) domain. This phenomenon highlights the robustness of near-field beamforming in the \(\beta\) domain, and the denser sampling of the \(\beta\) domain enhances codeword quantization accuracy. Consequently, with the same amount of feedback bits, dense sampling in the \(\beta\) domain will be more conducive to improving the quantization performance of the codeword. ## V Near-field UPA Codebook Design This section will provide the UPA codebooks with uniform sampling and dislocation sampling, respectively. The non-stationary feature of the UPA channel leads to varying quantization performance for each codeword. To address this issue, we initiate by defining a reference ellipsoid and assume that the correlation formula for each codeword is always the same as the reference ellipsoid. Based on this assumption of stationarity, the uniform codebook and dislocation codebook can be obtained. The performance of the UPA codebook based on the reference ellipsoid is the lower bound of quantization performance under the assumption of stationarity. In other words, the actual quantization area of each codeword is all within the reference ellipsoid of each UPA codebook. Corollary 1 demonstrates that for any codeword, the pointing position of the channel vector that satisfies the minimum quantization correlation of \(c\) is always uniformly distributed on an ellipsoid centered around the quantization center of the codeword. It's important to note that, due to the non-stationary characteristics of UPA channels, the size of the ellipsoid enclosed by different codewords meeting the same minimum quantization correlation conditions varies. Consequently, when designing the optimal sampling interval between UPA codewords, we can't directly apply the correlation feature of any single codeword to all codewords, as is the case with ULA codebooks. In order to solve the problem caused by non-stationary features when designing the optimal sampling interval for codebooks, we hope to find a reference ellipsoid to describe the quantization features of any codeword in space. This reference ellipsoid provides the maximum allowable space, ensuring that all codewords can guarantee the minimum quantization correlation \(c\) at this volume. Below, we define a reference ellipsoid. 
**Definition 1**: _Consider sampling \(T_{\psi}\), \(T_{\varphi}\) and \(T_{\rho}\) points on the \(\psi\), \(\varphi\) and \(\rho\) domains, respectively. From Corollary 2, it Fig. 6: ULA dislocation codebook: (a) Resource division in the \(\alpha\)-\(\beta\) domain. (b) Quantization area of a codeword with minimum correlation \(c\). can be concluded that when the quantization performance of all codewords satisfies the minimum correlation \(c^{\star}\), the sets of ellipsoidal axis lengths enclosed by the quantization boundaries of all codewords are respectively represented as \(\mathbf{L}_{\psi}=\big{\{}l_{\psi,1},\ldots,l_{\psi,T_{\psi}}\big{\}}\), \(\mathbf{L}_{\varphi}=\big{\{}l_{\varphi,1},\ldots,l_{\varphi,T_{\varphi}}\big{\}}\), \(\mathbf{L}_{\rho}=\big{\{}l_{\rho,1},\ldots,l_{\rho,T_{\varphi}}\big{\}}\). Meanwhile, \(l_{\psi}^{\star}=\min\ \mathbf{L}_{\psi}\), \(l_{\varphi}^{\star}=\min\ \mathbf{L}_{\varphi}\) and \(l_{\rho}^{\star}\!=\!\min\ \mathbf{L}_{\rho}\) are used as the axial length of the reference ellipsoid. Thus, the formula of the reference ellipsoid can be written as \[\frac{\delta_{\psi_{\varphi}}^{2}}{\big{(}l_{\psi}^{\star}\big{)}^{2}}+\frac{ \delta_{\varphi_{\sigma}}^{2}}{\big{(}l_{\varphi}^{\star}\big{)}^{2}}+\frac{ \delta_{\rho_{\sigma}}^{2}}{\big{(}l_{\rho}^{\star}\big{)}^{2}}=1. \tag{42}\] According to the fitting formula given in Corollary 2, the fitting coefficient can be calculated as \[p_{\psi}^{\star}=\frac{\mathrm{c}-1}{(l_{\psi}^{\star}N)^{2}},\ \ \ \ p_{\varphi}^{\star}=\frac{\mathrm{c}-1}{(l_{\varphi}^{\star}N)^{2}},\ \ \ \ p_{\rho}^{\star}=\frac{\mathrm{c}-1}{(l_{\rho}^{\star}N^{2})^{2}}. \tag{43}\] The formula for the reference ellipsoid can be completed as \[\frac{\delta_{\psi_{\varphi}}^{2}}{\frac{\mathrm{c}-1}{p_{\psi}^{\star}N^{2}} }+\frac{\delta_{\varphi_{\sigma}}^{2}}{\frac{\mathrm{c}-1}{p_{\varphi}^{\star }N^{2}}}+\frac{\delta_{\rho_{\sigma}}^{2}}{\frac{\mathrm{c}-1}{p_{\rho}^{ \star}N^{4}}}=1. \tag{44}\] It is important to note that the reference ellipsoid is a virtual area reconstructed by considering the minimum value of all codeword quantization areas. Therefore, codewords with the reference ellipsoid as the quantization area may not exist. The minimum correlation of certain codewords may be smaller than \(c\) if the reference ellipsoid becomes larger. Thus, the reference ellipsoid represents the largest shape that can describe the quantization areas of all codewords in the UPA channel. Smaller reference ellipsoids allow for a higher minimum correlation for each codeword. Since the reference ellipsoid represents the largest ellipsoid achievable under the assumption of stationarity, it sets a lower bound on the performance achievable by codebook schemes based on the assumption of stationarity. ### _Uniform Codebook Quantization Scheme_ In this section, we propose the UPA uniform codebook scheme. The codebook scheme is uniformly sampled in the \(\psi\), \(\varphi\) and \(\rho\) domains, as shown in Fig. 7(a). Among them, the red rectangle represents the codeword. The actual quantization area of each codeword is a cuboid. Under the assumption of a stationary UPA channel, the quantization areas of codewords can be represented using a reference ellipsoid expressed by (44). The quantization area of a codeword is the inscribed cuboid of the ellipsoid. For adjacent eight codewords, the boundaries of their quantization regions consistently intersect at a single point, as shown in Fig. 7(b). 
Therefore, the problem of maximizing codeword quantization regions can be transformed into finding the maximum inscribed rectangular cuboid of an ellipsoid. Taking codeword \(\mathsf{W}\!=\!\mathbf{b}^{\star}(0,0,0)\) as an example, \((\psi^{\star}\!,\!\varphi^{\star}\!,\!\rho^{\star})\) represents a vertex located on the reference ellipsoid corresponding to this codeword. Here, it's crucial to note that \(\psi^{\star}\!>\!0\), \(\varphi^{\star}\!>\!0\), and \(\rho^{\star}\!>\!0\). Under the conditions \(\psi^{\star}\!=\!\frac{1}{N}\sqrt{\frac{\mathrm{c}-1}{3p_{\varphi}^{\star}}}\), \(\varphi^{\star}\!=\!\frac{1}{N}\sqrt{\frac{\mathrm{c}-1}{3p_{\varphi}^{\star}}}\) and \(\rho^{\star}\!=\!\frac{1}{N}\sqrt{\frac{\mathrm{c}-1}{3p_{\varphi}^{\star}}}\), the volume of the inscribed rectangular cuboid within the ellipsoid reaches its maximum. Consequently, the quantization space of the codeword achieves its maximum extent. In such a scenario, the optimal sampling steps can be calculated as \[\Delta\psi=2\psi^{\star} =\frac{2\sqrt{3}}{3N}\sqrt{\frac{\mathrm{c}-1}{p_{\psi}^{\star}}}, \tag{45}\] \[\Delta\varphi=2\varphi^{\star} =\frac{2\sqrt{3}}{3N}\sqrt{\frac{\mathrm{c}-1}{p_{\varphi}^{\star }}},\] \[\Delta\rho=2\rho^{\star} =\frac{2\sqrt{3}}{3N^{2}}\sqrt{\frac{\mathrm{c}-1}{p_{\rho}^{\star }}}.\] The codebook is designed for a 3D space with a distance range of \(r\in\big{[}0.62\sqrt{\frac{D^{3}}{\Delta},\infty}\big{)}\), elevation angle of \(\big{[}-\frac{\pi}{2},\frac{\pi}{2}\big{]}\) and azimuth angle of \(\big{[}0,\pi\big{]}\). And the range of the 3D space in the transformed domain is given by \[Q_{\psi}=2,\ \ \ Q_{\varphi}=2,\ \ \ Q_{\rho}\approx\frac{2.7}{N\sqrt{N}}. \tag{46}\] Therefore, the number of codewords in UPA uniform codebook can be calculated as \[S_{\psi}=\sqrt{\frac{3p_{\psi}^{\star}}{c-1}}N,S_{\varphi}=\sqrt{\frac{3p_{ \varphi}^{\star}}{c-1}}N,S_{\rho}\!\approx\!2.3\sqrt{\frac{Np_{\rho}^{\star}}{c -1}}. \tag{47}\] The positions represented by the \(s_{\psi}\)-th, \(s_{\varphi}\)-th and \(s_{\rho}\)-th sampling points in \(\psi\), \(\varphi\) and \(\rho\) domain can be expressed as \[\psi_{s_{\psi}} =-1+\left(s_{\psi}-\frac{1}{2}\right)\Delta\psi,\ \ \ s_{\psi}=1,\ldots,\lfloor S_{\psi}\rfloor\,,\] \[\varphi_{s_{\varphi}} =-1+\left(s_{\varphi}-\frac{1}{2}\right)\Delta\varphi,\ \ \ s_{\varphi}=1,\ldots,\lfloor S_{\varphi}\rfloor\,, \tag{48}\] \[\rho_{s_{\rho}} =\left(s_{\rho}-\frac{1}{2}\right)\Delta\rho,\ \ \ s_{\rho}=1,\ldots,\lfloor S_{\rho}\rfloor\,.\] And the total number of sampling points is \[S_{max}=\frac{7N^{2}}{(1-c)}\sqrt{\frac{Np_{\psi}^{\star}p_{\psi}^{\star}p_{ \rho}^{\star}}{c-1}}. \tag{49}\] For the proposed uniform codebook scheme, the number of sampling points is proportional to the number of antennas and minimum quantization correlation. And the sampling step will decrease with the increased minimum quantization correlation and the number of antennas. Fig. 7: UPA dislocation codebook: (a) Resource division in the \(\varphi\)-\(\psi\)-\(\rho\) domain. (b) Quantization area of a codeword with minimum correlation \(c\). ### _Dislocation Quantization Codebook Scheme_ In this section, we will explore a UPA dislocation codebook to further decrease the quantization overhead. The quantization area of UPA dislocation codeword is a hexagonal prism, illustrated in the Fig. 8(a). The UPA dislocation codebook can be viewed as a combination of two same sets of UPA uniform codebooks. 
Here, \(\overline{\Delta\psi}\), \(\overline{\Delta\varphi}\), and \(\overline{\Delta\rho}\) denote the sampling intervals of the UPA dislocation codebook in the \(\psi\), \(\varphi\), and \(\rho\) domains, respectively. By shifting adjacent sampling points in the \(\psi\) domain of a uniform codebook by \(\delta/2\) in the \(\varphi\) domain, a dislocated UPA codebook can be obtained. Fig. 8(b) is the quantization performance of the codeword \(\bar{\psi}\) considering a minimum correlation of \(c\). The optimization of the quantization area for the UPA dislocated codebook can be reformulated as the task of maximizing the volume of an inscribed hexagonal prism within the quantization area. We select the inscribed hexagonal prism in the Fig. 8(b). A vertex, denoted as \((\psi^{\star},\varphi^{\star},\rho^{\star})\), is a vertex located on the triangular pyramid satisfying conditions \(\rho^{\star}\!<\!0\), \(\psi^{\star}\!>\!0\) and \(\varphi^{\star}\!>\!0\). The quantization space reaches its maximum extent when \(\rho^{\star}\!=\!\sqrt{\frac{c-1}{3p_{\varphi}^{\star}N}}\) and \(\psi^{\star}\!=\!\sqrt{\frac{c-1}{9p_{\varphi}^{\star}N^{\star}}}\), and \(\varphi^{\star}\!=\!\sqrt{\frac{c-1}{2p_{\varphi}^{\star}N^{\star}}}\). Thus, the optimal quantization intervals can be calculated as \[\overline{\Delta\psi} =\frac{1}{N}\sqrt{\frac{6(c-1)}{p_{\psi}^{\star}}}, \tag{50}\] \[\overline{\Delta\varphi} =\frac{1}{N}\sqrt{\frac{2(c-1)}{p_{\varphi}^{\star}}},\] \[\overline{\Delta\rho} =\frac{2\sqrt{3}}{3N^{2}}\sqrt{\frac{c-1}{p_{\rho}^{\star}}}.\] The number of sampling points in the \(\psi\), \(\varphi\), and \(\rho\) domains are respectively \[\overline{S}_{\psi}=2\sqrt{\frac{p_{\psi}^{\star}}{6(c-1)}}N,\overline{S}_{ \varphi}=\sqrt{\frac{2p_{\varphi}^{\star}}{c-1}}N,\overline{S}_{\rho}\approx 2.3\sqrt{\frac{Np_{\rho}^{\star}}{c-1}}. \tag{51}\] The \(\overline{s}_{\rho}\)-th sampling point in the \(\rho\) domain is expressed as \[\rho_{\overline{s}_{\rho}}=\left(\overline{s}_{\rho}-\frac{1}{2}\right) \Delta\rho,\quad\overline{s}_{\rho}=1,\ldots,\left[\overline{S}_{\rho}\right]. \tag{52}\] The \(\overline{s}_{\varphi}\)-th sampling point in the \(\varphi\) domain is calculated as \[\psi_{\overline{s}_{\varphi}}=\left\{\begin{array}{cc}&-1+\left(\overline{s }_{\varphi}-1\right)\overline{\Delta\varphi},\quad\overline{s}_{\psi}\sim even \\ &-1+\frac{\overline{\Delta\varphi}}{2}+\left(\overline{s}_{\varphi}-1\right) \overline{\Delta\varphi},\quad\overline{s}_{\psi}\sim odd\end{array},\right. \tag{53}\] And the \(\overline{s}_{\psi}\)-th sampling point in the \(\psi\) domain is \[\psi_{\overline{s}_{\varphi}}=\left\{\begin{array}{cc}&-1+\left(\overline{s }_{\psi}-1\right)\overline{\Delta\psi},\quad\overline{s}_{\varphi}\sim odd \\ &-1+\frac{\overline{\Delta\psi}}{2}+\left(\overline{s}_{\psi}-1\right) \overline{\Delta\psi},\quad\overline{s}_{\varphi}\sim even\end{array},\right. \tag{54}\] where \(\overline{s}_{\psi}=1\ldots,\left[\overline{S}_{\psi}\right]\). Therefore, the total number of sampling points of dislocated codewords is calculated as \[\overline{S}_{max}=\frac{5.3N^{2}}{(1-c)}\sqrt{\frac{Np_{\varphi}^{\star}p_{ \varphi}^{\star}p_{\rho}^{\star}}{c-1}}. \tag{55}\] It can be seen that \(\frac{\overline{S}_{UPA}}{SUP_{A}}\approx 0.75\). Therefore, the overhead of dislocation codebook is always only 0.75 times that of uniform codewords under the same quantization area. 
This advantageous aspect is highlighted through Example 2, where we compare the numbers of sampling points of the proposed UPA codebook schemes. **Example 2**: _Table II illustrates the number of sampling points for the proposed UPA codebook schemes with minimum correlation \(c=0.95\) and frequency \(f=100\) GHz. The quantization overhead of the UPA dislocation codebook is always smaller than that of the UPA uniform codebook under the same system configuration. Notably, the number of codewords in the \(\psi\) and \(\varphi\) domains consistently surpasses that in the \(\rho\) domain. This underscores that the angle domain plays a more decisive role than the distance domain in the quantization of the UPA channel._ ## VI Simulation Results In this section, we provide simulation results to illustrate the performance of the proposed codebook schemes for ULA and UPA systems. The simulation considers the ULA system in Fig. 1, in which the number of transmit antennas is set to \(N_{1}=512\) and the UE is located randomly in the space spanned by \(\left(r_{1},\theta_{1}\right)\in\left[0.62\sqrt{\frac{(N_{1}d)^{3}}{\lambda}},\infty\right)\times\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\). The UPA system in Fig. 2 is also used for simulation, with \(N_{2}\times N_{2}\) transmit antennas and \(N_{2}=16\). The elevation angle and azimuth angle of the UE are \(\theta_{2}\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\) and \(\phi_{2}\in\left[0,\pi\right]\), respectively, and the distance between the BS and the UE is distributed in \(r_{2}\in\left[0.62\sqrt{\frac{D^{3}}{\lambda}},\infty\right)\). The carrier frequency of both the ULA and UPA systems is set to \(f=100\,\mathrm{GHz}\). With the above configuration, the UE may lie in either the near field or the far field. We evaluate the cumulative distribution function (CDF) of the quantization correlation and the achievable rate of the proposed codebook schemes. The signal-to-noise ratio (SNR) of the ELAA system can be calculated as [19] \[\mathrm{SNR}=\frac{P\eta N}{r^{2}\sigma^{2}}, \tag{56}\] where \(P\) is the transmit power and \(\sigma^{2}\) is the noise power, set as \(\sigma^{2}=-70\,\mathrm{dBm}\). The achievable rate is given by \[R=\log_{2}\left(1+\frac{P\eta N\left|\mathbf{b}^{T}\left(r,\theta\right)\mathbf{w}\right|^{2}}{r^{2}\sigma^{2}}\right). \tag{57}\] The simulation results are averaged over 1000 randomly distributed UEs. Fig. 8: UPA uniform codebook: (a) Resource division in the \(\varphi\)-\(\psi\)-\(\rho\) domain. (b) Quantization area of a codeword with minimum correlation \(c\). ### _ULA Codebook_ Fig. 9 illustrates the CDF of the quantized correlation for various codebook schemes. To provide a comparative analysis, the proposed codebook schemes are compared with the following schemes: * Normal codebook: The codebook uniformly samples the \(\alpha\) and \(\beta\) domains, and the numbers of sampling points in the \(\alpha\) and \(\beta\) domains are both \(\frac{N_{1}}{3}\). * Codebook based on the Lloyd-Max algorithm: The Lloyd-Max algorithm is used to sample the \(\alpha\) and \(\beta\) domains [23]. The number of sampling points for the \(\alpha\) domain is designed as \(\frac{N_{1}}{10}\); for the \(\beta\) domain, it is set as \(N_{1}\). * Sparse polar codebook: The codebook is sampled on sparse grids in the distance and angle domains [22].
The number of codewords is \(S_{P}=\sum_{n=1}^{N_{1}}S_{P}^{(n)}\), where \(N_{1}\) is the sampling number in the \(\alpha\) domain and \(S_{P}^{(n)}\) is the sampling number in the \(\beta\) domain. \(S_{P}\) can be calculated as \(28431\) in the considered ULA system. For a fair comparison, the proposed ULA dislocation and uniform codebook schemes are quantized with \(14.8\) bits. The proposed uniform codebook comprises 2027 sampling points in the \(\alpha\) domain and 14 sampling points in the \(\beta\) domain. On the other hand, the dislocation quantization codebook contains 1111 sampling points in the \(\alpha\) domain and 13 sampling points in the \(\beta\) domain. It is worth noting that the dislocation codebook consistently outperforms the uniform codebook with an equal number of codewords. The performance of the ULA normal codebook is considerably inferior to the proposed codebook schemes, even when employing a larger number of quantization vectors. Furthermore, the proposed schemes demonstrate significant superiority over the Lloyd-Max-based codebook and the sparse polar-domain codebook, which validates the effectiveness of our proposed schemes. In addition, the minimum quantization correlation of the codebook proposed in [22] is smaller than that of the other codebook schemes. Moreover, our simulation includes distances up to infinity, which verifies that the proposed solution also has good applicability in far-field scenarios. Fig. 10 illustrates the achievable rate for two scenarios: the ideal case of perfect CSI and the case in which the precoding matrix is selected based on a codebook. The beamforming scheme with perfect CSI represents the theoretical upper limit. In this comparison, we consider the same codebook schemes shown in Fig. 9. The proposed codebooks and the ideal case of perfect CSI exhibit remarkably similar performance. Notably, the dislocation codebook significantly enhances the achievable rate compared to the uniform codebook. When employing the same quantization bits, we observe that the achievable rate of the proposed codebooks consistently outperforms the rate achieved by the other codebook schemes, particularly as the receiver SNR increases. With \(\mathrm{SNR}=20\) dB, the achievable rate demonstrates an improvement of approximately \(1.4\) bit/s/Hz compared to the ULA normal codebook. ### _UPA Codebook_ Fig. 11 shows the CDF of the quantization correlation of the UPA codebook schemes. We compare the proposed UPA codebook schemes with the other two schemes: * Normal codebook: This codebook employs an identical number of sampling points in the \(\psi\), \(\varphi\), and \(\rho\) domains, consisting of 29 uniform samples within each of these domains. * Codebook based on the Lloyd-Max algorithm: The angle domain is sampled as in the traditional far-field codebook, and the distance domain is sampled using the Lloyd-Max algorithm. The number of sampling points in the \(\psi\), \(\varphi\), and \(\rho\) domains is \(N_{2}\), \(N_{2}\), and \(3N_{2}\), respectively [23]. Fig. 10: Achievable rate against SNR for several codebook schemes in the ULA channel, with \(N_{1}=512\) and \(f=100\,\mathrm{GHz}\). Fig. 9: CDF of the codebook quantization correlation in the ULA channel, with \(N_{1}=512\) and \(f=100\,\mathrm{GHz}\). The minimum quantization correlations of the uniform and dislocation codebook schemes are set as \(0.95\) and \(0.96\), respectively. The quantization precision of the UPA uniform scheme and the UPA dislocation codebook scheme is approximately 14.5 bits.
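Before turning to the UPA results, a minimal sketch (not from the paper) of how the rate comparison in Figs. 10 and 12 follows from Eqs. (56) and (57): the only codebook-dependent quantity is the normalized correlation between the selected codeword and the true steering vector. The transmit power, path gain \(\eta\), and distance below are assumed placeholder values; only \(\sigma^{2}=-70\) dBm is taken from the text.

```python
import math

def achievable_rate(P_dBm, eta, N, r, sigma2_dBm, corr):
    """Rate of Eq. (57) for a codeword whose normalized correlation with the
    channel steering vector is `corr`; corr = 1 gives the perfect-CSI bound."""
    P = 10 ** ((P_dBm - 30) / 10)            # dBm to watts
    sigma2 = 10 ** ((sigma2_dBm - 30) / 10)
    snr = P * eta * N / (r ** 2 * sigma2)    # Eq. (56)
    return math.log2(1 + snr * corr ** 2)    # Eq. (57)

for corr in (1.0, 0.95):                     # perfect CSI vs. minimum correlation 0.95
    rate = achievable_rate(P_dBm=20, eta=1e-3, N=512, r=20.0, sigma2_dBm=-70, corr=corr)
    print(f"corr = {corr:.2f}: {rate:.2f} bit/s/Hz")
# At high SNR the loss of a c = 0.95 codeword is about log2(1/0.95^2), roughly 0.15 bit/s/Hz.
```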
The simulation results demonstrate that the quantized correlation achieved by the proposed codebooks consistently exceeds the predefined minimum correlation. The results indicate that the quantization performance of the UPA dislocation codebook scheme is better than that of the uniform codebook scheme while maintaining the same quantization overhead. Our proposed UPA codebook schemes achieve superior quantization correlation compared to the codebook designed using the Lloyd-Max algorithm. Moreover, the results also reveal that the performance of our proposed codebooks consistently outperforms that of the UPA normal scheme, which uses an equal number of samples in the \(\psi\), \(\varphi\), and \(\rho\) domains. This shows that dense sampling in the angle domain is more conducive to accurate quantization of UPA channels; on the contrary, the distance domain only requires a small number of bits for quantization. We use the different codebook schemes to form the beamforming matrix at the BS in the UPA system and evaluate the achievable rate. Fig. 12 illustrates the results for various SNRs. The outcomes demonstrate that our proposed UPA codebook schemes outperform the other two schemes and closely approach the performance achieved with perfect CSI. At an SNR of \(20\) dB, our proposed schemes exhibit an enhancement of approximately \(0.2\) bit/s/Hz when compared to the UPA normal codebook scheme. ## VII Conclusion This paper introduces a novel codebook design that maximizes the minimum quantization correlation for near-field ELAA channels. The quantization performance of the ULA codebook is stationary and symmetrical, whereas that of the UPA codebook is non-stationary and asymmetrical. The correlation formulas of the ULA and UPA can be fitted as an ellipse and an ellipsoid, respectively. Based on these insights, we propose two ULA codebooks: uniform sampling and dislocation sampling. The dislocation codebook scheme performs comparably to the uniform codebook but needs fewer quantization bits. To address the non-stationarity of the UPA codeword, we propose UPA uniform and dislocation codebook schemes based on the assumption of stationarity. In this way, the achievable minimum quantization correlation of the proposed codebook schemes is always greater than that achieved by a single codeword. Additionally, we emphasize the robustness of the angle domain in ELAA systems. Simulation results confirm that the proposed codebooks achieve minimal quantization bits while maintaining high quantization performance. ## Appendix A proof of property 1 With the same displacement difference, the correlation formula of \(\mathbf{w}_{s}\) and \(\mathbf{w}_{s^{\prime}}\) can be calculated as \[f\left(\alpha_{s},\beta_{s};\alpha_{s}+\delta_{\alpha},\beta_{s}+\delta_{\beta}\right)=\frac{1}{N}\left|\sum_{n=1}^{N}\exp\left(-j\pi\left(\delta_{\alpha}n^{2}+\left(\delta_{\beta}-\delta_{\alpha}\left(N+1\right)\right)n\right)\right)\right|=f\left(\alpha_{s^{\prime}},\beta_{s^{\prime}};\alpha_{s^{\prime}}+\delta_{\alpha},\beta_{s^{\prime}}+\delta_{\beta}\right). \tag{58}\] Then Property 1 is proved. ## Appendix B proof of property 2 (14) can be rewritten as \[f\left(\delta_{\alpha},\delta_{\beta}\right)=\frac{1}{N}\left|\sum_{n=1}^{N}\exp\left(-j\frac{2\pi}{\lambda}\left(\delta_{\beta}y_{n}-\frac{2}{\lambda}\delta_{\alpha}y_{n}^{2}\right)\right)\right|.
\tag{59}\] If \(\beta_{q}\!-\!\beta_{s}\!=\!-\!\delta_{\beta}\), the above formula becomes \[f\left(\delta_{\alpha},-\delta_{\beta}\right)=\frac{1}{N}\left|\sum_{n=1}^{N}\exp\left(-j\frac{2\pi}{\lambda}\left(-\delta_{\beta}y_{n}-\frac{2}{\lambda}\delta_{\alpha}y_{n}^{2}\right)\right)\right|. \tag{60}\] Since \(y_{N-n+1}=-y_{n}\), (60) can be written as \[f\left(\delta_{\alpha},-\delta_{\beta}\right)=\frac{1}{N}\left|\sum_{n=1}^{N}\exp\left(-j\frac{2\pi}{\lambda}\left(\delta_{\beta}y_{N-n+1}-\frac{2}{\lambda}\delta_{\alpha}y_{N-n+1}^{2}\right)\right)\right|, \tag{61}\] which indicates that \(f\left(\delta_{\alpha},\delta_{\beta}\right)=f\left(\delta_{\alpha},-\delta_{\beta}\right)\). It is evident that \(f(\delta_{\alpha},\delta_{\beta})\!=\!f(-\delta_{\alpha},-\delta_{\beta})\) by central symmetry. Therefore, we can deduce that \(f(\delta_{\alpha},\delta_{\beta})\!=\!f(-\delta_{\alpha},\delta_{\beta})\). The proof of Property 2 is thus completed. Fig. 11: CDF of the codeword quantization correlation in the UPA channel, with \(N=16\times 16\) and \(f=100\,\mathrm{GHz}\). Fig. 12: Achievable rate against SNR for several codebooks in the UPA channel, with \(N=16\times 16\) and \(f=100\,\mathrm{GHz}\). ## Appendix C proof of property 3 According to (25), the quantization performance of a codeword is always related to the quantization center \((\psi_{s},\varphi_{s},\rho_{s})\). With the same displacements \((\delta_{\psi},\delta_{\varphi},\delta_{\rho})\), the quantization performance of two codewords respectively pointing to \((\psi_{s},\varphi_{s},\rho_{s})\) and \((\psi_{s^{\prime}},\varphi_{s^{\prime}},\rho_{s^{\prime}})\) is different, that is, \(f(\psi_{s},\varphi_{s},\rho_{s};\delta_{\psi},\delta_{\varphi},\delta_{\rho})\neq f(\psi_{s^{\prime}},\varphi_{s^{\prime}},\rho_{s^{\prime}};\delta_{\psi},\delta_{\varphi},\delta_{\rho})\). In this way, Property 3 is proved. ## Appendix D proof of property 4 Replace \(\rho_{s}\) with \(\rho_{s^{\prime}}\), which satisfies \(\rho_{q}-\rho_{s^{\prime}}=-\delta_{\rho_{s}}\). Then, the phase of the \((mN+n)\)-th exponential term in (25) can be calculated as \[\Upsilon_{(m,n)}(\psi_{s},\varphi_{s},\rho_{s};\delta_{\psi_{s}},\delta_{\varphi_{s}},-\delta_{\rho_{s}})=\Upsilon_{(m,n)}(\psi_{s},\varphi_{s},\rho_{s};\delta_{\psi_{s}},\delta_{\varphi_{s}},\delta_{\rho_{s}})+\iota_{(m,n)}^{(\rho)}, \tag{62}\] where \(\iota_{(m,n)}^{(\rho)}\) can be calculated as \[\iota_{(m,n)}^{(\rho)}=\frac{4\delta_{\rho_{s}}\left(1-\psi_{s}+\delta_{\psi_{s}}\right)}{\lambda^{2}}x_{m}^{2}+\frac{2\delta_{\rho_{s}}\left(1-\varphi_{s}+\delta_{\varphi_{s}}\right)}{\lambda^{2}}y_{n}^{2}-\frac{2\delta_{\rho_{s}}\left(\psi_{s}-\delta_{\psi_{s}}\right)\left(\varphi_{s}-\delta_{\varphi_{s}}\right)}{\lambda^{2}}x_{m}y_{n}. \tag{63}\] Therefore, we can obtain that \(f\left(\psi_{s},\varphi_{s},\rho_{s};\delta_{\psi_{s}},\delta_{\varphi_{s}},\delta_{\rho_{s}}\right)\neq f\left(\psi_{s},\varphi_{s},\rho_{s};\delta_{\psi_{s}},\delta_{\varphi_{s}},-\delta_{\rho_{s}}\right)\). Using the same method, Property 4 can be proved.
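To close, a small numerical sanity check (not part of the paper) of the symmetry established in Appendix B: with antenna coordinates satisfying \(y_{N-n+1}=-y_{n}\), the correlation of Eq. (59) is even in \(\delta_{\beta}\) and in \(\delta_{\alpha}\). The array size, antenna spacing, and displacement values below are arbitrary assumptions (a 100 GHz carrier with half-wavelength spacing).

```python
import numpy as np

def corr(delta_alpha, delta_beta, N=64, lam=3e-3, d=1.5e-3):
    """|1/N * sum_n exp(-j*2*pi/lam*(d_beta*y_n - (2/lam)*d_alpha*y_n**2))| from
    Eq. (59); y_n are antenna coordinates symmetric about the array centre, so
    y_{N-n+1} = -y_n."""
    n = np.arange(1, N + 1)
    y = (n - (N + 1) / 2) * d
    phase = -1j * 2 * np.pi / lam * (delta_beta * y - 2 / lam * delta_alpha * y ** 2)
    return np.abs(np.exp(phase).sum()) / N

da, db = 1.7e-4, 2.3e-3                            # arbitrary displacements
print(np.isclose(corr(da, db), corr(da, -db)))     # True: f is even in delta_beta
print(np.isclose(corr(da, db), corr(-da, db)))     # True: f is even in delta_alpha
```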
2305.18459
Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning
Diffusion models have demonstrated highly-expressive generative capabilities in vision and NLP. Recent studies in reinforcement learning (RL) have shown that diffusion models are also powerful in modeling complex policies or trajectories in offline datasets. However, these works have been limited to single-task settings where a generalist agent capable of addressing multi-task predicaments is absent. In this paper, we aim to investigate the effectiveness of a single diffusion model in modeling large-scale multi-task offline data, which can be challenging due to diverse and multimodal data distribution. Specifically, we propose Multi-Task Diffusion Model (\textsc{MTDiff}), a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis in multi-task offline settings. \textsc{MTDiff} leverages vast amounts of knowledge available in multi-task data and performs implicit knowledge sharing among tasks. For generative planning, we find \textsc{MTDiff} outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D. For data synthesis, \textsc{MTDiff} generates high-quality data for testing tasks given a single demonstration as a prompt, which enhances the low-quality datasets for even unseen tasks.
Haoran He, Chenjia Bai, Kang Xu, Zhuoran Yang, Weinan Zhang, Dong Wang, Bin Zhao, Xuelong Li
2023-05-29T05:20:38Z
http://arxiv.org/abs/2305.18459v2
# Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning ###### Abstract Diffusion models have demonstrated highly-expressive generative capabilities in vision and NLP. Recent studies in reinforcement learning (RL) have shown that diffusion models are also powerful in modeling complex policies or trajectories in offline datasets. However, these works have been limited to single-task settings where a generalist agent capable of addressing multi-task predicaments is absent. In this paper, we aim to investigate the effectiveness of a single diffusion model in modeling large-scale multi-task offline data, which can be challenging due to diverse and multimodal data distribution. Specifically, we propose Multi-Task Diffusion Model (MTDiff), a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis in multi-task offline settings. MTDiff leverages vast amounts of knowledge available in multi-task data and performs implicit knowledge sharing among tasks. For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D. For data synthesis, MTDiff generates high-quality data for testing tasks given a single demonstration as a prompt, which enhances the low-quality datasets for even unseen tasks. ## 1 Introduction The high-capacity generative models trained on large, diverse datasets have demonstrated remarkable success across vision and language tasks. An impressive and even preternatural ability of these models, e.g. large language models (LLMs), is that the learned model can generalize among different tasks by simply conditioning the model on instructions or prompts [48; 45; 7; 12; 59; 41; 65]. The success of LLMs and vision models inspires us to utilize the recent generative model to learn from large-scale offline datasets that include multiple tasks for generalized decision-making in reinforcement learning (RL). Thus far, recent attempts in offline decision-making take advantage of the generative capability of diffusion models [53; 20] to improve long-term planning [21; 2] or enhance the expressiveness of policies [62; 11; 8]. However, these works are limited to small-scale datasets and single-task settings where broad generalization and general-purpose policies are not expected. In multi-task offline RL which considers learning a single model to solve multi-task problems, the dataset often contains noisy, multimodal, and long-horizon trajectories collected by various policies across tasks and with various qualities, which makes it more challenging to learn policies with broad generalization and transferable capabilities. Gato [46] and other generalized agents [29; 64] take transformer-based architecture [61] via sequence modeling to solve multi-task problems, while they are highly dependent on the optimality of the datasets and are expensive to train due to the huge number of parameters. To address the above challenges, we propose a novel diffusion model to further explore its generalizability in a multi-task setting. We formulate the learning process from multi-task data as a denoising problem, which benefits the modeling of multimodal data. Meanwhile, we develop a relatively lightweight architecture by using a GPT backbone [44] to model sequential trajectories, which has less computation burden and improved sequential modeling capability than previous U-Net-based [47] diffusion models [21, 32]. 
To disambiguate tasks during training and inference, instead of providing e.g. one-hot task identifiers, we leverage demonstrations as _prompt_ conditioning, which exploits the few-shot abilities of agents [49, 63, 68]. We name our method the Multi-Task Diffusion Model (**MTDiff**). As shown in Figure 1, we investigate two variants of MTDiff for planning and data synthesis to further exploit the utility of diffusion models, denoted as **MTDiff-p** and **MTDiff-s**, respectively. (a) For planning, MTDiff-p learns a prompt embedding to extract the task-relevant representation, and then concatenates the embedding with the trajectory's normalized return and historical states as the _conditions_ of the model. During training, MTDiff-p learns to predict the corresponding future action sequence given the conditions, and we call this process generative planning [77]. During inference, given few-shot prompts and the desired return, MTDiff-p tries to denoise out the optimal action sequence starting from the current state. Surprisingly, MTDiff-p can adapt to unseen tasks given well-constructed prompts that contain task information. (b) By slightly changing the inputs and training strategy, we can unlock the abilities of our diffusion model for data synthesis. Our insight is that the diffusion model, which compresses informative multi-task knowledge well, is more effective and generalist than previous methods that only utilize single-task data for augmentation [37, 51, 28]. Specifically, MTDiff-s learns to estimate the joint conditional distribution of the full transitions that contain states, actions, and rewards based on the task-oriented prompt. Different from MTDiff-p, MTDiff-s learns to synthesize data from the underlying dynamic environments for each task. Thus MTDiff-s only needs prompt conditioning to identify tasks. We empirically find that MTDiff-s synthesizes high-fidelity data for multiple tasks, including both seen and unseen ones, which can be further utilized for data augmentation to expand the offline dataset and enhance policy performance. To summarize, MTDiff is a diffusion-based method that leverages the multimodal generative ability of diffusion models, the sequential modeling capability of GPT architecture, and the few-shot generalizability of prompt learning for multi-task RL. To the best of our knowledge, we are the first to achieve both effective planning and data synthesis for multi-task RL via diffusion models. Our contributions include: (i) we propose MTDiff, a novel GPT-based diffusion model that illustrates the supreme effectiveness in multi-task trajectory modeling for both planning and data synthesis; (ii) we incorporate prompt learning into the diffusion framework to learn to generalize across different tasks and even adapt to unseen tasks; (iii) our experiments on Meta-World and Maze2D benchmarks demonstrate that MTDiff is an effective planner to solve the multi-task problem, and also a powerful data synthesizer to augment offline datasets in the seen or unseen tasks. Figure 1: Overall architecture of MTDiff. Different colors represent different tasks. \(S\), \(A\) and \(R\) denote the state sequence, action sequence, and reward sequence from multi-task data, respectively. \(S_{\mathrm{prev}}\) and \(R_{\tau}\) represent historical states and normalized return. 
Preliminaries ### Reinforcement Learning MDP and Multi-task MDP.A Markov Decision Process (MDP) is defined by a tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\mu,\gamma)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) is the transition function, \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the reward function for any transition, \(\gamma\in(0,1]\) is a discount factor, and \(\mu\) is the initial state distribution. At each timestep \(t\), the agent chooses an action \(a_{t}\) by following the policy \(\pi:\mathcal{S}\rightarrow\Delta_{\mathcal{A}}\). Then the agent obtains the next state \(s_{t+1}\) and receives a scalar reward \(r_{t}\). In single-task RL, the goal is to learn a policy \(\pi^{*}=\operatorname*{arg\,max}_{\pi}\mathbb{E}_{a_{t}\sim\pi}\big{[}\sum_{t =0}^{\infty}\gamma^{t}r_{t}\big{]}\) by maximizing the expected cumulative reward of the corresponding task. In a multi-task setting, different tasks can have different reward functions, state spaces and transition functions. We consider all tasks to share the same action space with the same embodied agent. Given a specific task \(\mathcal{T}\sim p(\mathcal{T})\), a task-specified MDP can be defined as \((\mathcal{S}^{\mathcal{T}},\mathcal{A},\mathcal{P}^{\mathcal{T}},\mathcal{R} ^{\mathcal{T}},\mu^{\mathcal{T}},\gamma)\). Instead of solving a single MDP, the goal of multi-task RL is to find an optimal policy that maximizes expected return over all the tasks: \(\pi^{*}=\operatorname*{arg\,max}_{\pi}\mathbb{E}_{\mathcal{T}\sim p(\mathcal{ T})}\mathbb{E}_{a_{t}\sim\pi^{\mathcal{T}}}\big{[}\sum_{t=0}^{\infty}\gamma^{t}r_{t }^{\mathcal{T}}\big{]}\). Multi-Task Offline Decision-Making.In offline decision-making, the policy is learned from a static dataset of transitions \(\{(s_{j},a_{j},s^{\prime}_{j},r_{j})\}_{j=1}^{N}\) collected by an unknown behavior policy \(\pi_{\beta}\)[30]. In the multi-task offline RL setting, the dataset \(\mathcal{D}\) is partitioned into per-task subsets as \(\mathcal{D}=\cup_{i=1}^{N}\mathcal{D}_{i}\), where \(\mathcal{D}_{i}\) consists of experiences from task \(\mathcal{T}_{i}\). The key issue of RL in the offline setting is the distribution shift problem caused by temporal-difference (TD) learning. In our work, we extend the idea of Decision Diffuser [2] by considering multi-task policy learning as a conditional generative process without fitting a value function. The insight is to take advantage of the powerful distribution modeling ability of diffusion models for multi-task data, avoiding facing the risk of distribution shift. Offline RL learns policies from a static dataset, which makes the quality and diversity of the dataset significant [15]. One can perform data perturbation [28] to up-sample the offline dataset. Alternatively, we synthesize new transitions \((s,a,s^{\prime},r)\) by capturing the underlying MDP of a given task via diffusion models, which expands the original dataset and leads to significant policy improvement. ### Diffusion Models We employ diffusion models to learn from multi-task data \(\mathcal{D}=\cup_{i=1}^{N}\mathcal{D}_{i}\) in this paper. With \(\tau\) the sampled trajectory from \(\mathcal{D}\), we denote \(\mathbf{x}_{k}(\tau)\) as the \(k\)-step denoised output of the diffusion model, and \(\mathbf{y}(\tau)\) is the condition which represents the task attributes and the trajectory's optimality (e.g., returns). 
A forward diffusion chain gradually adds noise to the data \(\mathbf{x}_{0}(\tau)\sim q(\mathbf{x}(\tau))\) in \(K\) steps with a pre-defined variance schedule \(\beta_{k}\), which can be expressed as \[q(\mathbf{x}_{k}(\tau)|\mathbf{x}_{k-1}(\tau)):=\mathcal{N}(\mathbf{x}_{k}(\tau);\sqrt{1- \beta_{k}}\mathbf{x}_{k-1}(\tau),\beta_{k}\mathbf{I}). \tag{1}\] In this paper, we adopt Variance Preserving (VP) beta schedule [66] and define \(\beta_{k}=1-\exp\big{(}-\beta_{min}(\frac{1}{K})-0.5(\beta_{\max}-\beta_{\min} )\frac{2k-1}{K^{2}}\big{)},\) where \(\beta_{\max}=10\) and \(\beta_{\min}=0.1\) are constants. A trainable reverse diffusion chain, constructed as \(p_{\theta}(\mathbf{x}_{k-1}(\tau)|\mathbf{x}_{k}(\tau),\mathbf{y}(\tau)):=\mathcal{N}(\mathbf{ x}_{k-1}(\tau)|\mu_{\theta}(\mathbf{x}_{k}(\tau),\mathbf{y}(\tau),k),\Sigma_{k})\), can be optimized by a simplified surrogate loss [20]: \[\mathcal{L}_{\mathrm{denoise}}:=\mathbb{E}_{k\sim\mathcal{U}(1,K),\mathbf{x}_{0} (\tau)\sim q,\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})}\big{[}\big{\|}\epsilon- \epsilon_{\theta}(\mathbf{x}_{k}(\tau),\mathbf{y}(\tau),k)\big{\|}^{2}\big{]}, \tag{2}\] where \(\epsilon_{\theta}\) parameterized by a deep neural network is trained to predict the noise \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) added to the dataset sample \(\mathbf{x}_{0}(\tau)\) to produce \(\mathbf{x}_{k}(\tau)\). By setting \(\alpha_{k}:=1-\beta_{k}\) and \(\bar{\alpha}_{k}:=\prod_{s=1}^{k}\alpha_{s}\), we obtain \[\mathbf{x}_{k-1}(\tau)\!\leftarrow\!\frac{1}{\sqrt{\alpha_{k}}}\left(\mathbf{x}_{k}( \tau)\!-\!\frac{\beta_{k}}{\sqrt{1-\bar{\alpha}_{k}}}\epsilon_{\theta}(\mathbf{x}_ {k}(\tau),\mathbf{y}(\tau),k)\right)\!+\!\sqrt{\beta_{k}}\sigma,\,\sigma\! \sim\!\mathcal{N}(\mathbf{0},\mathbf{I}),\text{ for }k=\{K,...,1\}.\] Classifier-free guidance [19] aims to learn the conditional distribution \(q(\mathbf{x}(\tau)|\mathbf{y}(\tau))\) without separately training a classifier. In the training stage, this method needs to learn both a conditional \(\epsilon_{\theta}(\mathbf{x}_{k}(\tau),\mathbf{y}(\tau),k)\) and an unconditional \(\epsilon_{\theta}(\mathbf{x}_{k}(\tau),\varnothing,k)\) model, where \(\mathbf{y}(\tau)\) is dropped out. Then the perturbed noise \(\epsilon_{\theta}(\mathbf{x}_{k}(\tau),\varnothing,k)+\alpha(\epsilon_{\theta}(\mathbf{ x}_{k}(\tau),\mathbf{y}(\tau),k)-\epsilon_{\theta}(\mathbf{x}_{k}(\tau), \varnothing,k))\) is used to generate samples latter, where \(\alpha\) can be recognized as the guidance scale. ## 3 Methodology ### Diffusion Formulation To capture the multimodal distribution of the trajectories sampled from multiple MDPs, we formulate the multi-task trajectory modeling as a conditional generative problem via diffusion models: \[\max_{\theta}\mathbb{E}_{\tau\sim\cup_{i}\mathcal{D}_{i}}\big{[}\log p_{\theta}( \boldsymbol{x}_{0}(\tau)\,\big{|}\,\boldsymbol{y}(\tau)\big{]}, \tag{3}\] where \(\boldsymbol{x}_{0}(\tau)\) is the generated desired sequence and \(\boldsymbol{y}(\tau)\) is the condition. \(\boldsymbol{x}_{0}(\tau)\) will then be used for generative planning or data synthesis through conditional reverse denoising process \(p_{\theta}\) for specific tasks. Maximizing Eq. (3) can be approximated by maximizing a variational lower bound [20]. In terms of different inputs and outputs in generative planning and data synthesis, \(\boldsymbol{x}(\tau)\) can be represented in different formats. We consider two choices to formulate \(\boldsymbol{x}(\tau)\) in MTDiff-p and MTDiff-s, respectively. 
(i) For **MTDiff-p**, \(\boldsymbol{x}(\tau)\) represents the action sequence for planning. We model the action sequence defined as: \[\boldsymbol{x}_{k}^{p}(\tau):=(a_{t},a_{t+1},...,a_{t+H-1})_{k}, \tag{4}\] with the context condition as \[\boldsymbol{y}^{p}(\tau):=\big{[}\boldsymbol{y}^{\prime}(\tau),R(\tau)\big{]},\hskip 14.226378pt\boldsymbol{y}^{\prime}(\tau):=(Z,s_{t-L+1},...,s_{t}), \tag{5}\] where \(t\), \(H\), \(R(\tau)\) and \(L\) denote the time visited in trajectory \(\tau\), the length of the input sequence \(\boldsymbol{x}\), the normalized cumulative return under \(\tau\) and the length of the observed state history, respectively. \(Z\) is the task-relevant information as _prompt_. We use \(\boldsymbol{y}^{\prime}(\tau)\) as an ordinary condition that is injected into the model during both training and testing, while considering \(R(\tau)\) as the classifier-free guidance to obtain the optimal action sequence for a given task. (ii) For data synthesis in **MTDiff-s**, the inputs and outputs become the transition sequence that contains states, actions, and rewards, and then the outputs are utilized for data augmentation. We define the transition sequence as: \[\boldsymbol{x}_{k}^{s}(\tau):=\begin{bmatrix}s_{t}&s_{t+1}&\cdots&s_{t+H-1}\\ a_{t}&a_{t+1}&\cdots&a_{t+H-1}\\ r_{t}&r_{t+1}&\cdots&r_{t+H-1}\end{bmatrix}, \tag{6}\] with the condition: \[\boldsymbol{y}^{s}(\tau):=[Z], \tag{7}\] where \(\boldsymbol{y}^{s}(\tau)\) takes the same conditional approach as \(y^{\prime}(\tau)\). Figure 2 illustrates the reverse denoising process of MTDiff-p learned on multi-task datasets collected in Meta-World [72]. The result demonstrates that our diffusion model successfully distinguishes different tasks and finally generates the desired \(\boldsymbol{x}_{0}(\tau)\). We illustrate the data distribution of \(\boldsymbol{x}_{0}(\tau)\) in a 2D space with dimensional reduction via T-SNE [60], as well as the rendered states after executing the action sequence. The result shows that, with different task-specific prompts as conditions, the generated planning sequence for a specific task will be separate from sequences of other tasks, which verifies that MTDiff can learn the distribution of multimodal trajectories based on \(\boldsymbol{y}(\tau)\). Figure 2: An example of the denoising process of MTDiff. We choose 4 tasks for visualization and \(\boldsymbol{x}_{K}(\tau)\) is sampled from the Gaussian noise for each task. Since different tasks require different manipulation skills, the corresponding action sequences are dispersed in the embedding space. Our model learns such properties and generates task-specific sequences based on task-relevant prompts. ### Prompt, Training and Sampling In multi-task RL and LLM-driven decision-making, existing works use one-hot task identifiers [55; 73] or language descriptions [1; 6] as conditions in multi-task training. Nevertheless, we argue that the one-hot encoding for each task [22; 14] suffices for learning a repertoire of training tasks while cannot generalize to novel tasks since it does not leverage semantic similarity between tasks. In addition, the language descriptions [50; 1; 6; 40; 39] of tasks require large amounts of human labor to annotate and encounter challenges related to ambiguity [71]. In MTDiff, we use expert demonstrations consisting of a few trajectory segments to construct more expressive prompts in multi-task settings. 
The incorporation of prompt learning improves the model's ability for generalization and extracting task-relevant information to facilitate both generative planning and data synthesis. We remark that a similar method has also been used in PromptDT [68]. Nonetheless, how such a prompt contributes within a diffusion-based framework remains to be investigated. Specifically, we formulate the task-specific label \(Z\) as trajectory prompts that contain states and actions: \[Z:=\begin{bmatrix}s_{i}^{*}&s_{i+1}^{*}&\cdots&s_{i+J-1}^{*}\\ a_{i}^{*}&a_{i+1}^{*}&\cdots&a_{i+J-1}^{*}\end{bmatrix}, \tag{8}\] where each element with star-script is associated with a trajectory prompt, and \(J\) is the number of environment steps for identifying tasks. With the prompts as conditions, MTDiff can specify the task by implicitly capturing the transition model and the reward function stored in the prompts for better generalization to unseen tasks without additional parameter-tuning. In terms of decision-making in MTDiff-p, we aim to devise the optimal behaviors that maximize return. Our approach is to utilize the diffusion model for action planning via classifier-free guidance [19]. Formally, an optimal action sequence \(\mathbf{x}_{0}^{p}(\tau)\) is sampled by starting with Gaussian noise \(\mathbf{x}_{K}(\tau)\) and refining \(\mathbf{x}_{k}^{p}(\tau)\) into \(\mathbf{x}_{k-1}^{p}(\tau)\) at each intermediate timestep with the perturbed noise: \[\epsilon_{\theta}\big{(}\mathbf{x}_{k}^{p}(\tau),\mathbf{y}^{\prime}(\tau),\varnothing,k\big{)}+\alpha\big{(}\epsilon_{\theta}(\mathbf{x}_{k}^{p}(\tau),\mathbf{y}^{\prime}( \tau),R(\tau),k)-\epsilon_{\theta}(\mathbf{x}_{k}^{p}(\tau),\mathbf{y}^{\prime}(\tau), \varnothing,k)\big{)}, \tag{9}\] where \(\mathbf{y}^{\prime}(\tau)\) is defined in Eq. (5). \(R(\tau)\) is the normalized return of \(\tau\), and \(\alpha\) is a hyper-parameter that seeks to augment and extract the best portions of trajectories in the dataset with high return. During training, we follow DDPM [20] as well as classifier-free guidance [19] to train the reverse diffusion process \(p_{\theta}\), parameterized through the noise model \(\epsilon_{\theta}\), with the following loss: \[\mathcal{L}^{p}(\theta):=\mathbb{E}_{k\sim\mathcal{U}(1,K),\mathbf{x}_{0}(\tau) \sim q,e\sim\mathcal{N}(\mathbf{0},\mathbf{I}),\beta\sim\mathrm{Bern}(p)} \big{[}\big{\|}\epsilon-\epsilon_{\theta}\big{(}\mathbf{x}_{k}^{p}(\tau),\mathbf{y}^{ \prime}(\tau),(1-\beta)R(\tau)+\beta\varnothing,k\big{)}\big{\|}^{2}\big{]}. \tag{10}\] Note that with probability \(p\) sampled from a Bernoulli distribution, we ignore the conditioning return \(R(\tau)\). During inference, we adopt the _low-temperature sampling_ technique [2] to produce high-likelihood sequences. We sample \(\mathbf{x}_{k-1}^{p}(\tau)\sim\mathcal{N}(\mu_{\theta}(\mathbf{x}_{k-1}^{p},\mathbf{y}^{ \prime}(\tau),R_{\max}(\tau),k-1),\beta\Sigma_{k-1})\), where the variance is reduced by \(\beta\in[0,1)\) for generating action sequences with higher optimality. For MTDiff-s, since the model aims to synthesize diverse trajectories for data augmentation, which does not need to take any guidance like \(R(\tau)\), we have the following loss: \[\mathcal{L}^{s}(\theta):=\mathbb{E}_{k\sim\mathcal{U}(1,K),\mathbf{x}_{0}(\tau) \sim q,e\sim\mathcal{N}(\mathbf{0},\mathbf{I})}\big{[}\big{\|}\epsilon- \epsilon_{\theta}(\mathbf{x}_{k}^{s}(\tau),\mathbf{y}^{s}(\tau),k)\big{\|}^{2}\big{]}. 
\tag{11}\] We sample \(\mathbf{x}_{k-1}^{\star}(\tau)\sim\mathcal{N}(\mu_{\theta}(\mathbf{x}_{k-1}^{\star}, \mathbf{y}^{s}(\tau),k-1),\Sigma_{k-1})\). The evaluation process is given in Fig. 3. ### Architecture Design Notably, the emergence of Transformer [61] and its applications on generative modeling [43; 5; 46; 6] provides a promising solution to capture interactions between modalities of different tasks. Naturally, instead of U-Net [47] which is commonly used in previous single-task diffusion RL works [21; 2; 32], we parameterize \(\epsilon_{\theta}\) with a novel transformer architecture. We adopt GPT2 [44] architecture for implementation, which excels in sequential modeling and offers a favorable balance between performance and computational efficiency. Our key insight is to train the diffusion model in a unified manner to model multi-task data, treating different inputs as tokens in a unified architecture, which is expected to enhance the efficiency of diverse information exploitation. As shown in Figure 3, first, different raw inputs \(x\) are embedded into embeddings \(h\) of the same size \(\mathbf{d}\) via separate MLPs \(f\), which can be expressed as: \[h_{P}=f_{P}(x^{\mathrm{prompt}}),h_{Ti}=f_{Ti}(x^{\mathrm{timestep}}),\quad \rhd\text{ for prompt and diffusion timestep}\] \[h_{T}^{\mathrm{s}}=f_{Tr}(x^{\mathrm{transitions}}),\quad \rhd\text{ for MTDiff-s}\] \[h_{A}^{\mathrm{p}}=f_{A}(x^{\mathrm{actions}}),h_{H}^{\mathrm{p }}=f_{H}(x^{\mathrm{history}}),h_{R}^{\mathrm{p}}=f_{R}(x^{\mathrm{return}}). \quad\rhd\text{ for MTDiff-p}\] Then, the embeddings \(h_{P}\) and \(h_{Ti}\) are prepended as follows to formulate input tokens for MTDiff-p and MTDiff-s, respectively: \[h_{\mathrm{tokens}}^{\mathrm{p}}=\mathrm{LN}(h_{Ti}\times[h_{P},h_{Ti}^{ \mathrm{p}},h_{R}^{\mathrm{p}},h_{H}^{\mathrm{p}},h_{A}^{\mathrm{p}}]+h_{R}^ {\mathrm{p}}+E^{\mathrm{pos}}),h_{\mathrm{tokens}}^{\mathrm{s}}=\mathrm{LN}( h_{Ti}\times[h_{P},h_{Ti},h_{T}^{\mathrm{s}}]+E^{\mathrm{pos}}),\] where \(E^{\mathrm{pos}}\) is the positional embedding, and \(\mathrm{LN}\) denotes layer normalization [3] for stabilizing training. In our implementation, we strengthen the condition of the stacked inputs through multiplication with the diffusion timestep \(h_{Ti}\) and addition with the return \(h_{R}^{\mathrm{p}}\). GPT2 is a decoder-only transformer that incorporates a self-attention mechanism to capture dependencies between different positions in the input sequence. We employ the GPT2 architecture as a trainable backbone in MTDiff to handle sequential inputs. It outputs an updated representation as: \[h_{\mathrm{out}}^{\mathrm{p}}=\mathrm{transformer}(h_{\mathrm{tokens}}^{ \mathrm{p}}),\quad h_{\mathrm{out}}^{\mathrm{s}}=\mathrm{transformer}(h_{ \mathrm{tokens}}^{\mathrm{s}}).\] Finally, given the output representation, we use a prediction head consisting of fully connected layers to predict the corresponding noise at diffusion timestep \(k\). Notice that the predicted noise shares the same dimensional space as the original inputs, which differs from the representation size \(\mathbf{d}\). This noise is used in the reverse denoising process \(p_{\theta}\) during inference. We summarize the details of the training process, architecture and hyperparameters used in MTDiff in Appendix A. ## 4 Related Work **Diffusion Models in RL.** Diffusion models have emerged as a powerful family of deep generative models with a record-breaking performance in many applications across vision and language [48; 45; 17; 31]. 
Recent works in RL have demonstrated the capability of diffusion models to learn the multimodal distribution of offline policies [62; 42; 11] or human behaviors [8]. Other works formulate the sequential decision-making problem as a conditional generative process [2] and learn to generate trajectories satisfying conditioned constraints. However, these works are limited to single-task settings, while we further study the trajectory modeling and generalization problems of diffusion models in multi-task settings. **Multi-Task RL and Few-Shot RL.** Multi-task RL aims to learn a shared policy for a diverse set of tasks. The main challenge of multi-task RL is the conflicting gradients among different tasks, and previous online RL works address this problem via gradient surgery [73], conflict-averse learning [33], and parameter composition [69; 55]. Instead, MTDiff addresses such a problem in an offline setting through a conditional generative process via a novel transformer architecture. Previous Decision-Transformer (DT)-based methods [46; 29; 71] that consider multi-task problems mainly rely on expert trajectories and entail substantial training expenses. Scaled-QL [26] adopts separate networks for different tasks and is hard to generalize to new tasks. Instead of focusing on the performance of training tasks in multi-task RL, few-shot RL aims to improve the generalizability in novel tasks based on the learned multi-task knowledge. Nevertheless, these methods need additional context encoders [76; 79] or gradient descents in the fine-tuning stage [56; 57; 29]. In contrast, we use prompts for few-shot generalization without additional parameter-tuning. Figure 3: Model architecture of MTDiff, which treats different inputs as tokens in a unified architecture. The two key designs are (i) the trainable GPT2 Transformer which enhances sequential modeling, and (ii) the MLPs and prediction head which enable efficient training. **Data Augmentation for RL.** Data augmentation [13; 38] has been verified to be effective in RL. Previous methods incorporate various data augmentations (e.g. adding noise, random translation) on observations for visual-based RL [70; 28; 51; 27], which ensure the agents learn on multiple views of the same observation. Differently, we focus on data augmentation via synthesizing new experiences rather than perturbing the original ones. Recent works [75; 10] consider augmenting the observations of robotic control using a text-guided diffusion model whilst maintaining the same action, which differs from our approach that can synthesize novel action and reward labels. The recently proposed SER [37] is closely related to our method by generating transitions of trained tasks via a diffusion model. However, SER is studied in single-task settings, while we investigate whether a diffusion model can accommodate all knowledge of multi-task datasets and augment the data for novel tasks. ## 5 Experiments In this section, we conduct extensive experiments to answer the following questions: (1) How does MTDiff-p compare to other offline and online baselines in the multi-task regime? (2) Does MTDiff-s synthesize high-fidelity data and bring policy improvement? (3) How is MTDiff-s compared with other augmentation methods for single-task RL? (4) Does the synthetic data of MTDiff-s match the original data distribution? (5) Can both MTDiff-p and MTDiff-s generalize to unseen tasks?
### Environments and Baselines **Meta-World Tasks.** The Meta-World benchmark [72] contains 50 qualitatively-distinct manipulation tasks. The tasks share similar dynamics and require a Sawyer robot to interact with various objects with different shapes, joints, and connectivity. In this setup, the state spaces and reward functions of different tasks are different since the robot is manipulating different objects with different objectives. At each timestep, the Sawyer robot receives a 4-dimensional fine-grained action, representing the 3D position movements of the end effector and the variation of gripper openness. The original Meta-World environment is configured with a fixed goal, which is more restrictive and less realistic in robotic learning. Following recent works [69; 55], we extend all the tasks to a random-goal setting and refer to it as MT50-rand. We use the average success rate over all tasks as the evaluation metric. By training a SAC [18] agent for each task in isolation, we utilize the experience collected in the replay buffer as our offline dataset. Similar to [26], we consider two different dataset compositions: (i) **Near-optimal** dataset consisting of the experience (100M transitions) from random to expert (convergence) in SAC-Replay, and (ii) **Sub-optimal** dataset consisting of the initial 50% of the trajectories (50M transitions) from the replay buffer for each task, where the proportion of expert data is much lower. We summarize more details about the dataset in Appendix G. **Maze2D Tasks.** Maze2D [15] is a navigation task that requires an agent to traverse from a randomly designated location to a fixed goal in a 2D map. The reward is 1 if the agent succeeds and 0 otherwise. Maze2D evaluates the ability of RL algorithms to stitch together previously collected sub-trajectories, which helps the agent find the shortest path to evaluation goals. We use the agent's scores as the evaluation metric. The offline dataset is collected by selecting random goal locations and using a planner to generate sequences of waypoints that are followed by a PD controller. **Baselines.** We compare our proposed MTDiff (MTDiff-p and MTDiff-s) with the following baselines. Each baseline has the same batch size and training steps as MTDiff. For **MTDiff-p**, we have the following baselines: (i) **PromptDT.** PromptDT [68], built on Decision-Transformer (DT) [9], aims to learn from multi-task data and generalize the policy to unseen tasks. PromptDT generates actions based on the trajectory prompts and reward-to-go. We use the same GPT2-network as in MTDiff-p. The main difference between our method and PromptDT is that we employ diffusion models for generative planning. (ii) **MTDT.** We extend the DT architecture [9] to learn from multi-task data. Specifically, MTDT concatenates an embedding \(z\) and a state \(s\) as the input tokens, where \(z\) is the encoding of the task ID. In evaluation, the reward-to-go and task ID are fed into the Transformer to provide task-specific information. MTDT also uses the same GPT2-network as in MTDiff-p. Compared to MTDT, our model incorporates prompts and the diffusion framework to learn from the multi-task data. (iii) **MTCQL.** Following scaled-QL [26], we extend CQL [25] with multi-head critic networks and a task-ID conditioned actor for multi-task policy learning. (iv) **MTIQL.** We extend IQL [24] for multi-task learning using a modification similar to MTCQL.
The TD-based baselines (i.e., MTCQL and MTIQL) are used to demonstrate the effectiveness of conditional generative modeling for multi-task planning. (v) **MTBC.** We extend behavior cloning (BC) to multi-task offline policy learning via network scaling and a task-ID conditioned actor similar to those of MTCQL and MTIQL. As for **MTDiff-s**, we compare it with two baselines that perform direct data augmentation in offline RL. (i) **RAD.** We adopt random amplitude scaling [28], which multiplies the states by a random variable, i.e., \(s^{\prime}=s\times z\), where \(z\sim\mathrm{Uniform}[\alpha,\beta]\). This augmentation technique has been verified to be effective for state-based RL. (ii) **S4RL.** We adopt adversarial state training [51] by taking gradients with respect to the value function to obtain a new state, i.e. \(s^{\prime}\gets s+\epsilon\nabla_{s}\mathrm{J}_{Q}(\pi(s))\), where \(\mathrm{J}_{Q}\) is the policy evaluation update performed via a \(Q\) function, and \(\epsilon\) is the size of the gradient steps. We summarize the details of all the baselines in Appendix B. ### Result Comparison for Planning **How does MTDiff-p compare to baselines in the multi-task regime?** For a fair comparison, we add MTDiff-p-onehot as a variant of MTDiff-p by replacing the prompt with a one-hot task-ID, which is used in the baselines except for PromptDT. The first action generated by MTDiff-p is used to interact with the environment. According to Tab. 1, we have the following key observations. (i) Our method achieves better performance than the baselines in both near-optimal and sub-optimal settings. For near-optimal datasets, MTDiff-p and MTDiff-p-onehot achieve about a 60% success rate, significantly outperforming other methods and performing comparably with MTBC. However, the performance of MTBC decreases a lot on sub-optimal datasets. BC struggles to handle the conflicting behaviors in experiences sampled by a mixture of policies with different returns, which has also been verified in previous offline imitation methods [34; 67]. In contrast, both MTDiff-p and MTDiff-p-onehot perform the best on sub-optimal datasets. (ii) We compare MTDiff-p with two SOTA multi-task online RL methods, CARE [52] and PaCo [55], which are trained for 100M steps in MT50-rand. MTDiff-p outperforms both of them given the near-optimal dataset, demonstrating the potential of solving multi-task RL problems in an offline setting. (iii) MTDT is limited in distinguishing different tasks with the task ID, while PromptDT performs better, which demonstrates the effect of prompts in multi-task settings. (iv) As for the TD-based baselines, we find that MTCQL almost fails while MTIQL performs well. We hypothesize that because MTCQL penalizes the OOD actions for each task, it hinders the learning of the other tasks, since different tasks can choose remarkably different actions when facing similar states. In contrast, IQL learns a value function without querying the values of OOD actions. We remark that MTDiff-p based on the GPT network outperforms a U-Net variant of similar model size, and detailed results are given in Appendix C. Overall, MTDiff-p is an effective planner in multi-task settings, including sub-optimal datasets where we must stitch useful segments of suboptimal trajectories, and near-optimal datasets where we mimic the best behaviors. Meanwhile, we argue that although MTDiff-p-onehot performs well, it cannot generalize to unseen tasks without prompts.
\begin{table} \begin{tabular}{c|c c} \hline \hline **Methods** & **Near-optimal** & **Sub-optimal** \\ \hline **CARE**[52] (Online) & \(50.8\pm 1.0\) & \(-\) \\ **PaCo**[55] (Online) & \(57.3\pm 1.3\) & \(-\) \\ \hline **MTDT** & \(20.99\pm 2.66\) & \(20.63\pm 2.21\) \\ **PromptDT** & \(45.68\pm 1.84\) & \(39.76\pm 2.79\) \\ **MTBC** & \(60.39\pm 0.86\) & \(34.53\pm 1.25\) \\ **MTCQL** & \(-\) & \(-\) \\ **MTIQL** & \(56.21\pm 1.39\) & \(43.28\pm 0.90\) \\ \hline **MTDiff-p (ours)** & \(59.53\pm 1.12\) & \(\mathbf{48.67\pm 1.32}\) \\ **MTDiff-p-onehot (ours)** & \(\mathbf{61.32\pm 0.89}\) & \(\mathbf{48.94\pm 0.95}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Average success rate across 3 seeds on MetaWorld-V2 MT50 with random goals (MT50-rand). Each task is evaluated for 50 episodes. Does MTDiff-p generalize to unseen tasks?We further carry out experiments on Maze2D to evaluate the generalizability of MTDiff-p. We select PromptDT as our baseline, as it has demonstrated both competitive performances on training tasks and adaptation ability for unseen tasks [68]. We use 8 different maps for training and one new map for adaptation evaluation. The setup details are given in Appendix D. We evaluate these two methods on both seen maps and an unseen map. The average scores obtained in 8 training maps are referred to Figure 5. To further illustrate the advantages of our method compared to PromptDT, we select one difficult training map and the unseen map for visualization, as shown in Figure 5. According to the visualized path, we find that (1) for seen maps in training, MTDiff-p generates a shorter and smoother path, and (2) for unseen maps, PromptDT fails to obtain a reasonable path while MTDiff-p succeed, which verifies that MTDiff-p can perform few-shot adaptation based on trajectory prompts and the designed architecture. ### Results for Augmentation via Data Synthesis Does MTDiff-s synthesize high-fidelity data and bring policy improvement?We train MTDiff-s on near-optimal datasets from 45 tasks to evaluate its generalizability. We select 3 training tasks and 3 unseen tasks, and measure the policy improvement of offline RL training (i.e., TD3-BC [16]) with data augmentation. For each evaluated task, MTDiff-s synthesizes 2M transitions to expand the original 1M dataset. From the results summarized in Table 2, MTDiff-s can boost the offline performance for all tasks and significantly increases performance by about 180%, 131%, and 161% for _box-close_, _hand-insert_, and _coffee-push_, respectively. How is MTDiff-s compared with other augmentation methods for single-task RL?From Table 2, we find MTDiff-s achieves superior policy improvement in seen tasks compared with previous SOTA augmentation methods (i.e., S4RL and RAD) that are developed in single-task RL. We hypothesize that, by absorbing vast knowledge of multi-task data in training, MTDiff-s can perform implicit data sharing [74] by integrating other tasks' knowledge into data synthesis of the current task. To verify this hypothesis, we select two tasks (i.e. _coffee-push_ and _hand-insert_) to re-train MTDiff-s on the corresponding single-task dataset. We denote this variant as MTDiff-s-single and find MTDiff-s outperforms this variant, as shown in Figure 6. Does MTDiff-s generalize to unseen tasks?We answer this question by conducting offline RL training on the augmented datasets of 3 unseen tasks. 
According to Table 2, MTDiff-s generalizes well and obtains a significant improvement compared to the success rate achieved with the original datasets. MTDiff-s boosts the policy performance by 131%, 180% and 32% for _hand-insert_, _box-close_ and _bin-picking_, respectively. We remark that S4RL performs the best on the two unseen tasks, i.e., _box-close_ and _bin-picking_, since it utilizes the entire datasets to train \(Q\)-functions and obtains the augmented states. Nevertheless, we use much less information (i.e., a single trajectory as the prompt) for augmentation. **Does the synthetic data of MTDiff-s match the original data distribution?** We select 4 tasks and use T-SNE [60] to visualize the distribution of original data and synthetic data. We find that the synthetic data overlap and expand the original data distribution while also keeping consistency with the underlying MDP. The visualization results and further analyses are given in Appendix E. ## 6 Conclusion We propose MTDiff, a diffusion-based effective planner and data synthesizer for multi-task RL. With the trajectory prompt and a unified GPT-based architecture, MTDiff can model multi-task data and generalize to unseen tasks. We show that in the MT50-rand benchmark containing fine-grained manipulation tasks, MTDiff-p generates desirable behaviors for each task via few-shot prompts. By compressing multi-task knowledge in a single model, we demonstrate that MTDiff-s greatly boosts policy performance by augmenting the original offline datasets. In future research, we aim to develop a practical multi-task algorithm for real robots that trades off sampling speed and generation quality. We further discuss the limitations and broader impacts of MTDiff in Appendix F.
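As a schematic illustration of the sampling procedure described in Sections 2.2 and 3.2, the minimal NumPy sketch below combines the VP beta schedule, classifier-free guidance as in Eq. (9), and low-temperature sampling in a single reverse loop. This is not the authors' implementation: the noise model is a stand-in for the trained GPT-based denoiser, and `alpha_guide` and `low_temp` are illustrative values.

```python
import numpy as np

# VP beta schedule from Section 2.2 (beta_min = 0.1, beta_max = 10, K diffusion steps).
def vp_betas(K, beta_min=0.1, beta_max=10.0):
    k = np.arange(1, K + 1)
    return 1.0 - np.exp(-beta_min / K - 0.5 * (beta_max - beta_min) * (2 * k - 1) / K**2)

def guided_sample(eps_model, shape, cond, K=100, alpha_guide=1.2, low_temp=0.5, rng=None):
    """Classifier-free guided reverse diffusion (Eq. (9)); the per-step noise
    variance is scaled by `low_temp` in [0, 1) for low-temperature sampling.
    `eps_model(x, cond, k, with_return)` is assumed to return predicted noise."""
    rng = rng or np.random.default_rng(0)
    betas = vp_betas(K)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)                               # x_K ~ N(0, I)
    for k in range(K - 1, -1, -1):
        eps_uncond = eps_model(x, cond, k, with_return=False)
        eps_cond = eps_model(x, cond, k, with_return=True)
        eps = eps_uncond + alpha_guide * (eps_cond - eps_uncond)  # guidance, Eq. (9)
        x = (x - betas[k] / np.sqrt(1.0 - alpha_bars[k]) * eps) / np.sqrt(alphas[k])
        if k > 0:
            x += np.sqrt(low_temp * betas[k]) * rng.standard_normal(shape)
    return x

# Dummy noise model standing in for the trained denoiser (illustration only).
dummy = lambda x, cond, k, with_return: 0.1 * x
plan = guided_sample(dummy, shape=(32, 4), cond=None)            # 32-step plan of 4-dim actions
print(plan.shape)
```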
2306.16124
Accurate force-field methodology capturing atomic reconstructions in transition metal dichalcogenide moiré systems
In this work, a generalized force-field methodology for the relaxation of large moir\'e heterostructures is proposed. The force-field parameters are optimized to accurately reproduce the structural degrees of freedom of some computationally manageable cells relaxed using density functional theory. The parameters can then be used to handle large moir\'e systems. We specialize to the case of 2H-phased twisted transition-metal dichalcogenide homo- and heterobilayers using a combination of the Stillinger-Weber intralayer- and the Kolmogorov-Crespi interlayer-potential. Force-field parameters are developed for all combinations of MX$_2$ for $\text{M}\in\{\text{Mo},\text{W}\}$ and $\text{X}\in\{\text{S},\text{Se},\text{Te}\}$. The results show agreement within 20 meV in terms of band structure between density functional theory and force-field relaxation. Using the relaxed structures, a simplified and systematic scheme for the extraction of the interlayer moir\'e potential is presented for both R- and H-stacked systems. We show that in-plane and out-of-plane relaxation effects on the moir\'e potential, which is made both deeper and wider after relaxation, are essential. An interpolation based methodology for the calculation of the interlayer binding energy is also proposed. Finally, we show that atomic reconstruction, which is captured by the force-field method, becomes especially prominent for angles below 4-5$^\circ$, when there is no mismatch in lattice constant between layers.
Carl Emil Mørch Nielsen, Miguel da Cruz, Abderrazak Torche, Gabriel Bester
2023-06-28T11:52:31Z
http://arxiv.org/abs/2306.16124v1
Accurate force-field methodology capturing atomic reconstructions in transition metal dichalcogenide moire systems ###### Abstract In this work, a generalized force-field methodology for the relaxation of large moire heterostructures is proposed. The force-field parameters are optimized to accurately reproduce the structural degrees of freedom of some computationally manageable cells relaxed using density functional theory. The parameters can then be used to handle large moire systems. We specialize to the case of 2H-phased twisted transition-metal dichalcogenide homo- and heterobilayers using a combination of the Stillinger-Weber intralayer- and the Kolmogorov-Crespi interlayer-potential. Force-field parameters are developed for all combinations of \(\text{MX}_{2}\) for \(\text{M}\in\{\text{Mo},\text{W}\}\) and \(\text{X}\in\{\text{S},\text{Se},\text{Te}\}\). The results show agreement within 20 meV in terms of band structure between density functional theory and force-field relaxation. Using the relaxed structures, a simplified and systematic scheme for the extraction of the interlayer moire potential is presented for both R- and H-stacked systems. We show that in-plane and out-of-plane relaxation effects on the moire potential, which is made both deeper and wider after relaxation, are essential. An interpolation based methodology for the calculation of the interlayer binding energy is also proposed. Finally, we show that atomic reconstruction, which is captured by the force-field method, becomes especially prominent for angles below 4-5\({}^{\circ}\), when there is no mismatch in lattice constant between layers. ## I Introduction Two dimensional (2D) moire systems are currently an especially attractive playground for new technological applications [1, 2, 3, 4]. Lattice mismatch combined with the twist angle between the constituent layers allows for an ingenious way of external mechanical control of the moire period and thus the resulting electronic properties. Without a doubt, the pioneering discovery of twisted bilayer graphene and its magic angle of \(1.05^{\circ}\)[5] was the major driving force toward the study of 2D heterostructures and constituted the basis for the field of twistronics. An interesting and widely studied class of moire systems is the 2D family of transition metal dichalcogenides (TMDs), featuring strong light-matter interaction and large spin-orbit coupling with a sizable bandgap [6]. A fundamental advantage of TMDs is that flat minibands are not only realised at specific angles, but exist in a continuum of small angles [7]. An example of a moire-structured TMD-system can be seen in Fig. 1. Moreover, experimental and theoretical findings of the excited states in type-II aligned heterostructured TMDs show evidence of spatially indirect excitons localized within certain registries of the moire structure [7, 8, 9, 10]. Moire structured TMDs provide a platform for studying correlated quantum phenomena [11] including hole Mott insulator states at integer and fractional fillings with generalized Wigner crystallization, essentially creating a Fermi-Hubbard system [12, 13, 14, 15, 16]. Moire structured TMD bilayer systems also allows for realization of Bose-Hubbard physics with excitons trapped in a periodic triangular potential and subject to strong Coulomb interactions [17]. 
Moire physics in TMDs are largely determined by the shape of the twist-induced moire potential, which arises from local stacking configurations, lattice corrugation and, for small angles, atomic reconstruction [7, 18, 19, 20, 21, 22, 23, 24, 25]. As a consequence, relaxation effects are important for numerical simulations that involve moire structured TMD systems prone to atomic reconstruction, and/or structures with a moire period large enough to corrugate the individual layers [26, 27]. From an _ab initio_ standpoint, this presents a large challenge owing to the fact that relaxation is a computational bottleneck in such calculations. In an excellent paper by Naik _et al._[28], a method to overcome this problem is suggested by using a force-field model based on a combination of the Stillinger-Weber (SW) [29, 30] and Kolmogorov-Crespi (KC) [31, 32] potentials. The SW force-field accurately describes the intralayer forces, while the KC potential captures van der Waals (vdW) interaction between layers and includes a stacking-dependent term. Previously, this had been parametrized and applied to graphene and Figure 1: WS\({}_{2}\) on MoS\({}_{2}\) twisted at an angle of \(6.0^{\circ}\). Sulphur atoms are shown in orange, molybdenum in blue, and tungsten in grey. The moiré unit cell is shown with black solid lines, and has a moiré period, \(m_{0}\), of 30.1 Å encompassing 546 atoms. The long diagonal, \(r_{diag}\), is marked with a dashed black line. hexagonal boron nitride [33; 34; 35; 36], but is now also available for \(\mathrm{MX}_{2}\) homobilayers, where \(\mathrm{M}\in\{\mathrm{Mo},\mathrm{W}\}\) and \(\mathrm{X}\in\{\mathrm{S},\mathrm{Se}\}\)[28; 37]. However, the parameters presented in Ref. [28] are somewhat inaccurate when comparing to density functional theory (DFT) calculated results, e.g. for some structures, the bandgap is inaccurate by up to 100 meV. Even more importantly, the band curvature and energetic position of e.g. the lowest conduction band and highest valence band are skewed on similar scales. In Ref. [28], the parameters are developed by fitting to DFT binding energies which will not guaranty the force-field model to reproduce the DFT relaxed structure. In this work, the structural parameters of the DFT optimized structures (i.e. atomic positions and unit cell size) are used directly as target values for the optimization of the force-field parameters. Furthermore, the KC-parametrization of Ref. [28] is presented on a _per interaction basis_, meaning that atom-atom interactions are considered the same for different systems, e.g. S-S parameters for \(\mathrm{MoS}_{2}\)- and \(\mathrm{WS}_{2}\)-bilayers are the same. However, from a fundamental point of view, vdW interaction, being of long-range nature, is known to be sensitive to the surrounding environment. As such, we reparametrize the KC-potential on a _per system basis_, which yields more accurate band structures. Furthermore, we expand the set of parameters to include heterobilayers with and without lattice mismatch, essentially covering all bilayer combinations of 2H-phased \(\mathrm{MX}_{2}\) for \(\mathrm{M}\in\{\mathrm{Mo},\mathrm{W}\}\) and \(\mathrm{X}\in\{\mathrm{S},\mathrm{Se},\mathrm{Te}\}\). However, the method presented here is, in principle, extendable to any 2D moire structure and not limited to TMDs. Our force-field parameters, along with a variety of relaxed structures can be found via Ref. [38]. 
Lastly, we present two interpolation-based schemes to describe the _interlayer exciton moire potential_ of lattice-matched heterostructures with type-II band alignment by using a combination of the force-field method and DFT, which provides easy access to the potential for almost any angle. We extend this analogy to the binding energy, which allows for visualization of atomic reconstruction and the rate at which the reconstructed domains form with decreasing twist angle. Specifically, we see that atomic reconstruction becomes significant for angles below 4-5\({}^{\circ}\) for the TMD heterostructures studied here. ## II Methodology The first step is to develop the SW-parameters, which is done by considering the constituting monolayers one at a time. For 2H-phased TMD monolayers, the hexagonal symmetry reduces the structural degrees of freedom into two (target) parameters only, namely the lattice constant, \(a_{0}\), and the _intralayer_ distance, \(d_{intra}\), i.e. the out-of-plane X-X distance. Therefore, the SW-parametrization is carried out using \(a_{0}\) and \(d_{intra}\) as targets and reproduces them extremely well. The force-field relaxations are performed using the LAMMPS package [39], and the optimization of parameters is carried out with use of the Dakota package [40]. For the optimization of the KC parameters we are following two strategies, depending on whether the constituting layers are lattice matched or not. ### Lattice matched bilayers Bilayers that have the same chalcogen atom have a lattice constant mismatch \(\delta\sim 0.1\%\) and are treated as lattice matched. In this case, only one additional structural parameter is considered, namely the _interlayer_ spacing, \(d_{inter}\) (M-M distance). The KC-parameters are obtained by fitting to \(a_{0}\), \(d_{intra}\) and \(d_{inter}\) for the six high-symmetry stacking configurations (HSSCs), while keeping the SW-parameters fixed. The HSSCs are depicted in Fig. 2 and are divided in two groups, namely R- and H-stacking, which differ by a rotation of one of the layers by 60\({}^{\circ}\). This procedure follows the idea that the mechanical properties of the single layer is well described by the SW potential and is not altered by the interlayer interaction (KC potential). It is crucial to derive a force-field that is transferable between the different stackings since the twisted bilayers correspond to combinations of three different stackings, as will be demonstrated subsequently. Moreover, as we will indirectly show in Sec. IV.1, every subcell of a lattice-matched moire unit-cell is, to a certain extent, well described by a superposition of the HSSCs. Note, that this is not the case for lattice-mismatched Figure 2: The six high-symmetry stacking configurations of a bilayer with no lattice-mismatch. (a)-(c) ((d)-(f)) belong to the R(H)-stacking group. The stacking \(\mathrm{R}_{\mathrm{X}}^{\mathrm{X}}/\mathrm{R}_{\mathrm{M}}^{\mathrm{M}}\) is also referred to as AA-stacking, and \(\mathrm{H}_{\mathrm{X}}^{\mathrm{M}}/\mathrm{H}_{\mathrm{M}}^{\mathrm{X}}\) as AB-stacking. The dotted black lines indicate atoms that coincide along \(z\), justifying the naming convention. systems where no local HSSCs can be identified. Justification of our methodology becomes trivial for smaller angles, where domains of the HSSCs make up a large fraction of the moire unit cell. Finally, the small unit cells constructed with merely six atoms, makes both DFT calculations and the optimization schemes of Dakota and LAMMPS relatively fast. 
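As an illustration of the fitting loop described above, the sketch below casts the lattice-matched parametrization as a least-squares problem over the structural targets \(a_{0}\), \(d_{intra}\) and \(d_{inter}\) of the six HSSCs. The `relax_hssc` wrapper, the target numbers and the starting parameters are placeholders (a toy stand-in for the LAMMPS relaxation and for Dakota's mesh adaptive direct search), not the actual workflow of this work.

```python
import numpy as np
from scipy.optimize import minimize

# DFT target values (a0, d_intra, d_inter) in Angstrom for the six HSSCs.
# The numbers are illustrative placeholders, not the values used in this work.
DFT_TARGETS = {
    "R_XX": (3.16, 3.13, 6.85), "R_MX": (3.16, 3.13, 6.25), "R_XM": (3.16, 3.13, 6.25),
    "H_XM": (3.16, 3.13, 6.25), "H_XX": (3.16, 3.13, 6.90), "H_MM": (3.16, 3.13, 6.60),
}

def relax_hssc(kc_params, stacking):
    """Stand-in for the real relaxation step: in practice this would write a LAMMPS
    input with the trial KC parameters, relax the six-atom cell and read back the
    relaxed (a0, d_intra, d_inter). A toy analytic model keeps the sketch runnable."""
    a0, d_intra, d_inter = DFT_TARGETS[stacking]
    bias = 0.05 * (kc_params[0] - 3.0) + 0.02 * (kc_params[3] - 10.0)  # fake coupling
    return a0, d_intra, d_inter + bias

def cost(kc_params):
    """Sum of squared deviations from the DFT structural targets over all HSSCs,
    with the SW (intralayer) parameters held fixed."""
    total = 0.0
    for stacking, target in DFT_TARGETS.items():
        model = relax_hssc(kc_params, stacking)
        total += sum((m - t) ** 2 for m, t in zip(model, target))
    return total

# Eight KC parameters (lambda, z0, C, A, delta, C0, C2, C4); start from literature-like
# values and use a derivative-free optimizer as a stand-in for the mesh adaptive search.
x0 = np.array([3.0, 3.3, 20.0, 10.0, 0.8, 15.0, 12.0, 5.0])
result = minimize(cost, x0, method="Nelder-Mead")
```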
### Lattice mismatched bilayers In Table 1 we show the lattice mismatch \(\delta\) for the different combinations of chalcogen atoms (the metal atom is nearly irrelevant for the lattice constant). The lattice-mismatch of the systems investigated here (X=S,Se,Te) is so large that the construction of small six atom unit cells as done in the lattice matched case is not meaningful. The in-plane strain will radically change the electronic properties [41]. To circumvent this problem, we use relatively small (about 500 atoms) moire structures as targets for lattice-mismatched systems (See Table 1). Due to the reduced symmetry of lattice-mismatched systems, the only valid targets are the coordinates of all atoms of the moire unit cell combined with the lattice constant. However, using all atomic coordinates, i.e. three spatial dimensions for each atom, renders the mesh adaptive search scheme for optimizing the KC-parameters infeasible, as the number of target values greatly exceeds the number of fitting parameters (Fig. 3, dashdotted green curve). As such, it is necessary to reduce the number of target values. However, considering only the three spatial coordinates of the metal atoms, thus reducing the target space by one third, also yields sub-optimal KC-parameters (Fig. 3, dotted blue curve). Lastly, optimizing only for the \(z\)-coordinates of the metal atoms, which further reduces the target space by one third, results in a much better fit (Fig. 3, dashed red curve). As such, we ultimately choose the \(z\)-coordinates of the metal atoms and the lattice constant as target values for lattice-mismatched systems, which yields satisfactory KC-parameters, as discussed in Sec. III. ### The Kolmogorov-Crespi Potential As mentioned previously, the KC potential, \(V_{ij}\), is intended to model interlayer effects between atom \(i\) in one layer and atom \(j\) in another, and is given by \[V_{ij} =\mathrm{e}^{-\lambda(r_{ij}-z_{0})}\left[C+f(\rho_{ij})+f(\rho_{ ji})\right]-A\left(\frac{r_{ij}}{z_{0}}\right)^{-6},\] \[\rho_{ij}^{2} =r_{ij}^{2}-(\mathbf{n}_{i}\mathbf{r}_{ij})^{2},\] \[\rho_{ji}^{2} =r_{ij}^{2}-(\mathbf{n}_{j}\mathbf{r}_{ij})^{2},\] \[f(\rho) =\mathrm{e}^{-(\rho/\delta)^{2}}\sum_{n=0}^{2}C_{2n}(\rho/\delta) ^{2n}. \tag{1}\] \(\mathbf{n}_{i}\) and \(\mathbf{n}_{j}\) are the surface normals of the atom site \(i\) and \(j\) in each layer. The choice of neighbors used to determine the surface normals are the six nearest atoms in the respective strata (sublayer of the monolayer). The last term of \(V_{ij}\) contains the \(r^{-6}\) vdW dependence, and the first term has an exponentially decaying repulsion reflecting interlayer wave-function overlap. The square bracket functions contain a stacking dependent term, in contrast to e.g. the Lennard-Jones potential [31]. As seen, \(V_{ij}\) leaves in total eight parameters to be fitted. As mentioned in Ref. [28], it is possible to approximate \(\mathbf{n}_{i,j}=\hat{z}\) corresponding to completely rigid layers, however, we do not make use of this approximation in order to capture more accurately the corrugation caused by the relaxation. ### Computational details We parametrize the potentials with different combinations of exchange correlation plus vdW correction. 
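Referring back to Eq. (1), the following minimal NumPy sketch evaluates the KC pair energy for a single interlayer atom pair, given the local surface normals (built in practice from the six nearest atoms of the respective stratum). Parameter names mirror Eq. (1); in production one would of course use an established implementation such as the Kolmogorov-Crespi pair styles shipped with LAMMPS rather than hand-rolled Python.

```python
import numpy as np

def kc_pair_energy(r_vec, n_i, n_j, lam, z0, C, A, delta, C0, C2, C4):
    """Kolmogorov-Crespi pair energy of Eq. (1) for one interlayer atom pair.
    r_vec is the vector from atom i to atom j; n_i and n_j are the local surface
    normals of the two sites."""
    r = np.linalg.norm(r_vec)
    rho_ij_sq = r**2 - np.dot(n_i, r_vec) ** 2
    rho_ji_sq = r**2 - np.dot(n_j, r_vec) ** 2

    def f(rho_sq):
        x = rho_sq / delta**2            # x = (rho / delta)^2
        return np.exp(-x) * (C0 + C2 * x + C4 * x**2)

    repulsive = np.exp(-lam * (r - z0)) * (C + f(rho_ij_sq) + f(rho_ji_sq))
    vdw = A * (r / z0) ** (-6)
    return repulsive - vdw
```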
\begin{table} \begin{tabular}{c c c c c c} \hline X\({}^{1}\) & X\({}^{2}\) & \(\theta\) (\({}^{\circ}\)) & \(\delta\) (\%) & \(n_{atom}\) & \(m_{0}\) (nm) \\ \hline S & Se & 5.68 & \(4.1\pm 0.1\) & 525 & 3.0 \\ Se & Te & 5.07 & \(7.0\pm 0.1\) & 471 & 3.0 \\ S & Te & 0.00 & \(11.3\pm 0.1\) & 543 & 3.2 \\ \hline \end{tabular} \end{table} Table 1: Angles chosen for fitting lattice-mismatched structures accompanied by the lattice-mismatch (\(\delta\), found using DFT), number of atoms (\(n_{atom}\)), and the moire lattice constant \(m_{0}\) (moiré period). Figure 3: \(z\)-coordinates of the metal atom in a \(5.68^{\circ}\) twisted WS\({}_{2}\)-MoSe\({}_{2}\) bilayer along the long diagonal of the moiré unit cell, \(r_{diag}\). The bottom curve layer corresponds to WS\({}_{2}\), and the top more rigid layer corresponds to MoSe\({}_{2}\). The solid black curve is the DFT relaxed structure. The dashed red, dotted blue, and dashdotted green curves are force-field relaxed with KC-parameters (see text). (a) and (b) show R- and H-stackings, respectively. We find that using PBE [42] from PseudoDojo [43; 44] with Grimme's DFT-D3 vdW correction [45] plus Becke-Johnson damping [46] is best suited for parametrization. The structures are relaxed with QuantumEspresso [47; 48] using a \(k\)-space density of \(15\times 15\) (\(1\times 1\)) for high-symmetry (moire) unit cells. DFT computations of moire systems are performed without spin-orbit coupling (SOC) to save computational resources, since they are only used for comparing DFT to SW+KC relaxed structures. We find that the lattice constant only converges at a cut-off energy of 40 Ha in all cases. More importantly, the chosen cut-off energy should be consistent between monolayers, untwisted bilayers and moire structured bilayers, when comparing DFT to SW+KC. We use the modified SW implementation in LAMMPS for ease of use. For optimization in Dakota, we apply a mesh adaptive direct search algorithm starting from the parameters presented in Ref. [28]. ## III Results For lattice-matched systems, i.e. homobilayers and heterobilayers having identical chalcogen sites in both layers, which are developed by use of the HSSCs, it is of high importance that the resulting structures can accurately reproduce the electronic properties. In Fig. 4, a comparison between purely DFT calculated parameters and SW+KC can be seen. Note, that \(E_{g}\) shown in Fig. 4(a,d) is the energetically lowest momentum-conserving transition between the highest-lying valence band and the lowest-lying conduction band, which occurs at the \(\mathbf{K}_{\pm}\) points for all stacking configurations and materials considered here. LAMMPS does not provide \(E_{g}\), instead this is calculated using DFT with the relaxed structures generated by our SW+KC force-field method. Note, that for the purpose of consistency, we adopt the notation that MoS\({}_{2}\)-WS\({}_{2}\) implies that WS\({}_{2}\) lies above MoS\({}_{2}\) with respect to \(z\). In the case of homobilayers, the maximum deviation of \(E_{g}\) is 22 meV, and occurs in the H\({}_{\rm M}^{\rm M}\)-stacking configuration. A similar maximum deviation of 25 meV is seen for the heterobilayer, which occurs in the H\({}_{\rm M}^{\rm M}\)-stacking configuration as well. For the remaining lattice-matched structures, the deviations are of similar magnitude. Fig. 4 also demonstrates the high sensitivity of the bandgap with respect to changes in the structural degrees of freedom. 
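For orientation, the moiré periods \(m_{0}\) in Table 1 can be checked against the standard rigid-lattice estimate for two hexagonal layers with lattice constants \(a_{0}\) and \(a_{0}(1+\delta)\) twisted by \(\theta\). The snippet below is only this geometric estimate, not the relaxation itself, and the monolayer lattice constant used in the example call (about 3.16 Å, typical of a sulphide layer) is an assumption rather than a number quoted above.

```python
import numpy as np

def moire_period(a0, delta, theta_deg):
    """Rigid-lattice estimate of the moire period for hexagonal layers with lattice
    constants a0 and a0*(1 + delta), twisted by theta degrees."""
    theta = np.radians(theta_deg)
    return a0 * (1.0 + delta) / np.sqrt(2.0 * (1.0 + delta) * (1.0 - np.cos(theta)) + delta**2)

# S/Se row of Table 1: delta = 4.1 %, theta = 5.68 deg, with an assumed a0 ~ 3.16 A
# for the sulphide layer; this returns ~30 A, i.e. the m0 ~ 3.0 nm listed there.
print(moire_period(3.16, 0.041, 5.68))
```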
Having established the SW+KC parameters of lattice-matched systems using the HSSCs, we now tackle some larger moire structures. As such, we use some medium-scale moire structures as benchmarks. Fig. 5 shows comparisons between DFT- and SW+KC-relaxed band structures and interlayer spacing profiles for different material cases. Greek indices denote the corners of the mini Brillouin zone (BZ) associated with a moire structured bilayer. The interlayer spacing is plotted along the long diagonal of the unit cell (see Fig. 1), which has a length of \(\sqrt{3}m_{0}\), where \(m_{0}\) denotes the moire cell lattice constant. Fig. 5(a)-(d) displays the case of a WS\({}_{2}\)-WS\({}_{2}\) homobilayer twisted at \(6.0^{\circ}\). For R-stacking ((a),(b)), the bands are well represented using our SW+KC relaxed structure with only a 13 meV decrease of the bandgap, likely due to the slight interlayer spacing profile discrepancy. In the case of H-stacking ((c)-(d)), the same applies except the bandgap is a mere 5 meV larger compared to the DFT relaxed structure results. In Fig. 5(e)-(h), the case of a MoS\({}_{2}\)-WS\({}_{2}\) heterobilayer with a twist angle of \(6.0^{\circ}\) is shown. For both R- and H-stacking ((e),(f) and (g),(h) respectively), an excellent agreement is obtained between DFT- and SW+KC-relaxed structures in terms of band character. For the higher lying conduction bands around the \(\mathbf{\gamma}\) point, there is only a 10 meV discrepancy. We again attribute this to the slightly decreased interlayer spacing profiles of SW+KC in both cases, as seen in Fig. 5(f),(h). For lattice-mismatched systems, the optimization of the KC parameters was performed for all possible combinations of metal and chalcogen atoms, as explained in the methodology section (see Table 1). A good agreement is obtained between the DFT- and SW+KC-relaxed structures for all lattice-mismatched cases. For the sake of brevity, only the case of a WS\({}_{2}\)-MoSe\({}_{2}\) heterobilayer rotated at \(5.1^{\circ}\) is shown in Fig. 5(i)-(l). For R-stacking ((i),(j)), the highest lying valence band is only 7 meV higher than the DFT value at the \(\mathbf{\kappa}\)-point. The lowest-lying conduction band is only 6 meV above the DFT one. In general, we see small discrepancies between the valence and conduction bands for the DFT and SW+KC-relaxed structures below 20 meV. For H-stacking ((k),(l)), the valence bands are well described except for a 4 meV discrepancy of the highest-lying valence band near the \(\mathbf{\kappa}\) point. Figure 4: Comparison of \(E_{g}\) at \(\mathbf{K}_{\pm}\) in (a) and (d), interlayer spacing (\(d_{inter}\)) in (b) and (e), and lattice constant \(a_{0}\) in (c) and (f) for the six HSSCs of a WS\({}_{2}\) homobilayer and a MoS\({}_{2}\)-WS\({}_{2}\) heterobilayer in (a)-(c) (top panels) and (d)-(f) (bottom panels), respectively. DFT is marked with black and SW+KC with red. In general, we note that the slight difference in bandgap and band curvature between DFT and our SW+KC-relaxed moire structures arise from small inaccuracies in the interlayer spacing profiles. Note, that this is not always the case with the KC-parameters presented by Ref. [28], where the binding energy was the target property. We also find that the accuracy of our lattice-matched SW+KC parameters reduce with growing twist angle. This is expected, since we fit to the untwisted HSSCs, which are not well represented in moire structures with such low periodicity. 
Conversely, the parameters are expected to have better accuracy with decreasing twist angle. For angles below \(~{}3^{\circ}\), where large-scale atomic reconstruction starts to appear, the accuracy of methodology is still maintained and most properties are well captured, including the atomic reconstructions, as discussed in Sec. V. ## IV Approximating moire potentials A defining feature of two-dimensional lattice-matched moire structures is the spatial variation of local stacking order across the structure, leading to variation of local properties. Many combinations of TMDs possess type-II band alignment [49, 50], and as such, the variation of the local bandgap at \(\mathbf{K}_{\pm}\) across the structure will, for many purposes, describe the _interlayer moire potential_[17, 27, 51]. However, it is worth mentioning, that in the case of a large lattice-mismatch between the constituting layers, developing such a potential becomes non-trivial. We propose two interpolation-based methods for calculating the interlayer moire potential of lattice-matched systems. Moreover, any electronic property that can be identified locally, can be accessed in the moire structure directly with these two methods, e.g. variation of the VBM, CBM etc. In both methods, the moire supercell is subdivided into small units the size of the monolayer unit cell, for which local properties can be calculated. The first method, which we call the high-symmetry interpolation method (HSIM), is based on the _local high-symmetry stacking character_ - a geometrical quantity that measures the similarity between the local stacking configuration within the moire cell and the HSSCs. Being based only on the six HSSCs, computing the DFT-properties is fast and allows for high-throughput computations. It also allows for easy visualization of reconstructed domains. The second method, which we call the grid based interpolation method (GBIM), relies on computing the local properties using DFT not only for the HSSCs, but also every local stacking configuration in between, which can then be interpolated over the moire supercell. In principle, this scheme is more precise, since it relies less on interpolation and more on _ab initio_ calculations. However, it is time consuming, as many DFT computations using different in-plane displacements and interlayer spacing are needed. In what follows, both methods are explained in detail and case studies are shown. ### High-symmetry interpolation method (HSIM) For every metal site in one layer, \(\mathbf{\rho}_{\text{M},i}=(x_{\text{M},i},y_{\text{M},i})\), we find the transverse distance to the closest metal site in the adjacent layer, e.g. \(d_{\text{M},i}^{\text{M}}=\min(|\mathbf{\rho}_{\text{M},i}-\mathbf{\rho}_{\text{M},j}|)\), where Figure 5: Comparison of bandstructures and interlayer spacing profiles between DFT in solid black and SW+KC in dotted red. WS\({}_{2}\)-WS\({}_{2}\) homobilayer with \(\theta=6.0^{\circ}\) (\(n_{atom}=546\)) in (a),(b) and (c),(d) for R- and H-stacking, respectively. MoS\({}_{2}\)-WS\({}_{2}\) heterobilayer with \(\theta=6.0^{\circ}\) (\(n_{atom}=546\)) in (e),(f) and (g),(h) for R- and H-stacking, respectively. WS\({}_{2}\)-MoSe\({}_{2}\) lattice-mismatched heterobilayer with \(\theta=5.1^{\circ}\) (\(n_{atom}=642\)) in (i),(j) and (k),(l) for R- and H-stacking, respectively. 
The bandstructures have the valence band maximum shifted to 0 in all cases, and the Greek indices (\(\gamma\), \(\mu\) and \(\kappa\)) denote the high-symmetry points of the moiré (mini) BZ (usually denoted \(\Gamma\), M and K in the BZ of the monolayer/untwisted bilayer). The interlayer spacing is interpolated and plotted along the long diagonal of the unit cell. \(j\) runs through every metal site in the adjacent layer (see Fig. 6). The largest distance possible is \(a_{0}/\sqrt{3}\). As such, we can define the parameter \(c^{\rm M}_{{\rm M},i}=1-\sqrt{3}d^{\rm M}_{{\rm M},i}/a_{0}\), which is unity for perfectly aligned metal atoms, e.g. \({\rm R}^{\rm X}_{\rm X}\)- and \({\rm H}^{\rm M}_{\rm M}\)-stacking, and zero for the remaining HSSCs. Eight analogous parameters can be developed, e.g. \[\{c^{\rm S_{2}}_{{\rm S}_{1},i}(\mathbf{\rho}_{{\rm S}_{1},i})\quad\text{for} \quad{\rm S}_{1},{\rm S}_{2}\in\{{\rm M},{\rm X},{\rm H}\}\},\] where \({\rm X}\) and \({\rm H}\) denote chalcogen sites and hexagonal centers, respectively. For the purpose of consistency, it is assumed that \({\rm S}_{2}\) lies above \({\rm S}_{1}\) with respect to \(z\). \(\{c^{\rm S_{2}}_{{\rm S}_{1},i}\}\) is then interpolated on a skewed grid that spans the moire unit cell. Stacking coefficients are now found as \[C_{{\rm R}^{\rm X}_{\rm X}} =c^{\rm M}_{\rm M}c^{\rm X}_{\rm X}{\rm H},\quad C_{{\rm H}^{\rm M }}=c^{\rm M}_{\rm X}c^{\rm X}_{\rm M}c^{\rm H}_{\rm H},\] \[C_{{\rm R}^{\rm M}_{\rm X}} =c^{\rm M}_{\rm X}c^{\rm X}_{\rm H}c^{\rm M}_{\rm H},\quad C_{{\rm H }^{\rm X}}=c^{\rm M}_{\rm H}c^{\rm X}_{\rm X}c^{\rm H}_{\rm M},\] \[C_{{\rm R}^{\rm X}_{\rm M}} =c^{\rm M}_{\rm H}c^{\rm X}_{\rm M}c^{\rm H}_{\rm X},\quad C_{{\rm H }^{\rm M}}=c^{\rm M}_{\rm M}c^{\rm X}_{\rm H}c^{\rm H}_{\rm X}.\] Finally, the stacking coefficients are normalized such that \(\sum_{n}C_{n}(\mathbf{\rho})=1\), where \(n\) spans the HSSCs. \(C_{n}\) is seen in Fig. 7 for R-stacking. The \(C_{n}\) with \(n\in\{{\rm H}^{\rm M}_{\rm X},{\rm H}^{\rm X}_{\rm X},{\rm H}^{\rm M}_{\rm M}\}\) are all 0 in this case. The next step is finding the interlayer spacing profile, \(d_{inter}(\mathbf{\rho})\), where \(\mathbf{\rho}=(x,y)\). Using the variation of \(E_{g}\), \(E_{g}(\mathbf{\rho})\), as an example, it can be seen that \[E_{g}(\mathbf{\rho})=\sum_{n}C_{n}(\mathbf{\rho})E_{g}(n,d_{inter}(\mathbf{\rho})), \tag{2}\] assuming the variation of \(E_{g}\) with \(d_{inter}\) is known for all HSSCs. Assuming that every subcell of the moire structure can be described by a superposition of HSSCs is an approximation, but has the benefit of easy visualization of domains, as seen in Fig. 7. It shows great accuracy and \(E_{g}(n,d_{inter})\) can be extracted within few calculations, making it quite fast to implement for all lattice-matched systems. ### Grid based interpolation method (GBIM) A more general implementation can be developed by using the untwisted bilayer with a transverse shift \(\mathbf{\rho}_{s}=(x_{s},y_{s})\) between the layers, where \(\mathbf{\rho}_{s}=0\) corresponds to either \({\rm R}^{\rm X}_{\rm X}\)- or \({\rm H}^{\rm M}_{\rm M}\)-stacking. We calculate \(E^{\rm BL}_{g}(\mathbf{\rho}_{s},d_{inter})\), where \(\mathbf{\rho}_{s}\) is the transverse distance between metal sites in each layer. 
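Before detailing the GBIM further, the HSIM construction just described can be summarized in a few lines: compute the alignment parameters \(c=1-\sqrt{3}d/a_{0}\) for each pair of sublattice sites, multiply them according to which top-layer site sits above which bottom-layer site in each HSSC, and normalize. The sketch below encodes that logic for the R-stacking group (the H group follows analogously); the sublattice-to-sublattice assignments are reconstructed from the geometry described in the text rather than copied verbatim.

```python
import numpy as np

# For each R-type HSSC, record which top-layer sublattice (key) sits above which
# bottom-layer sublattice (value); H labels the hexagon centre. The assignments are
# reconstructed from the geometry; the H-stacking group is handled analogously after
# rotating one layer by 60 degrees.
R_STACKINGS = {
    "R_XX": {"M": "M", "X": "X", "H": "H"},   # AA stacking
    "R_MX": {"M": "X", "X": "H", "H": "M"},
    "R_XM": {"M": "H", "X": "M", "H": "X"},
}

def alignment(d_transverse, a0):
    """c = 1 - sqrt(3) d / a0: unity for vertically aligned sites, zero at the
    largest possible transverse offset a0 / sqrt(3)."""
    return 1.0 - np.sqrt(3.0) * d_transverse / a0

def stacking_coefficients(dist, a0):
    """dist[(top, bottom)] is the local transverse distance between the nearest
    top-layer and bottom-layer sites of the given species at one in-plane point.
    Returns the normalized coefficients C_n over the R group."""
    raw = {}
    for name, mapping in R_STACKINGS.items():
        c = 1.0
        for top, bottom in mapping.items():
            c *= alignment(dist[(top, bottom)], a0)
        raw[name] = c
    total = sum(raw.values())
    return {name: value / total for name, value in raw.items()}

def local_property(coeffs, d_inter, value_of_stacking):
    """Eq. (2) for any local property: weighted sum of the HSSC values evaluated at
    the local interlayer spacing; value_of_stacking[name] is a callable of d_inter."""
    return sum(coeffs[n] * value_of_stacking[n](d_inter) for n in coeffs)

# Quick check at an AA-like point: metal over metal, everything else maximally offset.
a0, far = 3.16, 3.16 / np.sqrt(3.0)
dist = {(t, b): (0.0 if t == b else far) for t in "MXH" for b in "MXH"}
print(stacking_coefficients(dist, a0))   # R_XX -> 1, R_MX and R_XM -> 0
```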
Then, for a given lattice-matched moire system, for metal site \(i\) in one layer, we can find the vector \(\mathbf{\rho}_{i}=\mathbf{\rho}_{{\rm M},j}-\mathbf{\rho}_{{\rm M},i}\), where \(j\) denotes the index of the nearest metal site in the adjacent layer. Then, the value of \(E_{g}\) at metal site \(i\) is simply \[E_{g}(\mathbf{\rho}_{{\rm M},i})=E^{\rm BL}_{g}(\mathbf{\rho}_{i},d_{inter}(\mathbf{\rho}_{ {\rm M},i})). \tag{3}\] Note, that \(\mathbf{\rho}_{i}\) should be adjusted relative to the rotation of the individual layers, since the layers will likely be slightly angled compared to the systems used in computing \(E^{\rm BL}_{g}(\mathbf{\rho}_{s},d_{inter})\). Finally, \(E_{g}\) is interpolated over the entire moire unit cell. In principle, the GBIM should be more accurate than the HSIM, but is also computationally more expensive. We use twelve steps for \(x_{s}\) and \(y_{s}\) combined with sixteen increments for \(d_{inter}\) when tabulating \(E^{\rm BL}_{g}(\mathbf{\rho}_{s},d_{inter})\). This translates to 4608 separate DFT calculations to cover R- and H-stacking for one material, whereas the HSIM needs only 96. In Fig. 8, a comparison between the HSIM and the GBIM can be seen for the variation of \(E_{g}\) in MoS\({}_{2}\)-WS\({}_{2}\) with \(\theta=4.41^{\circ}\). At the high-symmetry points, both methods yield the same value as expected, but the HSIM is slightly inaccurate in between. ## V Atomic reconstruction and energetic landscape As mentioned, the energetic landscape of 2D moire structures is constituted by three codependent factors: the local stacking arrangement, the associated interlayer spacing, and the atomic reconstruction. Often, the latter two, being relaxation effects, are not considered in simulations [11; 15; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62], but can be managed with SW+KC force-field relaxation. Figure 6: Close-up of an R-stacked lattice-matched moiré structure for \(\theta=6.0^{\circ}\). For a metal site \(i\) in one layer, the nearest transverse metal site, \({\rm M},j\), chalcogen site, \({\rm X},j\), and hexagonal center, \({\rm H},j\), in the adjacent layer is seen. For MoS\({}_{2}\)-WS\({}_{2}\), which possesses type-II band alignment [49; 50], the interlayer moire potential is often described as the spatial variation of the local bandgap at \(\mathbf{K}_{\pm}\). In Fig. 9, the variation of \(E_{g}-\overline{E}_{g}\) across an R- and H-stacked MoS\({}_{2}\)-WS\({}_{2}\) bilayer with \(\theta=1.01^{\circ}\) is seen, where \(\overline{E}_{g}\) is the mean value across the unit cell. In the rigidly twisted case, the average interlayer spacing of the three R- or H-stacked HMSCs are used as interlayer spacing for Fig. 9(a) and Fig. 9(d), respectively. The discrepancy between modeling the potential with- and without relaxation effects is apparent. In the case of R-stacking, which has larger potential depth than H-stacking, the depth with- and without relaxation effects are here estimated to be 80 meV and 135 meV, respectively. For H-stacking, these numbers are much lower, namely 10 meV and 28 meV for rigid and relaxed, respectively. As a consequence, phenomena such as exciton trapping may be realized more easily in R-stacked systems than H-stacked analogs. Interestingly, for H-stacking, the minimum of the potential resides in the H\({}_{\rm X}^{\rm M}\)-domain post-relaxation as opposed to the H\({}_{\rm M}^{\rm M}\)-domain pre-relaxation. 
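Referring back to Eq. (3) and the grid tabulation described above, the GBIM lookup can be sketched with a regular-grid interpolator over \(E_{g}^{\rm BL}(\mathbf{\rho}_{s},d_{inter})\). The grid sizes below follow the twelve-by-twelve-by-sixteen sampling mentioned in the text, but the spacing range and the zero-filled table are placeholders, and the local shift is assumed to have already been rotated into the untwisted-bilayer frame.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Tabulated untwisted-bilayer bandgap on a grid of fractional in-plane shifts and
# interlayer spacings. Ranges and values are placeholders for the DFT table.
xs = np.linspace(0.0, 1.0, 12)
ys = np.linspace(0.0, 1.0, 12)
ds = np.linspace(5.9, 7.1, 16)            # interlayer spacing in Angstrom (illustrative)
eg_table = np.zeros((12, 12, 16))

eg_bl = RegularGridInterpolator((xs, ys, ds), eg_table, method="linear")

def gbim_bandgap(shift_frac, d_inter):
    """Eq. (3): the bandgap at a metal site is the tabulated untwisted-bilayer value
    at the local shift to the nearest metal site of the adjacent layer (fractional
    lattice coordinates, wrapped to [0, 1)) and at the local interlayer spacing."""
    sx, sy = np.mod(shift_frac, 1.0)
    return float(eg_bl([sx, sy, d_inter]))

print(gbim_bandgap((0.37, 0.81), 6.4))    # 0.0 with the placeholder table
```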
Finally, the effect of atomic reconstruction also greatly changes the relative widths of the potential wells, resulting in a sharper and more well-defined potential. We conclude that atomic reconstruction significantly alters the range of \(\theta\) in which exciton trapping occurs. In Table 2, the _interlayer binding energy per atom_, \(E_{b}\), found as \(E_{b}=(E_{\rm MoS_{2}\text{-WS}_{2}}-E_{\rm MoS_{2}}-E_{\rm WS_{2}})/6\) is shown, where \(E_{\rm MoS_{2}\text{-WS}_{2}}\) is the total energy of the untwisted bilayer system, and \(E_{\rm MoS_{2}}\) and \(E_{\rm WS_{2}}\) denote the total energies of constituting monolayers found separately. As mentioned, the discrepancy in \(E_{b}\) between DFT and SW+KC is expected, since this was not the target property during development of our KC parameters. For R-stacking, the nearly identical \(E_{b}\) of the R\({}_{\rm M}^{\rm X}\)- and R\({}_{\rm X}^{\rm M}\)-configurations facilitates a simultaneous growth of these domains with decreasing \(\theta\) (i.e. large moire unit cells), while the opposite is true for R\({}_{\rm X}^{\rm X}\), explaining the formation of a mesh of triangular domains, as seen in Fig. 9(b). For H-stacking, the H\({}_{\rm X}^{\rm M}\)-configuration is energetically favorable, resulting in hexagonal domains with decreasing \(\theta\). The H\({}_{\rm M}^{\rm M}\)-like domains shrink slower than those associated with H\({}_{\rm X}^{\rm X}\), as seen from the associated \(E_{b}\) (see also [63]). Lastly, \(E_{b}(\theta)\) can be considered in order to access the formation rate of domains. Using the HSIM, the variation of the local \(E_{b}\) across a moire unit cell can be approximated, and the mean can be used to approximate \(E_{b}\) of the moire unit cell, albeit neglecting the effects of strain imposed by atomic reconstruction and corrugation from the varying interlayer spacing. In the case of pure DFT, \(E_{b}\), is found directly as \[E_{b}=E_{\rm moire}-(E_{\rm MoS_{2}}+E_{\rm WS_{2}})/2, \tag{4}\] where all energies are divided by the number of atoms, and \(E_{\rm moire}\) denotes the total energy per atom of the moire structure. However, \(E_{b}\) has contributions from the strain imposed by layer corrugation and atomic reconstruction. The energy associated with these effects is denoted \(E_{\rm corr}\) and is not captured by the HSIM. Instead, the \(E_{b}\) found by the HSIM should be compared to \[E_{b}-E_{\rm corr}=E_{\rm moire}-(E_{\rm MoS_{2},moire}+E_{\rm WS_{2},moire})/2, \tag{5}\] where \(E_{\rm MoS_{2},moire}\) and \(E_{\rm WS_{2},moire}\) denote the total energy per atom for the corrugated and reconstructed constituting monolayers. This is computed in separate DFT calculations having half the number of atoms as the moire structure they constitute. \begin{table} \begin{tabular}{c c c c c c c} \(E_{b}\) (meV) & R\({}_{\rm X}^{\rm X}\) & R\({}_{\rm M}^{\rm X}\) & R\({}_{\rm X}^{\rm M}\) & H\({}_{\rm X}^{\rm M}\) & H\({}_{\rm X}^{\rm X}\) & H\({}_{\rm M}^{\rm M}\) \\ \hline DFT & -21.8 & -34.3 & -34.5 & -34.6 & -22.4 & -31.6 \\ \hline SW+KC & -25.4 & -44.4 & -44.6 & -44.7 & -28.9 & -38.4 \\ \end{tabular} \end{table} Table 2: Binding energy of MoS\({}_{2}\)-WS\({}_{2}\) in the six high-symmetry stacking configurations from DFT and from SW+KC. Figure 8: Variation of \(E_{g}\) along the long diagonal of an R- and H-stacked MoS\({}_{2}\)-WS\({}_{2}\) bilayer with \(\theta=4.41^{\circ}\) in (a) and (b), respectively. 
The solid black and dashed red curves represent bandgap variation found using the GBIM and HSIM, respectively. Figure 9: Variation of \(E_{g}\) at \(\mathbf{K}_{\pm}\) across a \(1.01^{\circ}\) twisted R-stacked MoS\({}_{2}\)-WS\({}_{2}\) bilayer without and with relaxation effects in the (a),(d) and (b),(e), respectively. Comparison between the two cases along the long diagonal of the moiré unit cells in (c),(f). (a)-(c) and (d)-(e) represent R- and H-stackings, respectively. With SW+KC, \(E_{b}\) is found analogously to Eq. (4), but \(E_{\rm corr}\) is found directly by comparing the energy of the SW-potential in the two layers to that of the constituting rigid monolayers. The variation of these quantities with \(\theta\) is seen in Fig. 10. A common feature for all energy scales in Fig. 10 is the tendency towards the value of the stable configurations for \(\theta\to 0\). For vanishing \(\theta\), the relative size of the domain walls becomes negligible. As such, \(E_{\rm corr}\) should vanish in the limit of vanishing \(\theta\). The faster convergence towards the \(E_{b}\) of \(\rm R_{\rm X}^{N}/\rm R_{\rm M}^{N}\) for R-stacking indicates that the triangular domains form more rapidly with decreasing \(\theta\) compared to the hexagonal \(\rm H_{\rm M}^{N}\)-domains of H-stacking. Although the values of \(E_{b}^{\rm SW+KC}\) and \(E_{\rm corr}^{\rm SW+KC}\) may appear off scale, they illustrate the tendencies faithfully. Additionally, the graph of \(E_{b}^{\rm DFT}-E_{\rm corr}^{\rm DFT}\) serves as a benchmark, showing that the HSIM has accuracy within the 0.5 meV range, and further that \(E_{b}\) of SW+KC relaxed structures can be recovered to agree with DFT. Fig. 11 shows the mean of the stacking coefficients \(C_{n}\) over the moire unit cell of MoS\({}_{2}\)-WS\({}_{2}\), which can be computed using the HSIM as described in Sec. IV.1. \(C_{n}\) represents the normalized contributions of the different stacking configurations to the fully relaxed (reconstructed) moire structure. For R-stacking (H-stacking), the three possible domains are: \(\rm R_{\rm M}^{N}\) (green), \(\rm R_{\rm X}^{N}\) (red), \(\rm R_{\rm X}^{M}\) (blue) (\(\rm H_{\rm M}^{M}\) (green), \(\rm H_{\rm X}^{N}\) (red), \(\rm H_{\rm X}^{M}\) (blue)). For larger angles, the fraction of the unit cell area occupied by each of the three domains is about 1/3 for both R- and H-stacking. At an angle of 1\({}^{\circ}\) the structure for R-stacking (Fig. 11(a)) is already reconstructed in such a way that the energetically less favorable \(\rm R_{\rm X}^{N}\) (red) domains represent only 2,5% of the structure. Both \(\rm R_{\rm M}^{N}\) (green) and \(\rm R_{\rm X}^{M}\) (blue) domains are energetically equivalent, and hence, occupy roughly 50% of the structure in the limit of small \(\theta\). For H-stacking (Fig. 11(b)), at the same angle of 1\({}^{\circ}\), the less favorable \(\rm H_{\rm X}^{N}\) and \(\rm H_{\rm M}^{M}\) domains have significantly reduced contributions compared to the favorable \(\rm H_{\rm X}^{M}\) region, but \(\rm H_{\rm M}^{M}\) still represents 20% of overall structure. Fig. 11 allows us to draw quantitative conclusions on the angle dependence of the reconstruction effect. Indeed, neglecting reconstructions would lead to a constant equal proportion of all three coexisting stackings (dotted lines in Fig. 11). 
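For bookkeeping, the binding-energy definitions used above can be written compactly as below. All inputs are energies per atom except in the untwisted-cell case, and each rigid monolayer contributes half of the atoms of the bilayer, which is where the factor of one half in Eqs. (4) and (5) comes from.

```python
def eb_hssc(e_bilayer_total, e_mono1_total, e_mono2_total, n_atoms=6):
    """Interlayer binding energy per atom of an untwisted six-atom high-symmetry cell,
    from total energies: (E_bilayer - E_mono1 - E_mono2) / n_atoms."""
    return (e_bilayer_total - e_mono1_total - e_mono2_total) / n_atoms

def eb_moire(e_moire, e_mono1, e_mono2):
    """Eq. (4), all inputs per atom: each rigid monolayer holds half the atoms of the
    moire cell, hence the factor of one half."""
    return e_moire - 0.5 * (e_mono1 + e_mono2)

def eb_moire_minus_corr(e_moire, e_mono1_moire, e_mono2_moire):
    """Eq. (5), per atom: reference the corrugated and reconstructed monolayers cut out
    of the relaxed moire structure, so the strain term E_corr drops out and the result
    is directly comparable to the HSIM estimate."""
    return e_moire - 0.5 * (e_mono1_moire + e_mono2_moire)
```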
In the case of R-stacking the reconstruction is nearly complete at an angle of 1\({}^{\circ}\), i.e., the moire structure is made of basically two type of low energy domains (\(\rm R_{\rm M}^{N}\) (green), \(\rm R_{\rm X}^{M}\) (blue)) separated by a very narrow \(\rm R_{\rm X}^{N}\)(red) energetically unfavorable domain. For H-stacking at 1\({}^{\circ}\), the less favorable \(\rm H_{\rm M}^{M}\) (green) domain still covers 15-20% of the area. Fig. 12 shows the same graph as Fig. 11 for the remaining eight lattice-matched materials. Generally, all R-stacked materials (left panels of Fig. 12) display a simultaneous growth of \(\rm R_{\rm M}^{N}\) and \(\rm R_{\rm X}^{M}\) with decreasing \(\theta\) except for MoTe\({}_{2}\)-WTe\({}_{2}\), which can be attributed to the discrepancy in \(E_{b}\) for these stacking configurations. We conclude that for both stackings and all materials considered here, except for MoSe\({}_{2}\)-MoSe\({}_{2}\), that atomic reconstruction becomes especially prominent below an angle of 4-5\({}^{\circ}\). For MoSe\({}_{2}\)-MoSe\({}_{2}\), atomic reconstruction occurs for angles below 6-7\({}^{\circ}\). ## VI Conclusion In conclusion, we have shown the dramatic consequences of incorporating relaxation effects on the _interlayer moire potential_ of MoS2-WS\({}_{2}\). For R-stacking, this becomes about twice as deep at about 135 meV, and, for small angles, much wider. For H-stacking, the potential depth is nearly tripled, however, the width of the potential minima is still narrow, since it corresponds to the energetically unfavorable \(\rm R_{\rm X}^{N}\)-configuration. Moreover, we have quantified the formation rate of domains due to atomic reconstruction for nine lattice-matched TMD moire systems, and conclude that, in general, atomic reconstruction becomes prominent for \(\theta\) smaller than 4-5\({}^{\circ}\) Figure 10: \(E_{b}(\theta)\) for MoS\({}_{2}\)-WS\({}_{2}\). (a) and (b) are for R- and H-stacking, respectively. Black points are found using the HSIM on the SW+KC-relaxed structures, but with DFT-based parametrization of the HSIM as seen in Eq. (2). but does so in a continuous manner. Furthermore, we have presented a methodology for developing KC-parameters for lattice-matched and -mismatched systems, and have developed such parameters for TMD moire heterostructures. The method shows excellent agreement between DFT-calculated structural parameters and SW+KC-relaxed ones, which is further reflected in the bandstructure and the _interlayer binding energy_ with twist angle dependence. The force-field parameters along with a variety of relaxed structures can be found via Ref. [38]. We have further shown two methods for capturing moire induced fluctuations of local properties in lattice-matched systems that do not require extensive _ab initio_ treatment. These methods allow for visualization of the importance of relaxation effects and further serve as a first step in developing accurate moire potentials. However, further investigation is required to develop analogous tools for lattice-mismatched moire structures. In summary, starting from the force-field model, it is now possible to tackle excited state physics incorporating relaxation effects i.e. layer corrugation and atomic reconstruction. For models such as tight-binding, this was not possible before, and for _ab initio_ studies, the cumbersome first step of relaxation can be skipped, thus saving computational resources and time. 
Furthermore, a thorough dissection of the formation rate of domains with decreasing angle is required to gain quantitative insight into the mechanisms behind it. ###### Acknowledgements. The project is supported by the Deutsche Forschungsgemeinschaft (DFG) within the Priority Program SPP2244 2DMP and by the Cluster of Excellence "Advanced Imaging of Matter" of the Deutsche Forschungsgemeinschaft (DFG) - EXC 2056 - project ID 390715994.
2307.01771
AT2023fhn (the Finch): a Luminous Fast Blue Optical Transient at a large offset from its host galaxy
Luminous Fast Blue Optical Transients (LFBOTs) - the prototypical example being AT2018cow - are a rare class of events whose origins are poorly understood. They are characterised by rapid evolution, featureless blue spectra at early times, and luminous X-ray and radio emission. LFBOTs thus far have been found exclusively at small projected offsets from star-forming host galaxies. We present Hubble Space Telescope, Gemini, Chandra and Very Large Array observations of a new LFBOT, AT2023fhn. The Hubble Space Telescope data reveal a large offset (greater than 3.5 half-light radii) from the two closest galaxies, both at a redshift of 0.24. The location of AT2023fhn is in stark contrast with previous events, and demonstrates that LFBOTs can occur in a range of galactic environments.
A. A. Chrimes, P. G. Jonker, A. J. Levan, D. L. Coppejans, N. Gaspari, B. P. Gompertz, P. J. Groot, D. B. Malesani, A. Mummery, E. R. Stanway, K. Wiersema
2023-07-04T15:22:23Z
http://arxiv.org/abs/2307.01771v2
AT2023fhn (the Finch): a Luminous Fast Blue Optical Transient at a large offset from its host galaxy ###### Abstract Luminous Fast Blue Optical Transients (LFBOTs) - the prototypical example being AT 2018cow - are a rare class of events whose origins are poorly understood. They are characterised by rapid evolution, featureless blue spectra at early times, and luminous X-ray and radio emission. LFBOTs thus far have been found exclusively at small projected offsets from star-forming host galaxies. We present Hubble Space Telescope, Gemini, Chandra and Very Large Array observations of a new LFBOT, AT 2023fhn. The Hubble Space Telescope data reveal a large offset (\(>3.5\) half-light radii) from the two closest galaxies, both at redshift \(z\sim 0.24\). The location of AT 2023fhn is in stark contrast with previous events, and demonstrates that LFBOTs can occur in a range of galactic environments. keywords: supernovae:individual:AT 2023fhn - transients:supernovae - transients:tidal disruption events ## 1 Introduction The development of wide-field, high cadence and deep optical surveys in recent years - including the Zwicky Transient Facility (ZTF, Bellm et al., 2019), Asteroid Terrestrial-impact Last Alert System (ATLAS, Tonry et al., 2018), Panoramic Survey Telescope and Rapid Response System (PanSTARRS, Chambers et al., 2016), Gravitational-wave Optical Transient Observer (GOTO, Steeghs et al., 2022) and Black hole Gravitational-wave Electromagnetic counterpart array (BlackGEM, Bloemen et al., 2016), to name a few - is leading to ever more transient detections in the extremes of parameter space. This trend is set to continue with the Vera Rubin Observatory (LSST Science Collaboration et al., 2009). Such surveys led to the discovery of fast blue optical transients (FBOTs), first identified as a class by Drout et al. (2014) in Pan-STARRS1 data. FBOTs rise and fade on timescales of days, and have (early-time) \(g\)-\(r\) colours of -0.3 or bluer. These events also have featureless, black-body-like spectra at early times with inferred temperatures \(>10^{4}\) K (Pursiainen et al., 2018). It has since become clear that the majority are infant supernovae with low ejecta masses (Pursiainen et al., 2018), but a small number fade too rapidly to be powered by Ni-56 decay (faster than 0.2-0.3 magnitudes per day), have peak absolute magnitudes rivalling superluminous supernovae (\(<-20\)), and have accompanying luminous X-ray and radio emission. These bright, multi-wavelength FBOTs have been dubbed luminous-FBOTs (LFBOTs, Metzger, 2022), the first example of which is AT 2018cow ("the Cow", Prentice et al., 2018; Margutti et al., 2019; Perley et al., 2019). Since AT 2018cow, several other LFBOTs have been discovered (both in real time and archival searches), with varying degrees of multi-wavelength coverage. These include ZTF18abvkwla ("the Koala", Ho et al., 2020), CSS161010 (Coppejans et al., 2020), ZTF20acigmel ("the Camel", Perley et al., 2021; Bright et al., 2022; Ho et al., 2022c), AT 2020mrf (Yao et al., 2022) and AT 2022tsd ("the Tasmanian Devil", Ho et al., 2022a; Matthews et al., 2023). There are also a number of other lower-confidence candidates (e.g. Ho et al., 2022b; Jiang et al., 2022; Perley et al., 2023). Despite the growing number of LFBOT discoveries, these events are intrinsically rare - the volumetric rate of AT 2018cow-like LFBOTs is estimated to be no more than 0.1 per cent of the local supernova rate (Ho et al., 2023b). The nature of LFBOTs remains unclear.
The timescale of their light-curve evolution, X-ray and radio luminosity, late-time UV emission in the case of AT 2018cow (Sun et al., 2022, 2023; Chen et al., 2023a; Inkenhaag et al., 2023), and preference for star-forming dwarf and spiral hosts have proved challenging to explain with a single self consistent model. Circumstellar medium interactions around young supernovae are a plausible origin for the early-time spectra and X-ray/radio emission of some FBOTs (Pursaianen et al., 2018; Ho et al., 2023), as well as for the optical polarisation behaviour (Maund et al., 2023). However, the peak absolute magnitude, rapid subsequent fading, high radio/X-ray luminosity and peculiar optical and radio polarisation of LFBOTs (Huang et al., 2019; Maund et al., 2023) require an alternative explanation. Following AT 2018cow, a few main classes of model emerged. These include central engines born in low-ejecta core-collapse events, powered by black hole accretion or magnetar spin-down (e.g. Perley et al., 2019; Margutti et al., 2019); mergers of stellar-mass black holes and hydrogen-poor stars (e.g. Metzger, 2022); or the tidal disruption of a main sequence star (Perley et al., 2019) or white dwarf by an intermediate mass black hole (IMBH, Kuin et al., 2019). The former is motivated by the rapid light-curve decay and multi-wavelength evolution which severely limits the possible ejecta mass; the latter two also by the timescale - which is too fast for a supermassive black hole (SMBH) tidal disruption event (TDE) - and the weak (initially absent) hydrogen lines in the spectra. Many of these scenarios face challenges. For example, a magnetar central engine can power the early or late-time UV emission in AT 2018cow, but not both (Chen et al., 2023), while the environments of LFBOTs thus far - at small offsets within star-forming dwarfs and spirals, and with high circumstellar densities (Margutti et al., 2019) - favour a short-lived, massive star progenitor over an IMBH TDE. Further insight will come from similarly detailed studies of other LFBOTs, to establish which features are common to all objects in this class, and to understand the variety among them. In this letter, we present multi-wavelength observations of a new LFBOT, AT2023fhn ("the Finch"). The transient is significantly offset from the nearest galaxies, representing a deviation in terms of its environment from previous LFBOTs. This letter is structured as follows. In Section 2 we review how AT 2023fhn was discovered, and present early-time X-ray and radio observations. Section 3 presents follow-up observations, including _Hubble Space Telescope (HST)_ imaging and Gemini spectroscopy. In Section 4 we discuss possible interpretations, and conclusions are drawn in Section 5. We adopt a cosmology with H\({}_{0}=69.6\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.29\) and \(\Omega_{\Lambda}=0.71\)(Wright, 2006; Bennett et al., 2014). Uncertainties are given as 1\(\sigma\) unless otherwise stated, and magnitudes are quoted in the AB system (Oke and Gunn, 1982). ## 2 Discovery and classification ### Early photometry and spectra AT 2023fhn was discovered on 10 Apr 2023 with \(m(r)=19.74\) by ZTF (Fremling, 2023). The blue colour of \(g-r\sim-0.47\) and rapid \(\sim\)0.2 mag day\({}^{-1}\) evolution immediately classified AT 2023fhn as an LFBOT candidate. Ho et al. (2023) subsequently obtained Gemini GMOS-S spectroscopy of AT 2023fhn on 19-04-2023 (programing GS-2023A-Q-127), finding a featureless blue spectrum. 
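For reference, a minimal sketch of the apparent-to-absolute magnitude conversion is given below, assuming the cosmology adopted in Section 1, the redshift \(z\simeq 0.24\) of the nearby spiral reported in the next paragraph, and the simplest flat-spectrum K-correction; the apparent magnitude in the example call is illustrative only, not a measured value.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this letter (H0 = 69.6 km/s/Mpc, Omega_m = 0.29).
cosmo = FlatLambdaCDM(H0=69.6, Om0=0.29)

def absolute_magnitude(m_app, z):
    """Apparent to absolute magnitude with the simplest, flat-spectrum K-correction
    of 2.5 log10(1 + z); no host-extinction correction is applied."""
    mu = cosmo.distmod(z).value
    return m_app - mu + 2.5 * np.log10(1.0 + z)

# Illustrative call only (the apparent magnitude here is not a measured value):
# an m ~ 18.7 source at z = 0.238 comes out near M ~ -21.5.
print(absolute_magnitude(18.7, 0.238))
```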
On 20 Apr 2023 they obtained a spectrum of the nearby spiral galaxy (\(\sim\)5 arcsec offset), yielding a redshift of \(z\sim 0.24\). At this redshift, the earliest ZTF \(g\)-band (12 Apr 2023) absolute magnitude is -21.5. ### X-ray and radio observations We triggered _Chandra X-ray Observatory_ observations (PI: Chrimes; program 24500143; Obs ID 26624), which were obtained on 25 Apr 2023 (06:58:08 - 15:46:51 UT). The faint-mode ACIS-S exposure lasted 30 ks. The data were reduced and analysed with standard ciao (v4.13, caldb v4.9.3) procedures including reprocessing, filtering and source measurement with srcflux. Assuming a power-law with a photon index \(\Gamma=2\)(Rivera Sandoval et al., 2018; Matthews et al., 2023), the unabsorbed source flux after correction for the Galactic neutral hydrogen column density of \(NH=2.4\times 10^{20}\)cm\({}^{-2}\)(Kalberla et al., 2005) is \(7.6^{-1.8}_{-2.2}\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) (0.5-7.0 keV). At the redshift of the spiral, this corresponds to a luminosity of \(1.3^{-0.3}_{+0.4}\times 10^{42}\) erg s\({}^{-1}\), comparable to other LFBOTs at the same epoch (Rivera Sandoval et al., 2018; Margutti et al., 2019; Kuin et al., 2019; Coppejans et al., 2020; Bright et al., 2022; Yao et al., 2022; Matthews et al., 2023). Early radio observations (within a few weeks of discovery) produced non-detections, including a 10 GHz Northern Extended Millimeter Array upper limit of \(2\times 10^{29}\) erg s\({}^{-1}\) Hz\({}^{-1}\) on the luminosity (Ho, 2023), and upper limits from our own programme (SC240143, PI: Chrimes) on the Karl G. Jansky Very Large Array (VLA). We observed AT 2023fhn on 22 Apr 2023 (\(\approx 12\) days post detection) in standard phase-referencing mode using 3C286 as a flux density and bandpass calibrator, with J1014+2301 and J1016+2037 as complex gain calibrators. The observations were calibrated using the VLA Calibration Pipeline 2022.0.64 in CASA version 6.4.1 with additional manual flagging. We imaged the data using the task tclean in CASA with Briggs weighting with a robust parameter of 1. No significant emission was detected at the source location. We provide the upper-limits in Table 1. These early-time non-detections are consistent with the behaviour of previous LFBOTs. The transient was subsequently detected with the VLA on 15 Jun 2023 (Ho, 2023) with luminosity \(7.6\times 10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\) (at 10 GHz), also similar to other LFBOTs at the same epoch (e.g. Margutti et al., 2019; Coppejans et al., 2020). The rapid evolution (timescale of a few days) and peak optical absolute magnitude of -21.5 places AT 2023fhn firmly within the LFBOT region of timescale/peak luminosity parameter space (see Figures 3 and 14 of Ho et al., 2023). Along with the hot featureless optical spectrum, X-ray and radio detections, AT 2023fhn is unambiguously identified as a new AT 2018cow-like LFBOT. ## 3 Follow-up observations ### Hubble Space Telescope Imaging #### 3.1.1 Data reduction and photometry _HST_ WFC3/UVIS observations were taken with the F555W and F814W filters on 17 May 2023 (PI: Chrimes; proposal ID 17238). Three 364 s exposures with sub-pixel dithers were taken in each filter. The F555W exposures began 09:02:23 and ended \begin{table} \begin{tabular}{c c c c c c} \hline \hline Start date & Freq. & BW & T\({}_{\rm exp}\) & Upper-limit & Upper-limit \\ JD-2460056 & GHz & GHz & Min. 
& \(\mu\)Jy/beam & \(10^{28}\) erg s\({}^{-1}\) Hz\({}^{-1}\) \\ \hline 0.80733 & 1.50 & 1.024 & 35.9 & 130 & 22.5 \\ 0.78309 & 3.00 & 2.048 & 30.0 & 35 & 6.0 \\ 0.76507 & 6.05 & 2.048 & 21.0 & 18 & 3.1 \\ 0.74688 & 10.00 & 4.096 & 21.1 & 18 & 3.1 \\ 0.72090 & 15.02 & 6.144 & 30.1 & 11 & 1.9 \\ 0.69229 & 21.94 & 8.192 & 28.2 & 17 & 2.9 \\ 0.66552 & 32.94 & 8.192 & 25.4 & 25 & 4.3 \\ \hline \end{tabular} \end{table} Table 1: VLA flux density upper-limits. These are given as 3 times the local RMS. The third column lists the bandwidth. The final column lists limits on the luminosity, assuming a redshift of \(z=0.238\) (see Section 3.2). 09:23:41 UT, the F814W exposures began 09:25:31 and ended 09:48:13 UT. The _flc images were combined using astrodrizzle1(Fruchter & Hook, 2002), with px_frac = 0.8 and a final pixel scale of 0.025 arcsec pixel\({}^{-1}\). The transient is clearly identified in the reduced images, as shown in Figure 1. Two adjacent galaxies are fully resolved: a barred spiral to the south and a dwarf/irregular to the southeast. These galaxies have Sloan Digital Sky Survey (SDSS) data release 16 (Ahumada et al., 2020) IDs SDSS J100803.73+210422.5 and SDSS J100803.87+210425.8. We perform photometry on AT 2023fhn with three methods. The first two use standard photutils aperture photometry procedures in python (Bradley et al., 2021), but the background level is calculated in two ways. The first uses the MedianBackground estimator (using the whole image for the estimate). The second uses an annulus around the source (inner and outer radii of 1.5 and 4 times the aperture radius, and pixel values in the annulus clipped at \(\pm 3\sigma\)). For each of these background estimations, two aperture sizes are used - 0.2 and 0.4 arcsec - with the appropriate aperture corrections for F555W and F814W applied2. AB magnitudes are derived from the photrlam and photrlam header values and the published conversion procedures3. For the third method we use dolphot (v2.0, Dolphin, 2000). dolphot performs PSF photometry on each _flc image separately; these measurements are combined to give the reported value and its error. dolphot provides instrumental magnitudes in the Vega system, but we instead report AB magnitudes using conversions calculated with systemphot (STScI Development Team, 2020). Magnitude measurements for each combination of filter and methodology are given in Table 2. Smaller apertures and annulus background subtraction results in fainter magnitudes, indicative of the presence of diffuse emission around the transient (as can be seen in Figure 1, see insets). Footnote 1: Part of drizzlepac, [http://drizzlepac.stsci.edu/](http://drizzlepac.stsci.edu/) Footnote 2: [https://hubblesite.org/sites/wn/home/](https://hubblesite.org/sites/wn/home/) hst/instrumentation/wfc3/data-analysis/ photometric-calibration/wvis-encircled-energy Footnote 3: [https://hst-docs.stsci.edu/wfc3dhb/](https://hst-docs.stsci.edu/wfc3dhb/) chapter-9-wfc3-data-analysis/9-1-photometry #### 3.1.2 Galaxy offsets and enclosed flux radii The sky-projected spatial offset of a transient from its host is a key piece of information for understanding its origin. Host-normalised offsets, offsets divided by the half-light radius of the host, are widely used in the literature (see Figure 4) as they account for the projected extent of the host galaxy. 
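Referring back to the aperture photometry of Section 3.1.1, the annulus-background variant can be sketched as follows with photutils. The file name, source position and aperture radius are placeholders, the image is assumed to be in units consistent with the photometric keywords read from its header, and the encircled-energy (aperture) correction is omitted.

```python
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

# Hypothetical file name and source position; 0.2 arcsec = 8 pixels at 0.025"/pixel.
data = fits.getdata("finch_f555w_drz.fits")
hdr = fits.getheader("finch_f555w_drz.fits")   # assumed to carry PHOTFLAM / PHOTPLAM
x, y, r = 2048.0, 2048.0, 8.0

aperture = CircularAperture((x, y), r=r)
annulus = CircularAnnulus((x, y), r_in=1.5 * r, r_out=4.0 * r)

# Sigma-clipped median of the annulus pixels as the local background level.
ann_mask = annulus.to_mask(method="center")
ann_values = ann_mask.multiply(data)[ann_mask.data > 0]
_, bkg_median, _ = sigma_clipped_stats(ann_values, sigma=3.0)

flux = aperture_photometry(data, aperture)["aperture_sum"][0] - bkg_median * np.pi * r**2

# WFC3/UVIS AB zeropoint from the header photometric keywords; no encircled-energy
# (aperture) correction is applied in this sketch.
zp_ab = -2.5 * np.log10(hdr["PHOTFLAM"]) - 5.0 * np.log10(hdr["PHOTPLAM"]) - 2.408
mag_ab = -2.5 * np.log10(flux) + zp_ab
print(mag_ab)
```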
In order to measure the offsets and host-normalised offsets of AT 2023fhn from the two nearby galaxies, we measure their centroids and half-light radii \(r_{50}\) (from Petrosian profile fitting) using the python package stamorph(Rodriguez-Gomez et al., 2019). We require objects to have at least 5 adjacent pixels, each \(>\)1 \(\sigma\) above the background. The resultant segmentation maps are convolved with a uniform filter of size 10 pixels and these filtered segmentation maps are used to identify objects by requiring values \(>\) 0.5. Enclosed flux measurements are not restricted to the galaxy-associated pixels identified with this method; flux is measured out to \(r_{\rm max}\) which extends beyond the segmentation area to the faint outer regions (further than twice then Petrosian radius, for details see Rodriguez-Gomez et al., 2019). We note that the transient lies outside the pixels selected as associated with the galaxy in both cases. Segmentation maps, radial light profiles in the direction of \begin{table} \begin{tabular}{l c c c c} \hline \hline Filter & Method & Background & Aperture & m & \(\delta\)m \\ \hline F555W & photutils & Median & 0.2\({}^{\prime\prime}\) & 24.31 & 0.02 \\ F555W & photutils & Annulus & 0.2\({}^{\prime\prime}\) & 24.38 & 0.02 \\ F555W & photutils & Median & 0.4\({}^{\prime\prime}\) & 24.13 & 0.03 \\ F555W & photutils & Annulus & 0.4\({}^{\prime\prime}\) & 24.30 & 0.02 \\ F555W & dolphot & – & PSF & 24.57 & 0.01 \\ F814W & photutils & Median & 0.2\({}^{\prime\prime}\) & 24.17 & 0.03 \\ F814W & photutils & Annulus & 0.2\({}^{\prime\prime}\) & 24.27 & 0.02 \\ F814W & photutils & Median & 0.4\({}^{\prime\prime}\) & 23.94 & 0.04 \\ F814W & photutils & Annulus & 0.4\({}^{\prime\prime}\) & 24.11 & 0.03 \\ F814W & dolphot & – & PSF & 24.45 & 0.07 \\ \hline \end{tabular} \end{table} Table 2: _HST_ magnitudes \(m\), and their uncertainties \(\delta m\), for AT 2023fhn. In both filters, three photometry methods are listed - aperture photometry with median background estimation, aperture photometry with annulus background estimation, and dolphot. For the non- dolphot measurements, two aperture sizes (and hence enclosed energy corrections) are listed. Figure 1: _HST_ images of AT 2023fhn, indicated by red pointers, and the nearby host galaxy candidates. North is up and east is left in all images. The transient lies at a large offset from both the barred spiral to the south and the dwarf galaxy to the southeast. Smoothed and scaled 3.75\(\times\)3.75 arcsec cutouts around AT 2023fhn are shown in the inset panels. The diffuse emission northwest of the dwarf (satellite) galaxy is an alternative parent stellar population. the transient, and statmorph Sersic fits for the two galaxies in each filter, are provided in the associated github repository4. Footnote 4: [https://github.com/achrimes2/Finch](https://github.com/achrimes2/Finch) At \(z=0.238\) - the redshift of the spiral (and its satellite, see Section 3.2) - the physical scale is \(3.80\,\)kpc arcsec\({}^{-1}\). From the centre of the spiral, the projected offset of AT 2023fhn \(\delta r\) is \(16.51\pm 0.09\,\)kpc. From the centre of the satellite, the offset is \(5.35\pm 0.06\,\)kpc (uncertainties as described below). The non-parametric half-light radius r\({}_{\rm 50}\) (enclosing 50 per cent of the flux, \(r_{\rm 50}\)) is measured to be \(4.5\pm 0.2\,\)kpc in F555W for the spiral. 
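Using the spiral numbers just quoted (\(\delta r=16.51\pm 0.09\) kpc and \(r_{50}=4.5\pm 0.2\) kpc in F555W), the host-normalised offset and a simple propagation of the uncertainties, assumed independent, reduce to a few lines:

```python
import numpy as np

def host_normalised_offset(dr, sigma_dr, r50, sigma_r50):
    """r_n = dr / r50, with the two (assumed independent) relative uncertainties
    added in quadrature."""
    rn = dr / r50
    sigma_rn = rn * np.hypot(sigma_dr / dr, sigma_r50 / r50)
    return rn, sigma_rn

rn, sigma_rn = host_normalised_offset(16.51, 0.09, 4.5, 0.2)
print(f"r_n = {rn:.1f} +/- {sigma_rn:.1f}")   # ~3.7 +/- 0.2, as quoted for the spiral in F555W
```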
Given the satellite's ellipticity of 0.4 and the orientation of AT 2023fhn, we take r\({}_{\rm 50}\) along the semi-major axis, which is \(1.48\pm 0.10\,\)kpc in F555W. In F814W, these values are \(3.90\pm 0.13\,\)kpc and \(1.29\pm 0.10\,\)kpc, respectively. This corresponds to host-normalised offsets (\(r_{\rm n}=\delta r/r_{\rm 50}\)) of \(3.7\pm 0.2\) and \(3.6\pm 0.2\) in F555W, while in F814W, \(r_{\rm n}=4.25\pm 0.14\) and \(4.1\pm 0.3\) (for the spiral and satellite respectively). The quoted offset uncertainties are the quadrature sum of the transient positional uncertainty (given by FWHM/(2.35\(\times\)SNR), where FWHM is the full-width at half-maximum and SNR the signal-to-noise ratio) and the uncertainty on the galaxy centroids (\(x_{\rm c}\),\(y_{\rm c}\)). The centroid uncertainties are calculated by re-sampling the input _r_idc image set 100 times using their [ERR] extensions, re-drizzling each re-sampled set, and measuring the morphological properties with statmorph on each iteration of the re-drizzled image (see Lyman et al., 2017; Chrimes et al., 2019). The mean and standard deviation of the resultant \(x_{\rm c}\), \(y_{\rm c}\) and r\({}_{\rm 50}\) distributions are used, along with the AT 2023fhn positional uncertainties, to calculate the values and their uncertainties quoted above. #### 3.1.3 Search for underlying and extended emission Given the apparently isolated location of AT 2023fhn, it is prudent to search for any underlying (extended) emission at the transient location, such as a knot of star formation, cluster or background galaxy. To establish whether the emission is unresolved, we first select a reference point source in the image (the object at coordinates \(\alpha=10\)h08m03.13s, \(\delta=+21\)d04m22.8s). Cutouts around AT 2023fhn and the reference star are interpolated onto a pixel grid with twice the resolution (enabling sub-pixel shifts), before subtraction of the reference image from the one containing AT 2023fhn. The reference is scaled in peak flux and shifted in \(x\),\(y\) to minimize the standard deviation at the location of AT 2023fhn in the residual image. The transient, reference and residual images are shown in Figure 2. To determine if the residuals are consistent with a clean point source subtraction, we perform photutils aperture photometry (with an annulus) as described above. No significant residual flux is detected, demonstrating that any underlying (non-transient) source contributing significantly to the flux must be precisely co-located and also unresolved (the physical scale at this distance is \(95\,\)pc pixel\({}^{-1}\)). Making use of BPASS (Binary Population and Spectral Synthesis v2.2, Eldridge et al., 2017; Stanway and Eldridge, 2018) synthetic spectra, we calculate the maximum mass of a stellar cluster which can be present at the location of AT 2023fhn, without exceeding the observed luminosity in either F555W or F814W. We find that the maximum possible mass of an unresolved cluster rises with population age, from \(3\times 10^{6}\)M\({}_{\odot}\) at \(10^{6}\) yr to \(\sim 10^{9}\)M\({}_{\odot}\) at \(10^{10}\) yr. Therefore, the presence of a typical stellar cluster - at any age - cannot be ruled out. To search for extended emission, we smooth the images with a Gaussian filter (\(\sigma=1.5\)) and scale them to show diffuse background light. The inset panels of Figure 1 show cutouts of the smoothed and scaled images. Faint emission can be seen extending northwest of the satellite, plausibly a tidal stream. 
The surface brightness near the transient location (measured in a 1 arcsec radius around AT 2023fhn) is \(25.2\,\)mag arcsec\({}^{-2}\) in F555W and \(24.6\,\)mag arcsec\({}^{-2}\) in F814W. ### Gemini spectroscopy We obtained two epochs of Gemini/GMOS-S spectroscopy on 22/23 Apr 2023 and 12 May 2023, \(\sim\)10 and \(\sim\)26 days post discovery respectively (PI: Chrimes, programme GS-2023A-DD-102). The first epoch consisted of 4\(\times\)500s exposures with the R400 grating, 1 arcsec slit width and two central wavelengths (two exposures at 520 nm and two at 565 nm). The second epoch consisted of 4\(\times\)1845s exposures with the R400 grating, 1 arcsec slit and central wavelength 675 nm. Data reduction was performed using the python package dragons (Labrie et al., 2019). Associated arcs, flats and bias frames were taken as part of the programme. Sky lines and unusable regions (e.g. due to the amplifier 5 failure5) are manually masked. We bin the pixels by a factor of 6 along the wavelength axis to increase the signal-to-noise ratio, and combine the 520 nm and 565 nm centred spectra by taking the Figure 3: Upper panel: the background-subtracted spectrum of AT 2023fhn obtained with Gemini/GMOS-S on 22/23 Apr 2023, \(\sim\)10 rest-frame days post-discovery, and shifted into the transient rest-frame. A black-body fit returns \(T=24.8^{+2.4}_{-2.3}\times 10^{3}\) K. Background traces are shown in grey. Lower panel: a spectrum of the satellite galaxy. A robust detection of the H\(\alpha\) emission line at \(z=0.238\pm 0.004\) confirms an association with the adjacent spiral. Figure 2: Subtraction of a reference star at the location of AT 2023fhn. The 2\(\times\)2 arcsec cutouts show the transient (left), the reference star (middle) and the residual (right), after interpolating onto a finer pixel scale and subtraction of the shifted and vertically scaled reference star. The emission is consistent with being a point source. mean where they overlap. We correct for Galactic extinction by adopting \(E(B-V)=0.025\) (Schlafly and Finkbeiner, 2011), and calculate the extinction at each wavelength with the python extinction (Barbary, 2016) module assuming \(R_{\rm V}=3.1\). For flux calibration, spectro-photometric standard stars observed with the closest-matching set-up were found in the Gemini archive. For the 520 nm data we use spectra of EG274 (programme GS-2023A-FT-205), for the 565 nm data we use LTT6248 (GS-2022A-Q-315) and for the 675 nm data we use LTT1020 (GS-2022B-Q-126). The final extinction-corrected spectra are plotted in Figure 3. In our first epoch of spectroscopy (22/23 Apr), AT 2023fhn is detected as shown in Figure 3. Fitting a black-body to the Galactic extinction-corrected, rest-frame spectrum yields a temperature of \(24.8^{+2.4}_{-2.3}\times 10^{3}\) K (\(\chi^{2}_{\nu}=3.66\) with 282 degrees of freedom, where uncertainties are derived from the local standard deviation of the spectrum). This compares with a temperature of \(17.5^{+1.2}_{-1.0}\times 10^{3}\) K derived from FORS2 photometry taken on the following night (Wise and Perley, 2023). The large \(\chi^{2}_{\nu}\) is likely due to correlated, systematic errors (e.g. from imperfect flux calibration) that have not been accounted for. A power-law produces a fit of similar quality - taking \(F_{\lambda}\propto\nu^{2-\beta}\), we find a best-fit power-law index \(\beta=-1.24^{+0.06}_{-0.09}\), with \(\chi^{2}_{\nu}=3.63\). 
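A minimal sketch of the black-body fit described above is given below, assuming `wave` (rest-frame wavelength in Angstroms), `flux` and `flux_err` are arrays for the binned, extinction-corrected spectrum; the initial guesses and unit conventions are illustrative rather than those of the actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23             # SI constants

def bb_flambda(wave_angstrom, T, A):
    """Scaled Planck function B_lambda(T); A absorbs radius/distance factors."""
    lam = wave_angstrom * 1e-10                        # Angstrom -> m
    return A * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))

# Fit temperature and normalisation to the rest-frame spectrum.
p0 = [2.0e4, np.median(flux) / bb_flambda(np.median(wave), 2.0e4, 1.0)]
(T, A), cov = curve_fit(bb_flambda, wave, flux, p0=p0, sigma=flux_err)
print(f"T = {T / 1e3:.1f} +/- {np.sqrt(cov[0, 0]) / 1e3:.1f} x 10^3 K")
```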
Nevertheless, temperatures of \(\sim\)20\(\times 10^{3}\) K are comparable to AT 2018cow, which had a black-body temperature of \(19.3^{+0.7}_{-0.8}\times 10^{3}\) K at a similar rest-frame epoch (Prentice et al., 2018). No correction for host-intrinsic extinction has been made; however, as revealed in the _HST_ imaging, the transient appears to be far away from any significant sources of dust, as it lies outside the bulk of the optical light of both nearby galaxies. In the second epoch of spectroscopy (12 May) the transient had faded sufficiently to result in a non-detection, with an upper limit on H\(\alpha\) emission at its location (taking an aperture with the same radius as the seeing) of \(<1.2\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\). The slit was also placed on the edge of the satellite galaxy. From the centroid and width of the H\(\alpha\) line, we derive a redshift \(z=0.238\pm 0.004\), consistent with the spiral redshift of \(\sim 0.24\) reported by Ho et al. (2023), and backing up the satellite interpretation for this galaxy. We have adopted \(z=0.238\) for all relevant calculations in this letter. ## 4 Discussion All published LFBOTs to date have occurred in star-forming dwarfs (the Koala, CSS161010, the Camel, AT 2020mrf, Ho et al., 2020; Coppejans et al., 2020; Perley et al., 2021; Yao et al., 2022) or spirals (the Cow, Prentice et al., 2018; Lyman et al., 2020). AT 2023fhn also has a star-forming host, assuming one of the spiral or dwarf (both are strong H\(\alpha\) emitters) is the galaxy of origin. However, in contrast with LFBOTs so far, it lies far away from the bulk of the host light for either choice of host galaxy. Such offsets are atypical for core-collapse transients due to the short lifetimes (10-100 Myr) of the progenitor stars. Figure 4 compares the physical projected offsets and host-normalised offsets of a range of transients compiled from the literature, including long gamma-ray bursts (LGRBs), short gamma-ray bursts (SGRBs), superluminous supernovae (SLSNe), other core-collapse supernovae (CCSNe), fast radio bursts (FRBs), Ca-rich and type Ia SNe. The host offsets of four previous LFBOTs are also shown (\(r_{n}\) values were not reported for these events). AT 2023fhn lies much further out from its host than other LFBOTs to date. To quantify this, we randomly draw 5 (the number of LFBOTs with host offset measurements in Fig. 4) offsets from the Schulze et al. (2021) CCSN distribution \(10^{4}\) times, and calculate the frequency with which at least one of these lies at 5.35 (16.51) kpc or greater (for the satellite and spiral respectively). For the satellite, this occurs in 85 per cent of random draws, for the spiral it occurs in 13 per cent. In terms of host-normalised offset, only \(\sim\)1 per cent of CCSNe occur at higher offsets than AT 2023fhn. In all 4 combinations of filter and galaxy choice, the transient lies outside the pixels selected as associated with the galaxies, therefore (by definition) the transient will have a fraction of light (Fruchter et al., 2006) value \(\rm F_{light}=0\) in both filters. This is unlikely but not unprecedented for core-collapse events; a few per cent of CCSNe have \(\rm F_{light}=0\) (Svensson et al., 2010). Therefore, a core-collapse origin cannot be ruled out. 
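The resampling test above can be reproduced with a few lines of numpy; `ccsn_offsets_kpc` stands in for the Schulze et al. (2021) CCSN offset sample, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
ccsn_offsets_kpc = np.loadtxt("schulze2021_ccsn_offsets.txt")   # placeholder file name

def frac_at_least_one_beyond(offsets, threshold_kpc, n_draw=5, n_trials=10_000):
    """Fraction of trials in which at least one of n_draw random offsets exceeds threshold_kpc."""
    draws = rng.choice(offsets, size=(n_trials, n_draw), replace=True)
    return np.mean(draws.max(axis=1) >= threshold_kpc)

print(frac_at_least_one_beyond(ccsn_offsets_kpc, 5.35))    # offset from the satellite
print(frac_at_least_one_beyond(ccsn_offsets_kpc, 16.51))   # offset from the spiral
```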
If originating at a lower offset, time-of-travel arguments require a massive star with velocity \(\gtrsim\)50/350 km s\({}^{-1}\) for the spiral/satellite, assuming a long-lived 100 Myr-old progenitor (Eldridge et al., 2019) and an origin at \(\sim\)\(r_{50}\). Only a small fraction of massive stars have such high velocities (e.g. Portegies Zwart, 2000; de Wit et al., 2005; Figure 4: The cumulative offset and host-normalised offset distributions of a variety of transients, and the offset of AT 2023fhn from the spiral (thick black vertical lines) and its satellite (narrow vertical lines) - solid lines represent F555W, dashed lines F814W. The four previous LFBOT offsets are from Prentice et al. (2018, the Cow), Ho et al. (2020, the Koala), Coppejans et al. (2020, CSS161010) and Yao et al. (2022, AT 2020mrf). The comparison distributions are from Blanchard et al. (2016); Lyman et al. (2017, LGRBs), Lunnan et al. (2015); Schulze et al. (2021, SLSNe), Kelly and Kirshner (2012); Schulze et al. (2021, CCSNe), Bhandari et al. (2022, FRBs), Wang et al. (2013, type Ia SNe), Lunnan et al. (2017); De et al. (2020, Ca-rich SNe) and Fong et al. (2022, SGRBs). Also shown is the globular cluster (GC) offset distribution around M81 (Lomelí-Núñez et al., 2022). Eldridge et al., 2011; Renzo et al., 2019; Chrimes et al., 2023). The delayed mergers of compact objects can also achieve high offsets (i.e. SGRBs), but the luminosity, spectra and rapid evolution of LFBOTs effectively rule out an association with even the most extreme of these transients (e.g. Kann et al., 2011; Sarin et al., 2022). Since no spectroscopic redshift for the transient has been measured, we consider the probability of a chance alignment P\({}_{\rm chance}\) between AT 2023fhn and the two galaxies (following Bloom et al., 2002; Berger, 2010). P\({}_{\rm chance}\) is calculated using SDSS DR16 \(r\)-band magnitudes for the spiral and satellite, which are \(18.94\pm 0.02\) and \(22.61\pm 0.14\), respectively. For the spiral we find P\({}_{\rm chance}=0.78\) per cent, and for the satellite P\({}_{\rm chance}=1.38\) per cent. Therefore, AT 2023fhn is likely associated with one of the two galaxies. As shown in the inset panels of Figure 1, the progenitor may have originated in a faint tidal stream or spiral arm. Based on our early-time radio and H\(\alpha\) upper limits (Sections 2 and 3.2), and using the star formation rate (SFR) calibrations of Murphy et al. (2011), we derive 3 \(\sigma\) upper limits on the underlying SFR at the location of AT 2023fhn of \(\sim\)6 M\({}_{\odot}\)yr\({}^{-1}\) (at 6.05 GHz, the strongest radio constraint) and \(\sim\)0.1 M\({}_{\odot}\)yr\({}^{-1}\) (H\(\alpha\)). The F555W (rest-frame \(\sim\)B-band) surface brightness of 25.2 mag arcsec\({}^{-2}\) (Sec. 3.1.3) is among the faintest \(\sim\)2 per cent of (\(u\)-band) local surface brightnesses for CCSNe (Kelly & Kirshner, 2012). Unless the population is extremely young, adjusting for the \(B\)-band/\(u\)-band discrepancy would give an even fainter surface brightness (due to lower flux blue-wards of the Balmer break). An IMBH TDE explanation requires an underlying cluster, since a dense stellar environment is necessary to make encounters likely (e.g. Ye et al., 2023). As shown in Section 3.1.3, a cluster at the location of AT 2023fhn cannot be ruled out. 
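For reference, the chance-alignment probability quoted earlier in this section can be sketched as below, using the galaxy number counts adopted by Bloom et al. (2002). Conventions for the effective radius differ between studies (the projected offset is used directly here), so the numbers will not reproduce the quoted values exactly.

```python
import numpy as np

def p_chance(r_arcsec, m_r):
    """Chance-alignment probability within r_arcsec of a galaxy with r-band magnitude m_r
    (Bloom et al. 2002, with the Hogg et al. 1997 number counts)."""
    sigma = 10 ** (0.33 * (m_r - 24.0) - 2.44) / (0.33 * np.log(10))   # galaxies per arcsec^2
    return 1.0 - np.exp(-np.pi * r_arcsec**2 * sigma)

print(p_chance(16.51 / 3.80, 18.94))   # spiral: offset converted from kpc to arcsec
print(p_chance(5.35 / 3.80, 22.61))    # satellite
```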
At \(z\sim 0.24\), even the brightest and largest globular clusters (GCs) would have optical apparent magnitudes of \(\sim\)30 - far fainter than the source in the _HST_ images - and angular extents too small to be resolved (Harris, 2010). Finally, we compare the offset of AT 2023fhn from the spiral with the distribution of GCs around M81 (which has a similar physical size and morphology), using the Sersic distribution of Lomelí-Núñez et al. (2022) (see also Perelmutter & Racine, 1995). The GC offsets, and distribution normalised by the F555W half-light radius of the spiral, are shown in Figure 4. Only 0.5 per cent of GCs occur at the offset of AT 2023fhn or higher. While unlikely based on this statistic, the lack of strong photometric constraints means that an origin in a globular cluster is also not ruled out. ## 5 Conclusions In this letter, we have presented _HST_, Gemini, Chandra and VLA observations of AT 2023fhn, the first LFBOT to lie at a large offset from its host galaxy. Although the location is more representative of other transient types, given the offset, local surface brightness, limit on star formation and constraints on an underlying cluster, we cannot rule out a massive star progenitor. Likewise, a tidal disruption event in an unseen cluster cannot be ruled out. Environmental studies are needed for a population of LFBOTs to determine if AT 2023fhn is a significant outlier. Late-time imaging will put further constraints on the underlying stellar population, while detailed modelling of the spectra and multi-wavelength light-curve is needed to reveal more about the origin of this enigmatic transient. ## Acknowledgements This work is part of the research programme Athena with project number 184.034.002, which is (partly) financed by the Dutch Research Council (NWO). This research has made use of computing facilities provided by the Scientific Computing Research Technology Platform of the University of Warwick. Observations analysed in this work were taken by the NASA/ESA Hubble Space Telescope under program 17238. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application of the CIAO package (Fruscione et al., 2006). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Based on observations obtained at the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership. Finally, we thank the anonymous referee for their helpful feedback on this manuscript. ## Data Availability The data used are available upon request. Scripts and parameter files are available at [https://github.com/achrimes2/Finch](https://github.com/achrimes2/Finch).
2305.14263
LIMIT: Language Identification, Misidentification, and Translation using Hierarchical Models in 350+ Languages
Knowing the language of an input text/audio is a necessary first step for using almost every NLP tool such as taggers, parsers, or translation systems. Language identification is a well-studied problem, sometimes even considered solved; in reality, due to lack of data and computational challenges, current systems cannot accurately identify most of the world's 7000 languages. To tackle this bottleneck, we first compile a corpus, MCS-350, of 50K multilingual and parallel children's stories in 350+ languages. MCS-350 can serve as a benchmark for language identification of short texts and for 1400+ new translation directions in low-resource Indian and African languages. Second, we propose a novel misprediction-resolution hierarchical model, LIMIT, for language identification that reduces error by 55% (from 0.71 to 0.32) on our compiled children's stories dataset and by 40% (from 0.23 to 0.14) on the FLORES-200 benchmark. Our method can expand language identification coverage into low-resource languages by relying solely on systemic misprediction patterns, bypassing the need to retrain large models from scratch.
Milind Agarwal, Md Mahfuz Ibn Alam, Antonios Anastasopoulos
2023-05-23T17:15:43Z
http://arxiv.org/abs/2305.14263v2
LIMIT: Language Identification, Misidentification, and Translation using Hierarchical Models in 350+ Languages ###### Abstract Knowing the language of an input text/audio is a necessary first step for using almost every natural language processing (NLP) tool such as taggers, parsers, or translation systems. Language identification is a well-studied problem, sometimes even considered _solved_; in reality, most of the world's 7000 languages are not supported by current systems. This lack of representation affects large-scale data mining efforts and further exacerbates data shortage for low-resource languages. We take a step towards tackling the data bottleneck by compiling a corpus of over 50K parallel children's stories in 350+ languages and dialects, and the computation bottleneck by building lightweight hierarchical models for language identification. Our data can serve as benchmark data for language identification of short texts and for understudied translation directions such as those between Indian or African languages. Our proposed method, Hierarchical LIMIT, uses limited computation to expand coverage into excluded languages while maintaining prediction quality.1 Footnote 1: Data and code are available on [https://github.com/magarw/limit](https://github.com/magarw/limit) ## 1 Introduction Building natural language processing (NLP) tools like machine translation, language identification, part of speech (POS) taggers, etc. increasingly requires more and more data and computational resources. To attain good performance on a large number of languages, model complexity and data quantity must be substantially increased. However, for low-resource languages, large amounts of data are often unavailable which creates a high barrier of entry for a majority of the world's 7000 languages. Increasing model complexity for large-scale models also requires disproportionate amount of computational resources, further disincentivizing researchers to work towards including these languages in modern NLP systems. A popular data collection approach is large-scale web mining Tiedemann and Nygaard (2004); Banon et al. (2020); Schwenk et al. (2021), where large parts of the internet are scoured to find training data for data-hungry NLP algorithms. When faced with a piece of text (ex. a sentence or a phrase), such algorithms must know how to reliably sort this text into the appropriate language bucket. Since the web is replete with content in a variety of languages, a model needs to recognize text in a sufficiently large number of these languages with high accuracy. Identifying parallel bitext is even more demanding as some translation models must also be available to correctly identify and align parallel data Vegi et al. (2022); Kunchukuttan et al. (2018). This data-collection paradigm becomes inaccessible for low-resource languages because high-quality translation models usually require substantial amounts of parallel data for training, which is often unavailable. Without good quality language identification and translation tools, it becomes impractical to mine the internet for relevant text during such collection efforts. Figure 1: Most languages in our dataset are from the Indian Subcontinent and Sub-Saharan Africa, with significant minorities from Europe (primarily in the role of the high-resource language parallel translation available for each story). 
Color broadly indicates continent or region (North America, South America, Africa, Europe, Asia, Oceania) and size indicates number of languages per country in our dataset. Low-quality language identification and machine translation plague low-resource languages disproportionately, the very languages that need large-scale resource creation efforts the most (Jauhiainen et al., 2019; Schwenk et al., 2021). Additionally, mispredictions by language identification and data collection algorithms can increase inter-class noise, reducing the crawled data's quality and harming performance in downstream tasks without strong quality evaluation metrics (Kocyigit et al., 2022). How can we better understand the errors made by such models? And, how can we correct mispredictions to improve accuracy in supported languages with limited data and sustainably trained models? Can we expand language coverage of current models without complete retraining? How can this be done without compromising performance on already supported languages? To tackle data scarcity in low-resource languages, we share a parallel children's stories dataset created using two resources: African Storybooks Initiative2 and Indian non-profit publishing outfit Pratham Books' digital repository Storyweaver3 (data available under appropriate permissive Creative Commons licenses). The combined dataset includes original and human-translated parallel stories in over 350 languages (visualized in Figure 1) and we merge, preprocess, and structure it so it is easily utilizable by NLP researchers for training and benchmarking (Section 2). Footnote 2: [https://www.africanstorybook.org/](https://www.africanstorybook.org/) Footnote 3: [https://storyweaver.org.in/](https://storyweaver.org.in/) Armed with parallel stories in many low-resource African and Indian languages, we utilize a pre-trained multilingual translation model (Alam and Anastasopoulos, 2022) and continue training with hierarchical language-level and language family-level adapter units to translate children's stories at the page-level (Section 3). By leveraging hierarchically organized adapter units on top of a root translation model, we save computational resources, while expanding machine translation into many new and understudied language pairs (especially those between two low-resource languages), creating new benchmarks for the story translation domain, as well as evaluating our models on the FLORES (NLLB Team et al., 2022) benchmark. To use this diverse linguistic data judiciously, we also propose hierarchical models to resolve confusion in language identification systems. The proposed approach is exciting because, unlike previously published language identification models such as AfroLID (Adebara et al., 2022), CLD3 (Salcianu et al., 2020) and Franc,4 it avoids training large multilingual models for a new set of languages and still outperforms existing systems. In contrast with other recent work in hierarchical language identification (Goutte et al., 2014; Lui et al., 2014; Bestgen, 2017; Jauhiainen et al., 2019), our work stands out because it accounts for _mispredictions_ made by existing trained models. It does not predict a group/language family first, but rather directly learns confusion relationships between language pairs (which may not be from the same language family). Footnote 4: [https://github.com/wooorm/franc/](https://github.com/wooorm/franc/) We leverage lightweight, hierarchical classification units to improve linguistic diversity and performance of a root system. 
This is made possible by analyzing the root model's mispredictions and identifying commonly confused language clusters. Furthermore, this confusion-based hierarchical model approach is applicable to both a model's supported languages and unsupported languages subject to availability of some training data (Section 4). To summarize, our main contributions are: 1. We compile a dataset of 50K+ parallel children's stories from African Storybooks Initiative and Storyweaver in 350+ languages. (SS2) 2. We perform machine translation experiments with hierarchical adapter-based multilingual translation models. Our benchmark data enables translation evaluation in more than 1400 new translation directions (SS3) 3. We propose a _misidentification_ based hierarchical model whose units act as an alternative to large and expensive multilingual models for low-resource languages. Using these, we expand language identification coverage without retraining entire models from scratch. (SS4) ## 2 Data Curation We identify two large-scale parallel repositories - African Storybooks Initiative and Pratham Books' Storyweaver, both under permissive Creative Commons Licenses, with their storybooks available for non-commercial and research use. As the name suggests, African Storybooks Initiative focuses on children's stories in languages and dialects from Africa, and hosts parallel translated and human-verified children's stories in over 200 African languages. Pratham Books is a non-profit Indian publisher that aims to increase literacy of children and adults alike in Indian languages. Their digital repository, Storyweaver, publishes parallel translated stories in 300+ languages. This includes not only Indian languages but also African, European, and Indigenous languages from the Americas. ### Parallel Dataset We collect stories through a mix of web scraping and public APIs, preprocess them to remove mismatched/incorrect text, extract monolingual text for language identification and parallel text for machine translation. We maintain metadata about authors, translators, illustrators, reading level, parallel translations, and copyrights for each story. We remove stories that are either empty or those from non-English languages that have over 50% of pages classified as containing English text with 90% confidence using langdetect(Nakatani, 2010). This leaves us with \(\sim\)52K stories. ### Multilingual Documents The dataset also contains multilingual stories with language identifiers denoted by \(L_{1}\_L_{2}\) for a story multilingual in \(L_{1}\) and \(L_{2}\). Such stories include text in multiple languages within the same page. This text may be code-mixed or consecutively presented. In order to extract as many parallel sentences as possible to support vulnerable languages and also create new translation directions, we employ string-similarity based matching to identify the segments corresponding to the high-resource language in the pair, and therefore automatically generate over 10K new parallel pages across 52 languages. This was facilitated by the highly parallel nature of the dataset and the guaranteed occurrence of high-resource language translations for each story. For example, through this process, we extracted 1000+ sentences in Kui (0 sentences pre-extraction), a minority Dravidian language with about 900K native speakers. 
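A minimal sketch of the string-similarity matching used for this extraction is shown below; it assumes each multilingual page is available as a list of text segments together with the corresponding high-resource translation (`hr_sentences`), and the 0.8 threshold is an illustrative choice rather than the value used in the released pipeline.

```python
from difflib import SequenceMatcher

def split_multilingual_page(segments, hr_sentences, threshold=0.8):
    """Label each segment of an L1_L2 page as high-resource (matched) or low-resource."""
    high, low = [], []
    for seg in segments:
        best = max(SequenceMatcher(None, seg.lower(), s.lower()).ratio()
                   for s in hr_sentences)
        (high if best >= threshold else low).append(seg)
    return high, low   # the low-resource side pairs with the story's parallel translation
```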
These extracted sentences can be used for language identification training as a monolingual seed corpus and for translation since the sentences are parallel with Odia (the official language in Odisha, where Kui is spoken). ### Language Varieties and Dialects We attempt to separate language varieties into unique prediction classes if there is sufficient training data for them, setting a cuttoff at 1000 sentences. If a language ISO code is available for the variety, it is used. Otherwise, we assign a class name with the ISO code and the subdivision specified as follows - ISO_subdivision. For instance, we separated Gondi's South Bastar variety (gon_bastar, 4000+ sentences) from the generic language code for Gondi (gon). For fair evaluation and comparison, we provide manual mappings from various language identification tools' (CLD3, Franc, Langid.py (Lui and Baldwin, 2012)) outputs to our output class space during inference. Language varieties/dialects with no unique ISO code and with little data are naturally merged according to their parent language's ISO code. For example, "Bangla (Banglades)" and "Bengali" are merged since Bangladesh's Bengali variety doesn't have a unique ISO code and it doesn't have over 1000 sentences in the dataset. Here "Bengali" \begin{table} \begin{tabular}{l r r} \hline \hline **Family** & **Languages** & **Sentences** \\ \hline Niger-Congo & 129 & 142605 \\ Indo-European & 84 & 169823 \\ Nilo-Saharan & 22 & 23204 \\ Sino-Tibetan & 21 & 19264 \\ Austronesian & 18 & 28096 \\ Afro-Asiatic & 15 & 20266 \\ Dravidian & 13 & 35638 \\ Austro-Asiatic & 10 & 22989 \\ Otomanguean & 9 & 6761 \\ Creole & 8 & 1037 \\ Mayan & 7 & 1379 \\ Turkic & 5 & 5970 \\ Uto-Aztecan & 4 & 7245 \\ Mixe-Zoquean & 3 & 2005 \\ \hline \hline \end{tabular} \end{table} Table 1: Some key language families with 1000+ sentences across languages in the combined African Storybooks and Storyweaver data. \begin{table} \begin{tabular}{l r r} \hline \hline **Script** & **Languages** & **Examples** \\ \hline Devanagari & 38 & Hindi, Marathi \\ Cyrillic & 14 & Russian, Bulgarian \\ Arabic & 8 & Arabic, Persian \\ Tibetan & 3 & Tibetan, Ladakhi \\ Telugu & 3 & Telugu, Konda \\ Odia & 3 & Odia, Ho, Kui \\ \hline \hline \end{tabular} \end{table} Table 2: Some prominent non-Latin writing systems in the combined African Storybooks and Storyweaver data doesn't specifically refer to Indian Bengali but to broader Bengali text with many dialects included (i.e. where the author/translator didn't specify a specific dialect). Both are assigned the ISO code ben and their stories merged. Note that we perform such merges as the last step of the data processing pipeline and unmerged stories with complete metadata are made available. A full list of these transformations with explanations is located in our GitHub repository. ### Data Overview The combined data covers over 350 languages from a diverse pool of language families. In Table 1, we share the number of languages and the number of sentences in each language family in the dataset. The data is roughly evenly split between stories from the large Niger-Congo and Indo-European language families, with a sizeable minority in other language families like Nilo-Saharanan, Sino-Tibetan, Austronesian, Dravidian, Creole, etc. An exhaustive list of all ISO codes and language variety-specific codes used for language identification and machine translation tasks is available on our GitHub repository. 
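The class-assignment rule from the Language Varieties and Dialects subsection above can be summarised as a small helper; the 1000-sentence cutoff follows the text, while the argument names are hypothetical.

```python
def assign_class(variety_iso, parent_iso, subdivision, n_sentences, cutoff=1000):
    """Return the prediction class for a language variety."""
    if n_sentences >= cutoff:
        if variety_iso:                              # variety has its own ISO code
            return variety_iso
        return f"{parent_iso}_{subdivision}"         # e.g. gon_bastar
    return parent_iso                                # merge into the parent language

assert assign_class(None, "gon", "bastar", 4000) == "gon_bastar"
assert assign_class(None, "ben", "bangladesh", 300) == "ben"
```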
Compared to the most multilingual existing translation benchmarks like NTREX (parallel data of 128 languages with English; Federmann et al., 2022), FLORES-200 (\(n\)-way, 200 languages; NLLB Team et al., 2022), or OPUS-100 (parallel data for 99 languages to/from English; Aharoni et al., 2019), our benchmark introduces up to 82 new languages leading to more than 1400 new language pairs (see Table 3). About 70% of the dataset's languages use the Latin script or its extended variants with diacritics, in line with global adoption and usage of the Latin script. However, the data is quite typographically rich, and stories with non-Latin scripts are in abundance, enumerated in Table 1. Details to reproduce raw data, intermediate preprocessing, and the merged data can be found in Appendix A.1. ## 3 Machine Translation Benchmark To test whether our dataset can improve machine translation performance, we perform experiments with hierarchical adapters units and provide new baselines among low-resource African languages. ### Experimental Settings As our baseline, we used the model from Alam and Anastasopoulos (2022), which is the best-performing publicly available model from the WMT Shared Task on Large Scale Evaluation for African Languages (Adelani et al., 2022).5 They first fine-tuned the DeltaLM6 model (Ma et al., 2021) in 26 languages. After that, they added lightweight language-specific adapter layers (Pfeiffer et al., 2022) and fine-tuned only the adapters in those 26 languages. We can either use a single adapter per language (L-Fine) or organize the adapters in a phylogenetically-informed hierarchy (F-Fine) so that similar languages share language-family- and genus-level adapters (Faisal and Anastasopoulos, 2022). See Appendix A.3 for details on the phylogenetic trees we used in our experiments. Footnote 5: That system ranked third in the Shared Task, but the top two systems were industry submissions that are not publicly available. Footnote 6: [https://aka.ms/deltalm](https://aka.ms/deltalm) We perform both L-Fine and F-Fine experiments using the publicly available code 7 and also share an additional baseline by finetuning the DeltaLM model without adapters. Details to reproduce our machine translation experiments, baselines, and results can be found in Appendix A.3. Footnote 7: [https://github.com/mahfuzibnalam/large-scale_MT_African_languages](https://github.com/mahfuzibnalam/large-scale_MT_African_languages) ### Train-Test Split We shuffle all stories and split them to achieve at least 1000 pages for test sets. All excess stories are kept for training and are used for fine-tuning. We ensure that even different sentences from the same story don't appear across the train test, i.e., all training stories are separate from test stories. This is done to get a more realistic estimate of translation quality on new stories. For languages with 1000 or fewer pages, we use 500-page test sets. ### Results: Machine Translation In Table 5, we show the performance of our L-Fine and F-Fine models compared to the baseline on our test set. We evaluate using three well \begin{table} \begin{tabular}{l c c} \hline \hline **Dataset** & **New languages** & **New pairs** \\ \hline Microsoft & 67 & 2835 \\ FLORES-200 & 51 & 1449 \\ OPUS & 82 & 2853 \\ \hline \hline \end{tabular} \end{table} Table 3: Additional languages and pairs (with test data) in our corpus compared to other benchmarks. known MT metrics: BLEU [11], CHRF++ [12], and spBLEU (NLLB Team et al., 2022). 
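As an illustration of how these scores can be computed, a minimal sacrebleu (v2.x assumed) sketch follows, where `hyps` and `refs` are lists of detokenised system outputs and references; the spBLEU tokeniser name depends on the installed sacrebleu version.

```python
import sacrebleu

hyps = ["the cat sat on the mat"]                        # system outputs
refs = ["the cat sat on the mat"]                        # one reference per hypothesis

bleu   = sacrebleu.metrics.BLEU()
chrfpp = sacrebleu.metrics.CHRF(word_order=2)            # chrF++ adds word n-grams
spbleu = sacrebleu.metrics.BLEU(tokenize="flores200")    # SPM-based spBLEU (newer releases)

for metric in (bleu, chrfpp, spbleu):
    print(metric.corpus_score(hyps, [refs]))
```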
For spBLEU, we use the FLORES200 SPM model to create subwords. In all three metrics, we see the same trend across different averages. Our L-Fine model outperforms the Baseline model by 4.0-11.5 spBLEU points by fine-tuning only the language-specific adapters on our training set. Our F-Fine model outperforms the L-Fine model by 5.0-7.5 spBLEu points by fine-tuning only some shared parameters among languages and language-specific adapters. We also test our models on the FLORES200 benchmark (Appendix B) and observe that our L-Fine model and F-Fine model under-perform the Baseline model except for \(\textbf{Avg}_{eng\to X}\) directions across the three evaluation metrics. This is likely due domain adaptation of L-Fine and F-Fine models to the story domain upon fine-tuning. Since the dataset consists of children's stories which are usually written in simpler language, it may also be a slightly easier domain than FLORES. Even then, there are low-resource language pairs that benefit from fine-tuning using adapters across domains. We report these language pairs and their respective spBLEU gains for the F-Fine model in Table 4. We get the highest gains for English-Xhosa (20.1 points) and English-Hausa (18.8 points) across domains, both of which had poor performance from the Baseline model with spBLEU of 3.5 and 4.5, respectively. Exhaustive results for other language pairs can be found in Appendix B. ## 4 Language (Mis)Identification Language identification affects low-resource language resource creation efforts severely [1, 13] because to collect data, we need accurate language identifiers that themselves need data to train, creating a vicious cycle. Low-quality language identification systems often make mispredictions which increases inter-class noise and reduces the crawled data's quality [15] both for the predicted language and the true language. To correct mispredictions and improve accuracy in supported languages with limited data and sustainably trained models, we propose a hierarchical modeling approach. Hierarchical modeling is an extremely popular choice for a wide variety of algorithmic tasks and it has been explored for language identification as well [1, 16, 17, 18]. However, previous work has focused on predicting language group/family first, followed by finer-grained predictions with a smaller set of classes. Our work departs from this paradigm in two ways - first, we bring focus onto expanding language identification coverage in pre-trained or off-the-shelf systems without retraining, and second, we predict a prior and posterior language based on confusion and misprediction patterns of the model directly (without predicting language family/group first). First, we choose a well-performing root model with high-coverage that provides us with the base/prior prediction. Such base predictions are obtained for a sample of data (ex. our benchmark training set), allowing us to identify systemic confusion patterns embedded within the model using a confusion matrix. Based on the identified misprediction patterns (which may or may not be between languages in the same family), we train lightweight confusion-resolution subunits that can be attached onto the root model to make the posterior prediction. Our results showcase that a sample of data can be used to investigate a pretrained, off-the-shelf or even blackbox commercial model, identify systemic misprediction patterns, and resolve them with hierarchical models. 
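A minimal sketch of how such confusion clusters can be pulled out of a confusion matrix is given below; `C` holds counts with rows indexing the true language and columns the root model's prediction, and the 0.7 threshold matches the one used in the Misidentification and Confusion Resolution subsection below.

```python
import numpy as np

def confusion_clusters(C, labels, threshold=0.7):
    """Group languages whose off-diagonal confusion ratio exceeds the threshold."""
    ratios = C / C.sum(axis=1, keepdims=True)        # hit ratio per (true, predicted) pair
    parent = {lab: lab for lab in labels}            # simple union-find over language labels

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for i, j in zip(*np.where(ratios > threshold)):
        if i != j:                                   # off-diagonal entries are misidentifications
            parent[find(labels[i])] = find(labels[j])

    clusters = {}
    for lab in labels:
        clusters.setdefault(find(lab), set()).add(lab)
    return [c for c in clusters.values() if len(c) > 1]
```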
### Experimental Settings To establish a root system for the hierarchical model architecture, we train our own Multinomial Naive Bayes model with training data from 355 classes (355 languages + _unknown_ class), using character-level \(n\)-gram features. We withhold 50 pages from randomly selected stories for each language to create the test set. As is common in extremely low-resource settings, 123 languages in our selection had less than 200 sentences. Therefore, \begin{table} \begin{tabular}{c c|c c} \hline **Pair** & \(\Delta\)**spBLEU** & **Pair** & \(\Delta\)**spBLEU** \\ \hline eng-xho & 20.1 & eng-hau & 18.8 \\ fra-lug & 3.6 & nso-lug & 3.0 \\ lug-kin & 2.9 & kin-lug & 2.4 \\ nya-lug & 2.1 & eng-kam & 1.8 \\ ibo-lug & 1.7 & eng-lug & 1.5 \\ zul-lug & 1.5 & fra-tso & 1.3 \\ xho-lug & 1.2 & fra-yor & 1.1 \\ nso-tso & 1.0 & amh-lug & 1.0 \\ \hline \end{tabular} \end{table} Table 4: Example language pairs with performance gains for the F-Fine model over the baseline one. we used synthetic minority oversampling (Chawla et al., 2002) and tested on 10 human-verified sentences per language. To condense the large number of features, improve inference speed, and keep the model size low, we use Incremental PCA (always preserving at least 90% variance of the original features). As recommended in Chawla et al. (2002), minority class upsampling is done after feature extraction. For the confusion-resolving classification units, we again train simple Multinomial Naive Bayes models with up to 1000 sentences per language, using character-level \(n\)-grams (2-4 grams) and word-level \(n\)-grams (1-2 grams) as features. We use Multinomial Naive Bayes models over other methods such as transformers to keep model complexity low, model sizes lean, and show that reasonable performance of low-resource languages is possible even with limited computation, space, and training data. Similarly, we rely on character and word-level \(n\)-grams since they can be universally computed, and do not share the disadvantages of low-coverage in pre-trained models like BERT (Devlin et al., 2019) that are not trained on sufficiently wide low-resource language data. ### Misidentification and Confusion Resolution To resolve high-confidence incorrect predictions in the multilingual root model, we inspect its confusion matrix (a representative example in Figure 2). For each test language, we divide the root model's predictions by the total number of tested examples, giving us a hit ratio for each pair. For example, (Gujarati, Kutchi) would represent the ratio of Kutchi sentences that were confused with Gujarati. We select the 9 clusters (given below) with a confusion ratio \(>0.7\) and train a hierarchical LIMIT model. 1. Gujarati, Kutchi, Bhilori 2. Amharic, Tigrinya, Silt'e 3. Koda, Bengali, Assamese 4. Mandarin, Yue Chinese 5. Konda, Telugu 6. Kodava, Kannada 7. Tsonga, Tswa 8. Dagaare, Mumuye 9. Bats, Georgian As shown in Figure 2, we can identify that Amharic (a supported language) is often misidentified as Tigrinya (another supported language). Another kind of misidentification is when the source language is not supported by the model, i.e. Silt'e being misidentified as Tigrinya. To resolve this, we train a small unit to distinguish between Amharic, Tigrinya, and Silt'e. When the root model predicts Amharic or Tigrinya, the example gets passed down to the unit for a more fine-tuned prediction. This increases the model's coverage and resolves confusion without needing to retrain the root model. 
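The routing step just described amounts to a prior prediction from the root model followed by an optional posterior prediction from a confusion-resolution unit; a minimal sketch is below, where `root_model` and `cluster_units` (a mapping from each supported language in a confused cluster to its trained unit) are assumed to exist.

```python
def hierarchical_predict(text, root_model, cluster_units):
    """Two-stage LID: a base (prior) prediction, refined by a confusion-resolution unit."""
    prior = root_model.predict(text)       # e.g. Franc's top guess
    unit = cluster_units.get(prior)        # unit trained on the confused cluster, if any
    if unit is None:
        return prior                       # language is not part of a confused cluster
    return unit.predict(text)              # posterior prediction; may be a language the
                                           # root model does not support, e.g. Silt'e
```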
### Evaluation All models are evaluated on held-out test sets as per Section 3.1. To evaluate the confusion-resolution units, we report language-level scores as well as aggregates. For root model selection, we report macro \(F_{1}\) scores. Details to reproduce all experiments, models, and results can be found in Appendix A.2. ### Results: Language Identification at Scale In Table 6, we show macro-\(F_{1}\) scores for all 4 systems - Google's CLD3, Langid.py, Franc, and our baseline system, LIMIT. Scores are reported across all 355 languages in the test set to better compare model performances on large multi-class classification tasks with limited data. Our system, although trained with very limited data, on a simple Multinomial Naive Bayes classifier (with 2-5x the number of classes compared to the other models) still performs on par with CLD3 and langid.py. Franc, built using the Universal Declaration of Human Rights (UDHR) data, comes out to be the best model, cov \begin{table} \begin{tabular}{c|c|c|c c c c c} \hline \hline **Metric** & **Models** & **Avg\({}_{all}\)** & **Avg\({}_{African\toAfrican}\)** & **Avg\({}_{X\to eng}\)** & **Avg\({}_{eng\to X}\)** & **Avg\({}_{Y\to fra}\)** & **Avg\({}_{fa\to Y}\)** \\ \hline \multirow{3}{*}{spBLEU} & Baseline & 11.87 & 10.19 & 18.79 & 13.20 & 15.64 & 12.55 \\ & L-Fine & 19.52 & 18.21 & 30.38 & 17.46 & 21.93 & 17.86 \\ \cline{1-1} & F-Fine & **24.93** & **23.58** & **35.66** & **25.26** & **27.06** & **21.36** \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation results on our test set of 176 language directions. **Avg\({}_{all}\)** denotes the average result of 176 translation directions. **Avg\({}_{African\toAfrican}\)** denotes the average score of directions between African languages. **Avg\({}_{X\to eng}\)** denotes the average score for translating into English, and **Avg\({}_{eng\to X}\)** for out of English (similarly for French in the last two columns. ering \(30\)% of our languages (\(105/356\) languages). It is derived from guess-language8 which uses a mix of writing system detection and character-level trigrams. Our baseline model, LIMIT, trains to identify 250 additional languages that Franc doesn't support, but due to limited data coupled with a large number of languages, it places second. Hence, we use Franc as the root system for our confusion resolution and coverage expansion experiments. Footnote 8: [https://github.com/kent37/guess-language](https://github.com/kent37/guess-language) ### Results: Language Misidentification In Table 7, we report \(F_{1}\) scores for each of the 9 highly confused clusters by Franc. We observe that languages within each cluster share a single writing system and are phylogenetically related. Below, we analyze some highlights from Table 7. * Gujarati, Kutchi, and Bhilori are Western Indo-Aryan languages spoken primarily in Gujarat and written in the Gujarati script. Franc doesn't support low-resource languages like Kutchi and Bhilori and confuses them with Gujarati (Figure 2). Our confusion-resolution unit resolves these to produce competitive Kutchi and Bhilori \(F_{1}\) scores, with only minor drop for Gujarati. * Amharic, Tigrinya, and Silt'e are all Ethiopic languages that use the Ge'ez script. Franc supports language identification for Amharic and Tigrinya, while it doesn't support Silt'e. Our confusion-resolution unit improves Amharic's \(F_{1}\) score while introducing a new language Silt'e at a reasonable baseline \(F_{1}\) score, with minor drop in performance for Tigrinya. 
* Bengali and Assamese are Eastern Indo-Aryan languages, whereas Koda is an endangered Munda language. All three languages use the Bengali-Assamese script. With our confusion resolution unit, we improve performance on all three languages and succesfully introduce Assamese and Koda language identification. Our hierarchical, confusion-resolution approach improves \(F_{1}\) score from \(0.2\) to \(0.55\), a 175% increase in performance, while providing novel language identification for 13 new low-resource and endangered languages. ### Computational and Space Complexity Each trained model has two components - the classifier and a projection model, which projects test-time examples into the train-feature embedding space. The traditional approach to train a large multilingual model takes \(\sim\)500MB space with a \(\sim\)15MB \begin{table} \begin{tabular}{l c c c c} \hline \hline **Lang** & **CLD3** & **langid** & **Franc** & **LIMIT** \\ \hline Macro F1 & 0.11 & 0.09 & **0.18** & 0.11 \\ \hline \hline \end{tabular} \end{table} Table 6: Our baseline multilingual language identification model (LIMIT) places second when compared to the state-of-the-art (aggregated \(F_{1}\) score on our test set). Based on this macro \(F_{1}\) score, we choose Franc as our root multilingual language identification model. Figure 2: Subset of the multilingual root model’s (Franc) confusion matrix (6 languages). Using the confusion matrix, clusters of highly confused languages are identified and confusion-resolution units trained according to the tree shown on the right. The tree, for demonstration purposes, is a subset of the entire tree which has 9 confusion-resolution units NaiveBayes classifier and a \(\sim\)450MB projection model. In contrast, our lightweight confusion-resolution approach creates units with size 7-10KB (\(0.06\%\) of base model) and a projection model of <100MB (\(33.34\%\) of base model). All reported sizes are uncompressed sizes. The traditional large multilingual model with 365+ languages and only 1000 training examples per language take 7-8 hours to train on CPU. In contrast, the hierarchical LIMIT units take 1-2 minutes to train (\(0.4\%\) of base time). ## 5 Related Work Parallel DatasetsLanguage identification models tend to use popular training datasets like Vatanen et al. (2010) (UDHR data used by Franc), Blodgett et al. (2017) for social media (70 languages), King and Abney (2013) (web-crawl in 30 languages), FLORES (200 languages), etc. Another recently published dataset, BLOOM (Leong et al., 2022), leverages text and audio in children's stories from similar sources (African Storybooks, The Asia Foundation, Little Zebra Books etc.) to create benchmarks for image captioning and speech recognition. However, their data is monolingual, unaligned, and can not be used for machine translation. We leveraged the highly parallel nature of the collected storybooks (5x the number of stories in BLOOM) and created test sets and baselines for understudied translation directions. Machine TranslationAs a result of its ability to produce translations between multiple languages, multilingual neural machine translation (Dong et al., 2015; Johnson et al., 2017; Arivazhagan et al., 2019; Dabre et al., 2020; Philip et al., 2020; Lin et al., 2021) has become a popular architecture. Thousands of languages are spoken worldwide, so representing them with bilingual models would require thousands of models. Neither scalability nor adaptability makes this an ideal solution. 
Through various training methods (Aharoni et al., 2019; Wang et al., 2020), model structures (Wang et al., 2018; Gong et al., 2021; Zhang et al., 2021), and data augmentation (Tan et al., 2019; Pan et al., 2021) a variety of research has attempted to improve multilingual translation models. Adapter units were initially proposed for light-weight domain adaptation for MT (Vilar, 2018) and then also for extending a large pre-trained model to a downstream task (Houlsby et al., 2019). Bapna and Firat (2019) improved pre-trained multilingual machine translation models for domain adaptation using bilingual adapters. Language IdentificationText-based language identification is usually modelled as a classification task. Similar to our featurization approach, other popular language identification models utilize byte, character and word-level \(n\)-gram features, followed by some dimensionality reduction, and classifers such as SVMs (Ciobanu et al., 2018; Malmasi and Dras, 2015), Naive Bayes (King et al., 2014; Mathur et al., 2017), Neural Networks (Medvedeva et al., 2017; Criscuolo and Aluisio, 2017; Eldesouki et al., 2016), for their straightforward modeling and high performance. By increasing the number of classes/languages a classifier handles, accuracy tends to decrease (Jauhainen et al., 2017), a prob \begin{table} \begin{tabular}{l c c} \hline \hline **Language** & **Franc** & **Hier. LIMIT** \\ \hline Gujarati (guj) & **0.50** & 0.48 \\ Kutchi (kfr) & & **0.48** \\ Bhilori & & **0.43** \\ \hline Amharic (amh) & 0.21 & **0.47** \\ Tigrinya (tir) & **0.43** & 0.28 \\ Silt’e (stv) & & **0.48** \\ \hline Koda (cdz) & & **0.32** \\ Bengali (ben) & 0.48 & **0.52** \\ Assamese (asm) & & **0.25** \\ \hline Mandarin (zho) & & **0.36** \\ Yue (yue) & & **0.68** \\ \hline Konda (kfc) & & **0.66** \\ Telugu (tel) & 0.64 & **0.69** \\ \hline Kodava (kfa) & & **0.55** \\ Kannada (kan) & 0.71 & **0.77** \\ \hline Tsonga (tso) & **0.49** & 0.41 \\ Tswa (tsc) & & **0.30** \\ \hline Dagaare & & **0.84** \\ Mumuye (mzm) & & **0.86** \\ \hline Bats (bbl) & & **0.91** \\ Georgian (kat) & 0.67 & **0.89** \\ \hline aggregate & 0.20 & **0.55** \\ \hline \hline \end{tabular} \end{table} Table 7: Our Hierarchical LIMIT approach improves \(F_{1}\) LID over Franc in highly confused languages (over 70% confusion) across language families and with very limited data. Empty Franc entries indicate languages unsupported by Franc. lem we propose to tackle by leveraging a confusion-informed hierarchical approach. To distinguish between closely related languages, a lot of exciting research has been published at various editions of VarDial - The Workshop on NLP for Similar Languages, Varieties and Dialects (Aepli et al., 2022; Scherrer et al., 2022; Chakravarthi et al., 2021; Zampieri et al., 2020, 2014). But, even at the workshop, a large number of ongoing tasks and papers are restricted to European languages, with very little space in the agenda for Indian, African, or other Indigenous languages. 
Over the last 3 iterations of VarDial from 2019-2022, many new datasets and techniques to identify Romance languages such as Italian (Jauhiainen et al., 2022; Camposampiero et al., 2022; Zugarini et al., 2020) or Romanian (Jauhiainen et al., 2021; Zaharia et al., 2021; Ceolin and Zhang, 2020; Zaharia et al., 2020), Nordic languages (Maehlum et al., 2022; Haas and Derczynski, 2021), Uralic languages (Jauhiainen et al., 2020; Bernier-Colborne et al., 2021), German varieties (Miaela et al., 2021; Nigmatulina et al., 2020; Gaman and Ionescu, 2020; Siewert et al., 2020), and the Slavic language continuum (Popovic et al., 2020; Abdullah et al., 2020) were published. In contrast, we see a very small number of papers or tasks on Indian languages at the venue with 2 focusing on Indo-Aryan and 2 focusing on Dravidian languages (Nath et al., 2022; Bhatia et al., 2021; Jauhiainen et al., 2021; Chakravarthi et al., 2020), and no papers or tasks, to our knowledge, on African languages or varieties. Hierarchical ModelingHierarchical approaches have proved successful in solving a myriad of computational problems, and have proved useful in language identification previously. The widely used approach first predicts a preliminary language group/family that a given input may belong to, and then does another fine-tuned prediction from the smaller set of output classes contained within the language group/family (Goutte et al., 2014; Lui et al., 2014; Bestgen, 2017; Jauhiainen et al., 2019). In contrast, our work extends this commonly accepted hierarchical modeling architecture to account for mispredictions made by existing trained models, and does not predict a group/language family first, but rather directly learns confusion relationships between language pairs. Then, similar to Bestgen (2017); Goutte et al. (2014), we train smaller classifiers for a fine-tuned prediction, but in contrast, our classifiers distinguish between highly-confused languages (which may not be part of the same language group/family), and map a first-pass language prediction (not family/group) into another refined language prediction. ## 6 Conclusion We introduce Hier-LIMIT, a hierarchical, _confusion_-based approach to counter the misidentifications in pretrained language identification systems while increasing language coverage without retraining large multilingual models for text classification. We release a large, massively parallel children's stories dataset covering languages from diverse language families, writing systems, and reading levels. We utilize this parallel dataset to create new translation directions for vulnerable and low-resource languages. We train adapter-based networks fine-tuned on language and family/sub-family level information and demonstrate improvements in the children's story domain and cross-domain improvement for several languages (on the FLORES benchmark dataset). Our dataset also includes monolingual text extracted from multilingual stories to enable the creation of language identification tools for low-resource languages that don't have such tools available. We perform experiments demonstrating the performance of pretrained language identification models on these languages, highlight their high-confidence incorrect predictions, and offer a lightweight hierarchical solution. In the future, we hope to use this children's story data to investigate better architectures, feature selection, and training setups to further improve our baselines. 
Armed with high-quality language identification systems with wide coverage, we will also experiment with large-scale data mining efforts for these under-resourced languages. ## Limitations * While our hierarchical model approach is efficient, we were limited in the data that we trained the subunits with. All language identification training and testing data were obtained from the parallel children's story dataset. We believe that if more diverse training data can be collected in low-resource languages, the hierarchical subunits will be performant across domains since the identified confusion will also be domain-independent. * Our dataset covers over 350 languages and we build high-quality language identification models for these languages. However, we restrict ourselves to text-based language identification and translation. Out of the 7000 languages in the world, many are primarily spoken languages and do not have online or offline textual presence in the form of articles, textbooks, stories etc. Therefore understanding and studying speech is crucial and we plan on tackling speech-based language identification/recognition and machine translation in future work. * Language identification performance varies with domain, length of text, and language. We acknowledge that our system, like other state-of-the-art systems, is not perfect and may make classification errors due to such factors. We hope that readers will understand this risk well and its potential downstream effects before using our dataset, language identification, or machine translation results in their work. * There are many more off-the-shelf systems other than the ones we used such as HeLi-OTS (Jauhianen et al., 2022) and fastText (Joulin et al., 2016), methods to transform the feature space (Brown, 2014), and techniques to improve dataset precision for low-resource languages for better crawls (Caswell et al., 2020) that we hope to include in the future to produce a stronger benchmark for the community. ## Ethics Statement Data used, compiled, and preprocessed in this project is freely available online under Creative Commons licenses (CC BY 4.0). Stories from the African Storybooks Initiative (ASI) are openly licensed, can be used without asking for permission, and without paying any fees. We acknowledge the writers, authors, translators, illustrators of each of the books and the ASI team for creating such a valuable repository of parallel storybooks in African languages. Stories from the Pratham Storybooks' Storyweaver portal are available under open licensing as well, and we preserve metadata for the author, illustrator, translator (where applicable), publisher, copyright information, and donor/funder for each book, in accordance with Storyweaver's guidelines. Since stories hosted on African Storybooks Initiative and Pratham Books' Storyweaver are intended for children and most of them are vetted or human-verified we do not explicitly check for offensive content. Our language identification models, by design, are meant to provide an alternative to training resource-hungry large-scale multilingual models that require a lot of training data. Such models are inaccessible to many researchers since they require access to specialized computing hardware. Our models are built with sustainability and equity in mind, and can be trained in a matter of minutes on CPU on standard laptops. 
## Acknowledgments This work was generously supported by the National Endowment for the Humanities under award PR-276810-21 and by the National Science Foundation under award FAI-2040926. Computational resources for experiments were provided by the Office of Research Computing at George Mason University (URL: [https://orc.gmu.edu](https://orc.gmu.edu)) and funded in part by grants from the National Science Foundation (Awards Number 1625039 and 2018631).
2301.06390
Metrics for Software Process Simulation Modeling
Background: Software Process Simulation (SPS) has become an effective tool for software process management and improvement. However, its adoption in industry is less than what the research community expected due to the burden of measurement cost and the high demand for domain knowledge. The difficulty of extracting appropriate metrics with real data from process enactment is one of the great challenges. Objective: We aim to provide evidence-based support of the process metrics for software process (simulation) modeling. Method: A systematic literature review was performed by extending our previous review series to draw a comprehensive understanding of the metrics for process modeling following a meta-model of ontology of metrics in SPS. Results: We identified 145 process modeling studies that collectively involve 2130 metrics and classified them using the coding technique. Two diagrams which illustrate the high frequency causal relationships used between metrics are proposed in terms of two hierarchical levels of modeling purposes. We revisited the data issues encountered in SPS data preparing phases, as well as identified the corresponding strategies. Conclusion: The results of this study provide process modelers with an evidence-based reference of the identification and the use of metrics in SPS modeling, and further contribute to the development of the body of knowledge on software metrics in the context of process modeling. Furthermore, this study is not limited to process simulation but can be extended to software process modeling, in general. Taking simulation metrics as standards and references can further motivate and guide software developers to improve the collection, governance, and application of process data in practice.
Bohan Liu, He Zhang, Liming Dong, Zhiqi Wang, Shanshan Li
2023-01-16T12:15:12Z
http://arxiv.org/abs/2301.06390v1
# Metrics for Software Process Simulation Modeling

###### Abstract

_Background_: Software Process Simulation (SPS) has become an effective tool for software process management and improvement. However, its adoption in industry is less than what the research community expected due to the burden of measurement cost and the high demand for domain knowledge. The difficulty of extracting appropriate metrics with real data from process enactment is one of the great challenges. _Objective_: We aim to provide evidence-based support of the process metrics for software process (simulation) modeling. _Method_: A systematic literature review was performed by extending our previous review series to draw a comprehensive understanding of the metrics for process modeling following a meta-model of ontology of metrics in SPS. _Results_: We identified 145 process modeling studies that collectively involve 2130 metrics and classified them using the coding technique. Two diagrams that illustrate the high-frequency causal relationships between metrics are proposed in terms of two hierarchical levels of modeling purposes. The specific metrics of different paradigms are compared, and the main difference is that Discrete-Event Simulation (DES) and Agent-Based Simulation (ABS) can provide more detailed simulations from the perspective of development activities and individual developers whilst System Dynamics (SD) tends to use the mean value as an alternative. We revisited the data issues encountered in SPS data preparation phases, as well as identified the corresponding strategies. _Conclusion_: The results of this study provide process modelers with an evidence-based reference for the identification and use of metrics in SPS modeling, and further contribute to the development of the body of knowledge on software metrics in the context of process modeling. Furthermore, this study is not limited to process simulation but can be extended to software process modeling in general. Taking simulation metrics as standards and references can further motivate and guide software developers to improve the collection, governance, and application of process data in practice.

software metric, software process model, process simulation, systematic literature review

## 1 Introduction

Software process models are built to gain insights into software processes so that we can predict, modify or control them [1]. A software process model can be either a static (descriptive) model or a dynamic (simulation) model whose behavior changes over time. Simulation requires more information and knowledge, but it can simulate the real world in more detail. It has been widely claimed and accepted that SPS is an effective tool in support of software process management and improvement. Since Abdel-Hamid and Madnick [2] introduced Software Process Simulation (SPS) to Software Engineering (SE) in the 1980s, a large number of studies have been published in the community, including quite a few industrial cases. Ahmed et al. [3] conducted a survey to investigate the state of practice of simulation in SPS; nearly half of the respondents (8/17) were from industry. Furthermore, researchers have applied the SPS technique in combination with the Capability Maturity Model Integration (CMMI) for process optimization [4, 5, 6], and SPS was recognized as the key to achieving levels 4 and 5 [7]. In addition to CMMI, Mishra et al. [8] built a system dynamics model to understand the global software development of the Indian software industry.
The system dynamics technique has also been applied to choose the best gate timing strategy in new product development projects [Van 17]¹.

Footnote 1: We use a distinct citation format to distinguish the reviewed studies from other references.

Zhang et al. [9] highlight the benefits of SPS for various purposes such as prediction, process investigation, technology evaluation, and risk management. To achieve these modeling purposes, an SPS model may involve a number of (sometimes even hundreds of) metrics. The identification of the metrics and their relationships needed in a specific process model is a challenging task, particularly for novice modelers, and the collection of quality data on these metrics is even more effort-consuming. The panel of domain experts in SPS indicates that a prerequisite for building SPS models in industry is that companies are able to analyze their information needs [10]. Identifying the appropriate metrics and gathering the corresponding data accounts for most of the cost. The analysis of information, as well as the identification of metrics, requires considerable knowledge and skills. Hence, the software process community encourages the development of a knowledge base and a model library with a common set of process metrics as the key component to unleash reusability [10]. A number of SPS studies have indicated problems with metrics; some of the evidence is presented as follows:

_"For some of the relevant data it is hardly possible to determine the necessary information in real-life projects... we are elaborating approaches to take such human attributes into account, which are not directly observable, and to consider them in the quantitative logic of the model."_ [22].

_"Obtaining the quantitative data is another difficulty regardless of developing the simulation model."_ [15]

The challenge of modeling metrics is twofold: 1) identifying key metrics from the real process based on the domain knowledge and the data available; 2) collecting and mining the required data for measurement. Knowing what metrics were used in SPS modeling is a prerequisite for studying these two challenges. To the best of our knowledge, no secondary study that investigates metrics is dedicated to SPS modeling yet, although many papers and books [12, 13, 14, 15, 16, 17] have been published on the topic of software metrics over the past decades. This motivated us to create an evidence-based view of the metrics adopted in SPS models to contribute to the development of the body of knowledge on software metrics in the context of process modeling. Therefore, the objective of this study and its follow-ups is to relieve the high burden and cost of SPS modeling by systematically identifying the modeling metrics of exemplar process models and the experiences extracted from the relevant literature available. Although this research takes the metrics in SPS modeling as the research object, it is also applicable to general software process modeling as well as general software process measurement. Building a simulation model usually proceeds in several steps, and a static model is often an essential intermediate product on the way to an executable simulation model. From an evaluation point of view, we require that the descriptive model is semantically correct, while we need to assess the appropriateness and fidelity of the simulation model, as its output is a distribution of values for a specific project [18, 19].
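The point that a simulation's output is a distribution rather than a single point estimate can be illustrated with a minimal toy sketch. Python is assumed here, and the process structure and all parameter values are invented for illustration; this is not taken from any reviewed model.

```python
import random
import statistics

def simulate_project(n_tasks=50, mean_task_effort=8.0, defect_rate=0.2, seed=None):
    """One stochastic run of a toy process: total effort (person-hours)
    with randomly varying task effort and rework triggered by injected defects."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_tasks):
        effort = rng.gauss(mean_task_effort, 2.0)   # task effort varies between tasks
        if rng.random() < defect_rate:              # a defect is injected -> extra rework
            effort += rng.gauss(3.0, 1.0)
        total += max(effort, 0.0)
    return total

# Repeating the run yields a distribution of outcomes, not a single estimate,
# which is what a fidelity assessment of a simulation model has to judge.
runs = [simulate_project(seed=i) for i in range(1000)]
print(f"mean={statistics.mean(runs):.1f} person-hours, stdev={statistics.stdev(runs):.1f}")
```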
From a modeling perspective, building a descriptive model means abstracting, collecting and transforming fragments of the real world based on the incomplete knowledge we have gained so far [20]. Simulation models require a detailed understanding of the processes they simulate, as well as reliable data for their initial construction. For example, a set of variables needs to be specified that represents a continuously differentiable function of time [22, 23]. Hence, SPS modeling is associated with a higher standard of metrics than static process modeling. The metrics used in SPS models also apply to static models in most cases. Moreover, modeling makes it possible to examine which metrics are needed to describe the real world. To achieve this, we developed a meta-model of the ontology of metrics for modeling as the guidance of the research. Following the meta-model, we conducted a Systematic Literature Review (SLR) to aggregate, classify, and synthesize the metrics used in SPS models. As a result, 145 studies that report SPS models were identified from the pool of SPS-related papers up to 2021. From the included studies, we extracted 2130 metrics. Although software metrics classification schemes have been proposed in the software measurement area [21, 22], they are not well adapted to SPS modeling since the foci of SPS and software measurement research are different. Under the guidance of the meta-model, we investigated metrics and their directly related elements through four research questions. We developed a new classification framework based on the extracted metrics, referring to the high-level categories (entities and attributes) suggested by the study [22] (RQ1: metrics). Causal relationships between metrics in SPS models were discussed (RQ2: causal relationships between metrics). We studied the considerations of metrics in terms of the modeling purposes and paradigms of SPS models: causal relationship diagrams that illustrate the similarities and differences between models at the cognitive level and models at the tactical & strategic levels are proposed, and differences in the metrics used in different paradigms are discussed (RQ3: selection of metrics). We provide a mapping of the relationships between data issues for measurement and solution strategies (RQ4: data for metrics). This study contributes to both the research community and practitioners in industry.

* It addresses the first challenge (identification of key metrics) in modeling; meanwhile, it provides researchers the necessary foundation for conducting research to solve the second challenge (data acquisition for measurement).
* This work serves as a knowledge base, enabling practitioners and researchers to gain a comprehensive understanding of the metrics and their relationships involved in SPS modeling.
* The classification framework and the considerations of metrics from modeling purposes, paradigms, and data issues provide a reference for practitioners to reuse existing knowledge; at the same time, it provides a clue to which metrics and causal relationships researchers need to focus on.
* The second challenge is discussed based on evidence, as a set of data issues, coping strategies, and available data sources for hard-to-get metrics are identified.

Note that we use a distinct citation format (author's surname & year, e.g., [14]) to distinguish the reviewed studies from other references. The complete list of references of the included studies is shown in the APPENDIX.
## 2 Related Work

This section introduces process simulation and the software metrics that have been studied for decades in the SE community.

### _Software Process Simulation_

Kellner et al. [1] offered an overview of the SPS area to answer three fundamental questions, i.e. _why_, _what_, and _how_. They identified the reasons for conducting an SPS study, defined the scope of SPS models, and discussed the relationships among purposes, scope, and metrics. They also provided a framework to support decisions about simulation approaches, techniques, and required metrics. Zhang et al. [23, 24, 25] conducted a series of SLRs on SPS modeling. They identified ten purposes for SPS research that are classified into three levels and two dimensions of model scope [23]. Several modeling paradigms and simulation tools are summarized in [23, 25]. They indicated five trends of process modeling based on the findings [24]. The impact of SPS research on practice was also reported in another study [26] with an impact roadmap that traces the successful SPS industrial application cases to their origins. The impact of SPS has gradually expanded and it has become more and more mature in the last decade. The adoption of SPS in SE education is studied in [27], which confirms that education is an important application area of SPS with continuous research interest and shows that the SPS game appears more attractive to educators than the other forms of SPS. Integration strategies and recommendations for constructing hybrid SPS models have been developed as software processes become more and more complicated [28]. Verification and Validation (V&V) play a critical role in securing the quality of SPS models; a mapping of quality aspects for V&V and the possible V&V methods is presented in [29]. However, there are still debates over the impact and usefulness of SPS research. Franca et al. [30] present a quasi-systematic review of 108 studies to investigate the reliability of SPS studies in SE. As a result, they identified a few problems and indicated that SPS studies lack the necessary information for replication. Ali et al. [31] aggregate all the points of view on the usefulness of SPS but find that no conclusion on these conflicting claims can be made based on the secondary studies. They conducted an SLR and evaluated 87 SPS studies. The results show that there is still a lack of conclusive evidence, as few studies report the cost of developing an SPS model. Pfahl [32] also argues that there is still a lack of evidence that SPS has become an accepted and regularly used tool for software project managers, and that its high cost is the main reason. On the other hand, the panel of domain experts on SPS collectively offered a different perspective: the impact of SPS on practice cannot be ignored compared to many other SE technologies, although its high cost is still a major barrier against its wide application; they indicated that the consequences of waiving simulation should be considered, whilst simulation is regarded as a cost saver rather than as a cost driver in other engineering disciplines [10].

### _Software (Process) Metrics_

Software metrics play a crucial role in quantitative software engineering and have been researched for many years from different perspectives. We discuss the previous secondary studies of software metrics and present an overview in Table I. Gomez et al. [11] performed a systematic literature review whose objective is to answer the questions of how, when and what to measure.
They adopted the classification of concepts defined in the Software Measurement Ontology proposed by [33], which aims to contribute to the harmonization of the different software measurement proposals and standards, providing a coherent set of common concepts used in software measurement. Bellini et al. [12] conducted an SLR to investigate five key conceptual and methodological issues, i.e. how to apply measurement theory to software, how to frame software metrics, how to develop metrics, how to collect core measures and how to analyze measures. They adopted Fenton's classification framework [22]. They finally provide methods for collecting and analyzing the measurements and suggest that attention is increasingly being paid to multidimensional metrics. Kitchenham et al. [13] developed a preliminary mapping of software metrics studies that focuses on identifying influential studies on software metrics from 2000 to 2005. They conclude that empirical studies are of major importance to the software metrics research community. Although software metrics have been studied by a considerable number of researchers in various dimensions, the empirical methodology adopted needs to be refined. Abilio et al. [14] presented an SLR to identify metrics associated with software maintainability proposed for feature-oriented and aspect-oriented technologies, based on 11 primary papers. The metrics are classified according to the software attribute they measure, which ranges from architectural, parallel development, and debugging to quality attributes. Kupiainen et al. [15] undertook an SLR on using metrics in industrial Lean and Agile software development. The authors classify the metrics based on Fenton's classification framework [22]. They identify the degree of influence and popularity of metrics in agile development and present a mapping between metrics and agile principles. Nunez-Varela et al. [16] conducted a mapping study to investigate the trend of research on code metrics. They classified code metrics into four categories, including object-oriented programming, aspect-oriented programming, feature-oriented programming, and procedural programming. Meidan et al. [17] conducted a mapping study to create a classification scheme of studies on the measurement of the software development process in terms of source type, publication year, research type, contribution type, proposal type, validation type, entity/abstraction and study context. They identified 13 process attributes, 3 developer attributes, 4 project attributes, and 2 organization attributes. These studies have different research scopes from ours: we focus more on process metrics, which can help SPS researchers comprehensively understand the use of metrics in software process (simulation) modeling.

## 3 Ontology

To systematically study metrics for SPS modeling, we proposed an ontology to illustrate the relationship between metrics and simulation modeling, as well as their related concepts. Our research revolves around this ontology. Olsina and Martin [34] proposed an ontology for software metrics and indicators based on different software-related ISO standards and research articles. To adapt it to SPS, we proposed an adjusted meta-model of the ontology, referring to the metrics section of their ontology, and introduced the concept of SPS modeling, as presented in Fig. 1. The descriptions of the concepts and relationships presented in the meta-model are shown in Tables II and III respectively.
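As a reading aid only, the core concepts of the meta-model (entity, attribute, metric with unit and scale type, and the causal relationships between metrics) could be rendered roughly as the following data structures. This is an illustrative sketch assuming Python; it is not part of the ontology itself, and the example instances at the bottom are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Entity:
    """Object characterised by measuring its attributes (e.g. Product, Process, Resource)."""
    name: str
    description: str = ""

@dataclass
class Attribute:
    """A measurable property of an entity; may be composed of sub-attributes."""
    name: str
    entity: Entity
    sub_attributes: List["Attribute"] = field(default_factory=list)

@dataclass
class Metric:
    """Quantifies an attribute; called a variable or parameter in an SPS model."""
    name: str
    attribute: Attribute
    unit: Optional[str] = None          # e.g. "LOC", "person-hours"
    scale_type: str = "ratio"           # nominal / ordinal / interval / ratio / absolute

@dataclass
class CausalRelationship:
    """'Relates to': a predecessor metric affects a successor metric."""
    source: Metric
    target: Metric

# Hypothetical instances
product = Entity("Product")
size = Attribute("artefact size", product)
loc = Metric("size of code", size, unit="LOC", scale_type="absolute")
```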
All models are simplified representations (abstractions) of the real world, and a conceptual model is the abstraction of a simulation model. All simulation modeling involves conceptual modeling [35]. Conceptual modeling specifies a model that represents those parts of the problem domain that are included in the simulation model. According to the model development process suggested by Kitchenham et al. [19], domain analysis and the definition of model requirements are needed before specifying a model. Domain analysis confirms the information needs in the real world. Not all information needs should be included in the model requirements, since they should be valid, credible, feasible, and useful [36]; in other words, they must be calculable concepts. A simulation model represents the conceptual model in specific computer code, which means that simulation modeling quantifies the inputs and outputs of the conceptual model. From the perspective of software metrics, it is to measure attributes of entities and quantify the causal relationships between attributes.

## 4 Research Method

This section describes the SLR process on metrics in SPS modeling, which followed the SLR guidelines [38]. Four researchers and their supervisor were involved in this study. Strictly speaking, this study is not an SLR; we use the SLR method to study metrics, which are usually not the research focus of the primary studies we included.

### _Research Questions_

This study focuses on metrics and the elements shown in the meta-model (Fig. 1) that are related to metrics. RQ1 studies metrics and their corresponding attributes from different perspectives. RQ2 studies the causal relationships between metrics presented in SPS models. RQ3 studies the purposes and paradigms of SPS models, which affect the selection of metrics. RQ4 studies issues and solution strategies of data for measurement. As an instantiated executable model of the conceptual model, the simulation model can cover the relevant information of the conceptual model. Hence, no RQs are specifically addressed to the conceptual model, information need, and calculable concept elements of the meta-model. To achieve the research objective, four research questions are defined to drive this study following the meta-model (as follows).

**RQ1: _What metrics have been used and studied in the SPS studies?_**

RQ1 aims to discuss what to measure in depth, as well as build a classification framework of software metrics used and studied in the SPS models. As shown in the meta-model, a metric quantifies an attribute that is associated with an entity. The result is a multi-level classification of metrics that groups them based on the attribute they measure and the entity corresponding to that attribute. In addition, the unit and scale type of each metric are presented to show more specifically how to measure.

**RQ2: _What are the causal relationships between metrics used in different SPS studies?_**

RQ2 helps to identify the common causal relationships among software metrics used in existing SPS models. The simulation model consists of causal relationships among metrics. The causal relationships between the metrics are the basis for understanding the structure of SPS models.

**RQ3: _What are the considerations for selecting metrics in terms of different modeling purposes?_**

It is almost impossible to simulate all the factors and details in one model.
Modelers select the most relevant metrics and ignore others as they concentrate on different purposes. Besides, modelers adopt different modeling paradigms for different modeling granularities, which affects the measurement of metrics. The goal of RQ3 is to investigate the considerations of metrics in terms of modeling purposes and paradigms, which are the two main properties of different simulation models as indicated in the meta-model.

Fig. 1: Meta-model of the ontology for software metrics in software process simulation modeling

TABLE I: An overview of the comparison between this work and previous studies on software metrics

| Studies | Year | Metrics' Classification | Research Focus |
| --- | --- | --- | --- |
| Gómez et al. [11] | 2006 | Software Measurement Ontology [33] | What, when, and how to measure |
| Bellini et al. [12] | 2008 | Fenton's categorization [22] | Measurement theory; alternative methods to collect and analyze core measures |
| Kitchenham et al. [13] | 2010 | OO metrics, web-metrics, and other code metrics | Identify trends in influential software metrics studies |
| Abilio et al. [14] | 2012 | Software attribute they measure in the paper | Software maintainability metrics |
| Kupiainen et al. [15] | 2015 | Fenton's categorization [22] | Using metrics in Agile software development |
| Nuñez-Varela et al. [16] | 2017 | Object-oriented programming, aspect-oriented programming, feature-oriented programming, and procedural programming | The trend of source code metrics |
| Meidan et al. [17] | 2018 | Process, developer, project, and organization | Understand the measurement of the software development process |
| This work | 2022 | A detailed framework based on Fenton et al.'s categorization [22] | Specific to metrics in software process modeling, including classification, causal relationships between metrics, selection of metrics in modeling, and data issues for measurement |

**RQ4: _What are the issues and strategies for obtaining data for metrics?_**

The lack of data remains a problem in SPS modeling ever since Raffo and Kellner [39] analyzed different situations and solutions. RQ4 aims to revisit the status quo of data issues based on evidence, as well as investigate existing solution strategies and the specific attributes they can measure.

### _Selection Criteria_

The inclusion and exclusion criteria for the relevant studies are shown in Table IV. The included papers should be published in English, and we retrieved studies published up to 2021. Our research included the studies that applied simulation modeling paradigms for software process research, software education, and software practice. Since the metric is our review focus, the selected studies should clearly claim the metrics used in their model. Regular papers can give us more comprehensive information to help us collect significant evidence, so we did not select studies of no more than 5 pages. Studies should be published as journal articles or conference papers rather than the other forms presented in C5 and C6. We extracted data from primary studies that could provide first-hand metrics used in SPS research rather than from relevant secondary studies. Furthermore, evidence from primary studies could help us to summarize the relationships among different metrics in different SPS research. In addition, no secondary study was found that investigated the metrics used in SPS.
To be specific, although Pfahl [40] presents an example set of metrics used in an SD model which aims to support the analysis of the effectiveness of key SPI in the automotive industry (it satisfies C1, C2, C3), the paper is excluded because it is a short paper (it meets C4). In another example, Zhang et al. [41] mapped four typical simulation paradigms to the appropriate maturity levels of CMMI to adopt them; however, no specific model is presented in the paper (it does not satisfy C2, C3).

TABLE IV: Selection criteria

**Inclusion criteria**

* C1. Published before 2021 and written in English.
* C2. Primary studies on employing simulation modeling paradigms for software process research, education and practice.
* C3. Primary studies that claimed the metrics used in the simulation model.

**Exclusion criteria**

* C4. Short papers (no more than 5 pages).
* C5. In the forms of editorial, abstract, keynote, poster, and book.
* C6. Opinion pieces, comments, corrections, notes, slides alone or position papers.
* C7. Secondary studies summarizing the outcomes of the existing research work, e.g. road-map, review, survey, etc.

TABLE II: Software metrics ontology: glossary of concepts

| Concept | Definition | Attributes |
| --- | --- | --- |
| Entity | Object that is to be characterised by measuring its attributes [37]. | Name: _name of an entity_. Description: _an unambiguous description of the entity meaning_. |
| Attribute | A measurable physical or abstract property of an entity. | Definition: _an unambiguous description of the attribute meaning [37]_. |
| Metric | The number or symbol assigned to an entity by a mapping in order to characterize an attribute [22]. Variable and parameter are synonyms of metric in an SPS model. | Name: _name of a metric_. Unit: _particular quantity defined and adopted by convention, with which other quantities of the same kind are compared in order to express their magnitude relative to that quantity_. Scale type: _the type of scale depends on the nature of the relationship between values of the scale; five types of scales are commonly defined: nominal, ordinal (restricted or unrestricted), interval, ratio, and absolute_. |
| Data | The data that need to be collected in the real world for measurement. | Source: _the source where the data can be obtained_. |
| Conceptual Model | A non-software specific description of the computer simulation model (that will be, is or has been developed), describing the objectives, inputs, outputs, content, assumptions and simplifications of the model [36]. | |
| Information Need | Insight necessary for the specific modeling purpose [37]. | Description: _an unambiguous textual statement describing the information needs_. |
| Calculable Concept | Abstract relationship between attributes of entities and information needs [37]. | Name: _name of a calculable concept_. |
| Simulation Model | A simulation model is a computerized model that represents some dynamic system or phenomenon [1]. | Name: _name of a simulation model_. |
TABLE III: Software metrics ontology: relationship description

| Relationship | Description |
| --- | --- |
| Sub-Attribute | An attribute may be composed of none or several sub-attributes, which are in turn attributes. |
| Associated with | One or more measurable attributes are associated with one or more entities. |
| Quantifies | One or more metrics can quantify an attribute. |
| Requires | Real-world data are required for measuring a metric. |
| Combines | A calculable concept combines (associates) one or more measurable attributes. |
| Consists of | A simulation model consists of a number of metrics. |
| Instance of | A simulation model is an instance of a conceptual model. |
| Relates to | One metric relates to another metric. |

### _Search & Selection Process_

Fig. 2 shows the search process of this study, which consists of five stages. Stages I and II were completed in our initial review [28]. In stage I, the manual search and the automated search were performed by two research students. The venues for manual search, which include five conferences and six journals, are listed in Gao et al.'s work [28]. The search string is shown in Fig. 2. It was further coded into equivalent forms to match the search syntax of different digital libraries. Four digital libraries (IEEE Xplore, ACM Digital Library, ScienceDirect, SpringerLink) were searched to retrieve as many SPS studies as possible. In stage II, forward snowballing was applied as a supplement [28]. As a result, a total of 331 candidate studies were identified by scanning the title, keywords, and abstract. The only difference in the selection criteria between this study and the study by Gao et al. [28] is C3, which identifies the studies reporting modeling metrics (instead of the hybrid simulation modeling in the previous study [28]) from all relevant SPS studies. Consequently, in stage III, the 331 candidate studies were checked against the selection criteria by further reading the introduction, conclusion, and even full text iteratively until final consensus was reached. Candidate papers were assigned to four research students, and each paper was assigned to at least two students to enable parallel review. All inconsistencies and disagreements were thoroughly discussed and resolved during weekly meetings with supervisors. In this study, only the latest versions of primary studies were selected for review if different versions were published based on the same model. In stage IV, we replicated the automated search using the same search string and extended the search scope to 2021. We identified 19 papers that met the selection criteria published between 2016 and 2021. In stage V, we conducted forward snowballing using Google Scholar. The seminal set for the forward snowballing includes "Abdel91" [2], "Kellner99" [1], and "Zhang10" [25], which were cited by most relevant studies from the early stages of the review. We identified 5 studies after deduplication.

### _Data Extraction_

To answer the research questions, we defined a data extraction scheme to collect important information from the reviewed studies. Each research question is answered by at least one extraction item.
As shown in Table V, the data extraction scheme includes the citation information (e.g., title, year and modeling paradigms) and the information specific to the research questions. Software metrics (names, units and original descriptions) are extracted for answering RQ1-4. Causal relationships between (two) metrics and modeling purposes are identified for RQ2 and RQ3, respectively. Data sources, issues, and solutions are identified for RQ4. We started with a pilot extraction, in which 35 randomly picked papers were allocated to all researchers. We noticed that some of the papers clearly introduced the metrics used for their SPS modeling or presented the SPS model. For these cases, we could easily identify the software metrics and their causal relationships. Some papers did not present software metrics explicitly; the metrics and causal relationships might be identified from their context by iteratively reading the full text. As a result, we extracted 2130 metrics and identified 183 types of causal relationships from the 145 identified papers. These metrics, which are also called variables, comprise the SPS models.

TABLE V: Data extraction scheme

| Item | Description | RQ(s) |
| --- | --- | --- |
| Title | The title of the study. | Info |
| Year | The published year of the study. | Info |
| Paradigms | System Dynamics; Discrete-Event Simulation; Agent-Based Simulation; Hybrid simulation, etc. | Info |
| Software metrics | The metrics involved in the SPS studies, including their names, units, and original descriptions. | RQ1-4 |
| Causal relationships between metrics | The predecessor metric that outputs to or affects the current metric, and the successor metric that is input from or affected by the current metric. | RQ2 |
| Modeling purposes | The reasons for building SPS models. | RQ3 |
| Data issues | The issues of selecting metrics due to the lack of data. | RQ4 |
| Solutions of data issues | The strategies that solve the data issues. | RQ4 |
| Data source | The source of data related to solutions for measuring attributes; one study may use multiple data sources. | RQ4 |

Fig. 2: Thorough literature search process

### _Data Synthesis & Classification_

We applied the thematic synthesis method [42] to construct our findings in a systematic manner. As the metric descriptions vary significantly between process modelers, to answer RQ1 we synthesized the list of the extracted software metrics using the coding technique [43]. It is an iterative process to develop a consistent set of codes from the diverse descriptions in the reviewed studies. Table VI shows three examples of the evolution of codes. In the first iteration, two researchers coded all the metrics at a detailed level based on their descriptions independently; we then discussed the differences between the two sets of codes and finally reached agreement. In the second iteration, we identified similar codes in the Code-I1 set developed in the first iteration and replaced them with the same code; for example, we coded 'issue', 'error', 'fault', 'flaw', 'bug' as 'defect' and coded 'fix', 'fixed', 'correct', 'correction' as 'fixing'. For Example 1 (E1), 'residual' is replaced by 'remaining' from Code-I1 to Code-I2. Some codes have no synonyms that could be replaced; then Code-I2 is the same as Code-I1, e.g., E2. We gradually made the codes more abstract in the third and fourth iterations through discussion.
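A minimal sketch may make the second-iteration synonym merging concrete. Python is assumed, and the synonym table below is abridged from the examples just given; it is not the authors' actual code book or tooling.

```python
import re

# Abridged synonym table: surface terms mapped to a single canonical code.
SYNONYMS = {
    "defect": {"issue", "error", "fault", "flaw", "bug"},
    "fixing": {"fix", "fixed", "correct", "correction"},
    "remaining": {"residual"},
}
CANONICAL = {term: code for code, terms in SYNONYMS.items() for term in terms}

def code_metric(description: str) -> list:
    """Tokenize a metric description and merge synonyms into canonical codes."""
    tokens = re.findall(r"[a-z]+", description.lower())
    return sorted({CANONICAL.get(tok, tok) for tok in tokens})

print(code_metric("Residual defect density"))  # -> ['defect', 'density', 'remaining']
```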
Some codes did not change from Code-I2 to Code-I3, as there were few similar metrics to replace, e.g., E3. We developed Code-I4 based on Code-I3 and partially referred to Fenton et al.'s software metric taxonomy [22], which has been widely accepted in the community. As a result, all metrics can be classified into 29 different categories according to Code-I4, which can be grouped by product internal/external, process internal/external and resource internal/external as suggested by the study [22]. The categories were classified into sub-categories according to Code-I3. In the process of coding-based classification of metrics, metrics that measure time, effort, cost, etc. can be easily identified and classified. But we also encountered some thorny problems, which are not covered in Fenton et al.'s work [22]. There are a variety of factors and multipliers that affect other diverse variables in the model. We grouped the factors into two major categories, i.e. Process factor and Manpower factor, based on the variables that are affected by them. For example, various policies were modeled in existing SPS models; we classified _pair programming policy_, a boolean variable that determines whether to apply pair programming, into the Process factor, and classified _staffing policy_, which determines the number of person-months assigned, into the Manpower factor. In Fenton et al.'s framework [22], _defect_ is the only category related to defects, and they suggested that _defect_ should be a Process attribute. In our opinion, although defects are generated and fixed in the process, the defects that exist at a certain point in time exist objectively as part of the product. Therefore, we distinguish _defect activity_ from _defect_ and classify _defect_ into the Product category as this is more reasonable. We classified metrics such as _defect fixing rate_ into _defect activity_, which belongs to Process, and classified metrics such as _# fixed defects_ into _defect_. Similarly, we perform the classification based on the results of coding (i.e. the abstract meaning of metrics) on the one hand, and on the other hand based on the context of the metrics. We refer to existing classifications, but we are not bound by them. The content (frequency) analysis was performed throughout the study to analyze the nominal data across groups of variables. To identify the most common causal relationships between metrics, the percentage of the occurrence of causal relationships was counted when answering RQ2. For RQ3, we counted the frequencies of different categories of metrics used for the two hierarchical levels of modeling purposes as well as for different paradigms, to locate the evidence that leads to the differences so that we could conduct an in-depth analysis.

## 5 Results

This section presents the distribution of included studies from three aspects, i.e. publication years, paradigms used in studies and modeling purposes.

### _Years_

The review identified 145 papers after the five stages (as presented in Fig. 2). The selected papers for review were published from 1997 to 2021. The complete list of references of the included studies is shown in the APPENDIX.

### _Paradigms_

We classified the modeling paradigms applied in the SPS studies as shown in Fig. 4. System Dynamics (SD, 43%) is the most popular modeling paradigm in SPS. The rest include Discrete-Event Simulation (DES, 19%), Hybrid Simulation (Hybrid, 12%) and Agent-Based Simulation (ABS, 8%).
A hybrid model may adopt two or more modeling paradigms together. Other studies use a variety of modeling paradigms that simulate software processes at different abstraction levels distinct from the above, e.g., Qualitative Simulation (QSIM), Parametric Estimating (PE), etc.

TABLE VI: Example of coding

| Metric | Code-I1 | Code-I2 | Code-I3 | Code-I4 |
| --- | --- | --- | --- | --- |
| _E1_: Residual defect density (i.e. actual reported defects that were not corrected after 1094 days) | Defect, Density, Residual | Defect, Density, Remaining | Defect, Density | Defect |
| _E2_: Number of tasks completed | Task, Number, Completed | Task, Number, Completed | Task, Size | Task |
| _E3_: Delay from the completion of this until next release is delivered to users | Delay, Release | Delay, Release | Delay, Release | Time |

Fig. 3: Study distribution per year

### _Modeling Purposes_

Kellner et al. [1] identified the reasons for SPS models and clustered them into six specific modeling purposes to perform the simulation of software processes. Zhang et al. [25] extended these to ten purposes based on their systematic review of published SPS studies. In their study, these purposes were grouped into three levels, i.e. cognitive, tactical & strategic. The cognitive level includes understanding, communication, process investigation, and education. The rest of the purposes are at both the tactical & strategic levels. We grouped the modeling purposes of the studies into eleven purposes (we identified a new modeling purpose). Different from the other ten purposes, the new purpose, paradigm comparison, is related to the modeling paradigm rather than to studying the software process. The purpose is to compare the strengths and weaknesses of different paradigms. In a study whose goal is to compare paradigms, paradigm comparison is the sole purpose of building the models. For example, a qualitative and a quantitative model of the typical software evolution process were built for comparing SD with a qualitative modeling diagram, and there is no further discussion about the value of the models themselves [Zhan 09]. The models built for paradigm comparison are simple but complete enough, judged by the degree to which the simulated behaviors interpret the process. From this point of view, these models are at the cognitive level. As shown in Fig. 5, we grouped 42 studies into the cognitive level and the remaining 103 into the tactical & strategic level. One study may have multiple modeling purposes; therefore, one study may have both cognitive and tactical & strategic modeling purposes. For such research, we consider its modeling granularity to be at the tactical & strategic level. Understanding and process investigation are the most common modeling purposes at the cognitive level. Prediction & planning and process improvement are the most common modeling purposes at the tactical & strategic level.

## 6 Findings

Kaner et al. [44] collected the definitions of measurement; a concise definition they recommend is provided by Fenton and Pfleeger as below [22].

_Formally, we define measurement as a mapping from the empirical world to the formal, relational world.
Consequently, a metric is the number or symbol assigned to an entity by this mapping to characterize an attribute._

In SPS modeling, a metric is sometimes also called a variable; in our opinion, metric is the more appropriate name since it implies the mapping from the real software process to the SPS model. However, the boundary between attribute and metric is not strict in SPS studies; e.g., _size of code_ and _number of defects_ are common names of two variables, yet the former is an attribute and the latter is a metric according to the definitions in the software measurement area. The metric for _size of code_ should be _lines of code_. This kind of ambiguity causes trouble when we classify them. Hence, we treat all variables as metrics and keep their original representation as much as possible. Although there exists a body of knowledge on software metrics [22], no systematic and comprehensive research on metrics for process modeling has been reported in the SE community. The selection of metrics in SPS models turns out to be more challenging than in static models because of the dynamic nature and the extra requirements for executability. Therefore, this study concentrates and reports on the metrics used and studied in SPS modeling only.

### _Metrics and Classifications (RQ1)_

We extracted a total of 2130 metrics used (or studied) in SPS models from the 145 reviewed studies. There are many identical or similar metrics in the data set; hence, we classified them for analysis. Based on the meta-model, our classification framework contains four levels of categories, from abstract to concrete: entities, attributes (with their sub-categories), and metrics.

Fig. 4: Study distribution per modeling paradigm

Fig. 5: Study distribution per modeling purpose (one study may have multiple modeling purposes)

#### 6.1.1 Categories of Entities

We refer to the categories and definitions in Fenton et al.'s classification [22]. We classify software metrics into three categories, i.e. _products_, _processes_, and _resources_ (aligned with a Goal-Based framework for software measurement), where _processes_ are activities that evolve over time and have to be completed in sequence during development, _products_ are generated from process activities including artifacts, deliverables, and documents, and _resources_ are the entities needed in performing activities. In each category, we further distinguish two types of metrics, i.e. _internal_ and _external_ metrics. An _internal_ metric can be measured only by the intrinsic properties of a _product_, _process_, or _resource_ on its own, without considering its observable behaviors. On the contrary, an _external_ metric is measured purely by taking into account the impacts that it may make on the _product_, _process_, or _resource_.

#### 6.1.2 Categories of Attributes

There is a large body of research on software metrics/measurement; however, the roles of metrics in SPS studies are different from their roles there, which can easily confuse modelers. For example, software measurement related studies are full of research on coupling and cohesion, whilst these are rarely used in existing SPS models. This implies that existing classification frameworks, which were built based on software measurement research and knowledge, are not suitable for SPS modeling. Furthermore, one of the most recognized frameworks, Fenton et al.'s classification framework, does not fully cover the metrics used in modeling.
In the study [22], only examples of attributes are presented for each entity, which is far from enough to guide modeling. In this study, we built a new classification framework of metrics used in SPS models following the thematic synthesis method described in Section 4.5. The framework is divided into four figures, i.e. Figs. 6-9, due to the space constraints of one page. In these figures, the first column shows whether the measured attributes belong to an internal or external entity. The numbers in brackets indicate the number of metrics that measure these attributes. The second and third columns show the attributes and their sub-categories. The last column lists typical metrics with their units. Each metric consists of two parts; the upper rectangle shows the name of the metric and the lower rectangle shows its unit. For example, the first metric in Fig. 6 is _size of code_ and its unit can be LOC, DSI or Function Points (FPs). The height of the rectangle of a metric is within the scope of the sub-category it belongs to, e.g., _size of code_ measures _artefact size_, specifically, code size. It should be noted that _size of code_ should be the name of an attribute from a semantic point of view; however, these are the minimum units in SPS models. We name these metrics following the coding technique and comply with their original expression in the reviewed studies. To achieve the goal of modeling, various means (indicated by units) of measuring a specific metric were used in different studies. This is largely determined by the modeling object and the method of measurement, which may lead to different units for the same metric. Separating all similar metrics (e.g., both LOC and DSI are metrics of code size) would make the results and discussion too trivial. Hence, we leave out some of the non-essential results. The unit also implies the scale type of a metric, especially for numerical data. As shown in Table II, scale types are nominal, ordinal (restricted or unrestricted), interval, ratio, and absolute according to ISO/IEC 15939. The absolute and ratio types are numerical data, which can be indicated by units. For instance, _# defects_ is absolute and _defects %_ is ratio. Besides, the type name of boolean, interval, and ordinal types is presented instead of the unit, since these metrics are dimensionless (i.e. there is no specific unit). For those metrics with a clear ordinal set or range, the ordinal set or range is also presented, e.g., _expertise of developer_ commonly has three levels (0, 1, 2) and _knowledge level_ has a finer level of granularity ranging from 0 to 100. The classification framework presented in the figures is elaborated for the _product_, _process_, and _resource_ categories as follows.

#### 6.1.3 Product Metrics

As shown in Fig. 6, the internal metrics are further divided into three sub-categories, i.e. _artefact size_, _artefact property_ and _defect_. _Artefact Size_ measures the amount of work product to be produced from the process. Generally, size is used to measure the scale of a project and further compute indirect attributes. Metrics for various types of artefacts were used in different studies; the collected metrics can be classified into five sub-categories as shown in Fig. 6. _Size of code_, which measures program size, is the most used metric among all the size metrics and is associated with the effort of development and maintenance and the faults generated [22].
Meanwhile, _size of requirement document_, _size of design document_, and _number of (#) test cases_ are widely used in SPS models. In detail, test cases are generated by either manual or automated approaches, where the latter takes into consideration the tools supporting model-based testing and relies on software specifications written in a more formal notation or structure [1]. In some studies such as [17, 22], the artefact size is abstracted into a number of arbitrary-sized "units" in order to represent some suitable measure of the size of the system in reality. The _# user stories_ and _# features_ are used in different project contexts. _Artefact Property_ is used to measure products at a more detailed level; e.g., code can be further measured by _# decision statements in the code_ and _# local and global variables_, and a document can be measured by _words per page_, etc. To be specific, _# control flows identified in the skeleton_ is used to estimate the effort required to perform the requirements analysis activity in a DES model [1]. _Coupling degree_ and _code complexity_ also belong to this category. The former is used in only one study and is measured by the probability that one component will be affected by another [1]. The latter can be measured in different ways. Smith et al. [16] used McCabe complexity to measure code complexity. Raffo et al. [1] suggest that the _number of decision statements_ in the code can be the metric of code complexity, while the _number of local and global variables_ and the _level of control-flow nesting_ are alternative metrics. The _Flesch-Kincaid Grade Level Score_ and _Gunning Fog Index_ are suggested as metrics of document complexity. _Defect Size_ measures the amount of defects that are injected, detected, and may remain throughout the development process. According to the life cycle of defects, they roughly include injected, detected, fixed, and remaining defects. From the perspective of measurement, the metrics are classified into size, density, percentage, and property sub-categories. The _# defects_ is the most frequently used type. The _defect density_ and _percentage of fixed defects_ evaluate the quality of the product from two different angles: the former evaluates the product itself and the latter evaluates the progress of fixing defects. _Defect Property_ is used to measure defects at a more detailed level. Properties such as _type_ and _severity_ are introduced in several studies [12, 13, 14, 15] to make the models behave closer to reality. The _type_ can be ordinal data that indicates the phase in which the defect was injected [12]. The _severity_ can be ordinal data with three to four levels; e.g., four levels of defect severity are modeled (i.e. "easy", "medium", "hard", and "very hard") by Zhang et al. [14]. The _external metrics_ of the product are not commonly used in SPS. They measure the _quality attributes_ of the product such as _reusability_, _maintainability_, _quality_, etc. While _quality_ is the most commonly used metric in this sub-category, the means to measure the _quality_ of code and documents vary from model to model. Pfahl and Lebsanft adopted an SD model to analyze the impact of software requirement volatility, in which the _quality of the system_ is measured by the number of replacing requirements [13]. From business value concerns, Madachy [15] measured _quality_ by the number of defects. Only one study used the _maintainability_ metric and measured it using the equation \(Maintainability = 13.12 \cdot Complexity + 0.17 \cdot Effort + 3.87 \cdot Size\) [1].
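As a purely illustrative plug-in of that equation — the coefficients come from the cited study [1], while the input values Complexity = 10, Effort = 100, and Size = 5 are invented here and their units follow that study — the computation is:

$$\mathit{Maintainability} = 13.12 \times 10 + 0.17 \times 100 + 3.87 \times 5 = 131.2 + 17.0 + 19.35 = 167.55$$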
The _reusability_ and _reliability_ metrics are commonly based on COCOMO II; they are used as cost drivers with a nominal value of 1 [14, 15].

#### 6.1.4 Process Metrics

With the focus on process modeling, Figs. 7 and 8 indicate that there is a large set of process-related metrics that can be further classified into a number of categories such as _time_, _task_, _effort_, etc. As a distinction between dynamic process models and static process models, there are four main types of _time metrics_. The first is _duration_, which denotes the time spent on a single phase or the entire process. The model simulates a real process over a period of time, no matter whether _duration_ is an explicit metric indicated in an SPS model. _Calendar date_ could be an alternative metric to _duration_ when the model has many overlapping duration metrics. In a stochastic simulation model for risk management, _the start dates and end dates of every risk_ are measured [16]. The third type is _work time_, which is often used as a multiplier in the model to simulate _# hours worked per day_ [12]. _Delay_ is the last type of time metric, such as _hiring delay_, _delay from completion to release_, _new requirements feedback delay_, etc. As an alternative metric to artefact size in SPS, _task size_ runs through the phases of the development process as an indicator of the job size of the corresponding process (phase). _Change_ occurs during requirements, design, implementation, or throughout the entire process. It is one of the major reasons for _rework_. From the reviewed studies, this kind of metric is not widely used in SPS models. _Increment_ represents the process in which new requirements or increments are added to the development process. Both the _# new requirements added_ and the _new requirement generation rate_ are metrics that measure this process. _Rework_ is the feedback of _change_ and _defect activity_. The _amount and percentage of tasks that required rework or were reworked_ are _rework_ metrics. We classified the _rate of rework_ into the _work rate_ category to emphasize the activity and the 'speed' of the process. We introduced _defect_ in the product entity. There are also some defect-related metrics that should be process metrics; we classified them into _defect activity_. Metrics in _defect_ measure the static attributes of product defects, such as defect density.

Fig. 6: Classification of Product Metrics

Metrics in _defect activity_ measure the attributes of dynamic activities, e.g., _defect injection rate during coding_. The rate can be measured by the number of defects injected per time unit, and can also be measured by the number of defects per artefact size (LOC). _Work rate_ represents the 'speed' at which activities are executed, which covers the entire life-cycle, including the processes of design, implementation, review, testing, releases, etc. _Development rate_ is the most common one, which measures the primary activity simulated in a model. Only using the _development rate_ regards the process as a whole and does not distinguish the speed of different activities in the process. On the contrary, the speed of different activities can be measured respectively by _design rate_, _implementation rate_, etc. _Effort_ is used (or studied) in numerous models as it is a special interest of process modeling. Its metrics appear as, for example, _design effort_, _rework effort_, or even _total effort_, in the corresponding phases except requirements.
The effort can typically be measured in _person-hours_, _person-days_ and _person-months_, depending on the resolution of the model. In software process research, _Effort_ is regarded as the major contributor to _cost_ or as its alternative [12]. _Risk_ rarely appears in SPS models. In the model that aims to investigate the impact of risk, the number, impact and occurrence time of risks are used [15]. _Overhead_ results from situations where extra work beyond the tasks cannot be avoided. _Communication overhead_ is the most common type of overhead. We classify these into the process category because they are generated from the communication activity (or other activities) in teamwork. Other kinds of overhead such as _task switch overhead_ and _training overhead_ are also considered in several studies [1, 10]. _Process factors_, which generally denote the managerial and organizational factors, make an impact throughout the whole process. In SPS models, _policies_ are the most common metrics in this sub-category but vary significantly among different models. _Schedule pressure_ is a common factor of _productivity_ or _work rate_. Various factors were used in the studies, which are difficult to exhaust here. Various _discovery factors_, which determine the proportion of elements that might possibly progress to the next stage, can be calibrated using historical data [20]. The determination of what factors should be considered depends on specific project characteristics and organizational characteristics. _Personnel continuity_ relates to the training and turnover processes, and its metrics measure the 'speed' of these processes. For example, to model the human resource evolution process, the _hiring rate_, _dismissal rate_ and _turnover rate_ can be considered [14]. _Process static status_ includes the metrics that are predefined and do not change during the modeled process, such as the _stage of the development life-cycle_, _# process increments_, etc. The _stage of the development_ can be combined with product features such as _system type_ to measure the _capacity_ [13]. _Process external metrics_ are occasionally observed in the reviewed models. They include _effectiveness_, _cost_, and _process performance_, of which _effectiveness_ and _cost_ were used a lot. For the software process, there are many concerns that can be defined as indicators of _effectiveness_, such as _test requirement defect detection effectiveness_ [1]. Likewise, _cost_ is commonly measured by monetary value as well as effort in SPS studies for various purposes, such as _coders cost_, _estimated budget_, _cost to repair defects_ and so forth.

Fig. 8: Classification of Process Metrics (Part 2)

#### 6.1.5 Resource Metrics

Different from product and process, Fig. 9 shows that modelers pay more attention to the external entity for resource attributes, since only the size or role of a resource can be measured by itself. Resource properties such as manpower skill cannot be measured directly without any external reference. _Manpower_ measures the number of human resources for development. According to the modeling granularity and the scale of the simulated project, studies may regard all the resources as one team or allocate individuals to different phases. _Environment_: few studies considered the _environment_ in SPS models. As an example, _test facility availability_ is introduced to model the resource constraint on the test phase [10].
_Multi-site development_, i.e., whether the project is developed at multiple sites, is a factor suggested by COCOMO II. To quantify the effect or value of the manpower in a process, five basic _manpower property_ metrics, i.e., _productivity_, _capability_, _expertise_, _experience_, and _skill_, can be measured. _Productivity_ forms the basis of the contribution to the _development rate_. The other four metrics are factors that determine _productivity_. _Expertise_ and _knowledge_ are the same metric. Different from _expertise_, _experience_ already takes the domain knowledge into account and is often simply measured in years when no historical data are available. _Manpower factor_ consists of all the influencing factors related to the resource, with great diversity, ranging from _self esteem_ and _team cohesion_ to _native language_. Only the human resource _utilization_ is considered in existing studies; although both individual and team _utilization_ were used, they occurred in only a few studies.

**Findings:** 1) For most reviewed studies, only the metrics that are deemed to be significant are described in detail. 2) Product and process external metrics are not used frequently in process simulation modeling, whilst resource external metrics are widely used.

### _Causal Relationships between Metrics (RQ2)_

SPS models are built to simulate the process by which the development team produces software products by carrying out a series of development-related activities under the constraints and support of the environment. In this process, activities are organized together in a certain workflow and affect each other, and people are the specific actors of the activities. People are influenced by the environment and various other factors, and may also react to them. SPS models use different blocks (defined by the paradigm) to depict different elements in the process, including people, activities, environments, etc. These blocks are instantiated implementations, in a specific simulation paradigm, of the metrics discussed in this study. At the implementation level, an SPS model is made up of blocks and their relationships to each other. The implementation of the model depicts different metrics and their interrelationships in reality. We discussed the metrics used in existing research in RQ1, and RQ2 discusses the causal relationships between metrics.

We collected all the causal relationships (from metric A to metric B) that can be identified in the included papers. As a result, we identified 183 types of causal relationships. Table VII presents the high-frequency causal relationships (those with more than 10 occurrences across all SPS models; a relationship may appear multiple times in a model), and we discuss them in more detail below.

From _manpower property_ to _manpower property_. This is the most used causal relationship. The most direct relationship between software developers and software development activities is the metric _productivity_, which describes a person's ability to participate in activities. Productivity is the most common one among the metrics of manpower property. _Productivity_ can be affected by factors such as _experience_ and _knowledge_, which also belong to manpower property [11, 12, 13]. In addition, _productivity_ can be further refined. For example, _real-time productivity_ can be composed of _growth_ and _baseline/average productivity_ [1, 10]. Metrics such as _experience_ can also be further refined.
Fig. 9: Classification of Resource Metrics

For example, the _experience of inspectors_ can be affected by _development experience_, _inspection experience_, and _domain experience_ [25].

From _defect size_ to _defect size_. Defects can be in different states in the software life cycle. The change of state is reflected in the change of quantity. The _total of defects_ can be the sum of _open defects_, _in progress defects_, _waiting to test defects_, _reopen defects_, and _resolved and closed defects_ [26]. Besides, the _number of detected defects_ can be affected by the _number of generated defects_ [26]. The _number of escaped defects_ can be the difference between the _number of detected defects_ and the _number of fixed defects_ [11]. Furthermore, defects can also be described as different types. The _total of detected defects_ can be the sum of _detected passive defects_ and _detected active defects_ [26].

From _change_ to _change_. This type of causal relationship was mainly found in study [26] and study [26]. In study [26], the _sprint change capacity_ is based on the _daily change capacity_. The rest of the relationships were found in study [26], in which _change_ is also known as an issue request. The _change_ is modeled in detail based on the life-cycle of issues in the issue tracking system. For example, the _number of issues waiting for review_ is affected by the _number of sprint issues_. The _total number of issues_ is the sum of _duplicated and invalid issues_, _open issues_, _in progress issues_, _sprint issues_, _issues waiting for review_, _resolved and closed issues_, and _reopen issues_.

From _work rate_ to _work rate_. One type of _work rate_ may be a composite of many other rates. For example, the _nominal development rate_ can be the sum of the _experienced employee development rate_ and the _new employee development rate_ [26]. Besides, any type of _actual work rate_ can be based on a type of _baseline work rate_ [26].

From _artefact size_ to _artefact size_. Artefacts can have different origins or be in different stages of development. The _project size_ can be the sum of the _remaining size_ and the _completed size_ [26]. The _rate of generating requirements_ can be affected by feedback from the _number of generated new requirements_ [26]. The _number of generated new requirements_ can be based on the _number of exogenous requirements_ [26].

From _artefact property_ to _artefact property_. The refinement of _artefact property_ may not be so common, but it can be very fine-grained. Study [26] used 15 different kinds of _complexity_ metrics to quantify the _comprehensive complexity_ of software systems. Study [2] modeled the relationships among the _number of control flows identified in the skeleton_, the _number of control flows written in the use cases that required rework after inspection_, and the _number of control flows written in the use cases_. Study [2] used the _ambiguity and brittleness of test code_ to quantify the _smell of test code_.

From _process factor_ to _process factor_. One _process factor_ can be broken down into a number of different factors. For example, the _relevance of factoring_ is based on a combination of multiple factors including _relevance productivity degree_, _relevance of sprit course_, _relevance of reviews_, _relevance of commitment-loyalty_, and _relevance of customer satisfaction_.

From _time_ to _time_. _Time_ usually describes the duration or delay of an activity.
An activity contains a start time and an end time, and the difference between the two is the duration [26]. The _duration or delay_ of an activity can be based on a _mean or baseline value_ [27]. The _average productive time_ may be affected by the _time loss due to work partitioning_ [27].

From _manpower_ to _manpower_. Manpower can be differentiated based on experience or other factors. The _total number of employees_ can be the sum of the _number of trained employees_ and the _number of experienced employees_ [28]. The _experienced employees_ can be the sum of the _experienced employees for development_ and the _experienced employees for training_ [Zhang2006Semi]. The _daily available workforce_ can be based on the _total workforce_ [26].

From _work rate_ to _artefact size_. This type of relationship describes a common situation where the size of an artefact accumulates based on the rate of work. For example, the _specification units to be processed_ is based on the _specification unit completion rate_ [28]. In addition, combining the _software development rate_ with other factors can also determine the _requirements not met correctly_; under this type of relationship, the latter is not a simple accumulation of the former [26].

From _effort_ to _effort_. Effort can be differentiated based on different activities. For example, the integration effort can be the sum of the _project assessment effort_, _project tailoring effort_, and _glue code effort_ [26]. Furthermore, the _development effort_ can be calculated from the _expected development effort_ and the _growth effort_ [27].

From _environment_ to _manpower property_. The environment is the main factor that affects manpower. For example, environmental factors such as _market salary_, _working environment_, _team management_, and _reward_ may affect the _motivation_ of developers [11].

From _manpower property_ to _work rate_. The rate of development (_work rate_) is based on human productivity (_manpower property_). For example, the _new employee development rate_ is based on the _new hired workforce productivity_ [26].

**Findings:** 1) High-frequency causal relationships are mainly relationships between metrics of the same type. This indicates that SPS models typically refine the main types of metrics such as _manpower property_, _defect size_, _work rate_, _change_, _artefact size_, _artefact property_, _process factor_, _time_, _manpower_, etc. 2) Furthermore, even though the main line of the software life cycle is clear, we do not see any relationship appearing in more than 10% of the models. This fully demonstrates the diversity of SPS models: specific problems require specific analysis, and even the same problem may be implemented differently.
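To make these recurring patterns concrete, the toy, SD-style sketch below wires together a few of the high-frequency relationship types listed above: _manpower_ refined into experienced and new employees, _manpower property_ (productivity) driving _work rate_, _work rate_ accumulating _artefact size_, and _effort_ accumulated over time. It illustrates only the relationship structure; every variable name and numeric value is invented for the example and is not taken from any reviewed model.

```python
# Toy, SD-style illustration of a few high-frequency causal-relationship
# patterns between metrics. All names and numbers are invented.

def simulate(days=200, dt=1.0):
    # Manpower -> manpower: total staff refined into sub-populations.
    experienced, new_hires = 8.0, 4.0
    # Manpower property: baseline productivities (tasks per person-day).
    prod_experienced, prod_new = 1.0, 0.4
    # Artefact size: total job size and completed size (tasks).
    total_size, completed = 500.0, 0.0
    effort_person_days = 0.0

    step = 0
    while completed < total_size and step < int(days / dt):
        # Manpower property -> work rate: each rate derives from productivity.
        rate_experienced = experienced * prod_experienced
        rate_new = new_hires * prod_new
        # Work rate -> work rate: the nominal rate is a composite of sub-rates.
        development_rate = rate_experienced + rate_new

        # Work rate -> artefact size: completed size accumulates from the rate.
        completed = min(total_size, completed + development_rate * dt)
        # Effort -> effort: total effort is summed over all staff and time.
        effort_person_days += (experienced + new_hires) * dt
        step += 1

    return step * dt, completed, effort_person_days

duration, completed, effort = simulate()
print(f"duration={duration:.0f} days, completed={completed:.0f} tasks, "
      f"effort={effort:.0f} person-days")
```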
### _Metrics Selection Based on Purposes and Paradigms (RQ3)_

As shown in the meta-model, the metrics selected for building the simulation model are determined by the information needs for building the corresponding conceptual model, which in turn depend on the modeling purposes. Kellner et al. [1] also indicated that many aspects of what to simulate are inter-related and driven by the purpose. Modelers simulate the process at different granularity levels, since it is nearly impossible, as well as unnecessary in some cases, for a modeler to simulate every detail of a process. Hence, the selection of the appropriate metrics is critical to meet the needs of different modeling granularities. To a great extent, it depends on the specific purpose of a study.

| From | To | Frequency | Citations |
| --- | --- | --- | --- |
| Manpower Property | Manpower Property | 44 | [Mada 07, Lehm 10, Zawe 13, Neu 02b, Neu 02a, Zhan 08a, Cher 10, Spas 12, Oors 18, Hana 98, Fate 18, Klun 18] |
| Defect Size | Defect Size | 22 | [Take 03, Mart 00, Zhan 08a, Zhan 08b, Zhan 18] |
| Change | Change | 22 | [Zhan 18, Klun 18] |
| Work Rate | Work Rate | 20 | [Zhan 06b, Wern 02, Oors 18, Wern 99, Zhan 18] |
| Artefact Size | Artefact Size | 19 | [Zhan 06b, Hall 05, Aran 08, Zhan 09, Wern 99, Zhan 18, Klun 18, Duga 21] |
| Artefact Property | Artefact Property | 18 | [Aran 08, Shuk 12, Aker 17] |
| Process Factor | Process Factor | 15 | [Oors 18, Zhan 18, Klun 18] |
| Time | Time | 15 | [Hisis 99, Zhou 12, Mish 16, Wake 04, Ruiz 02b, Zhan 18, Klun 18] |
| Manpower | Manpower | 12 | [Zawe 13, Mada 05, Zhan 06b, Ambr 11, Tram 16, Kahe 01, Zhan 18] |
| Work Rate | Artefact Size | 12 | [Lehm 10, Wern 02, Hall 05, Zhan 09, Wern 99, Zhan 18, Duga 21] |
| Effort | Effort | 10 | [Chou 06, Naun 07, Aran 08, Noll 16] |
| Environment | Manpower Property | 10 | [Hurt 15, Fate 18, Klun 18] |
| Manpower Property | Work Rate | 10 | [Lehm 10, Zhan 06b, Mada 00, Wern 02, Zhan 08a, Coc 11, Mish 16, Oors 18] |

TABLE VII: High-frequency causal relationships between metrics

Existing SPS models were classified into Cognitive Models (CMs) and Tactical & Strategic Models (TSMs) according to their modeling purposes, as shown in Fig. 5. RQ3 discusses the commonalities and differences between CMs and TSMs from the perspective of the causal relationships used. In addition, the diversity and complexity of software processes, which can be reflected in the modeling purposes, determine the different capabilities of simulation paradigms needed [25]. RQ3 also discusses the differences in modeling granularity between different paradigms.

#### 6.3.1 Comparison between two levels of purposes in modeling

We identified 217 causal relationships (the same type can be counted multiple times according to its frequency) in CMs and 462 causal relationships in TSMs. Fig. 10 and Fig. 11 present diagrams of the causal relationships between metrics in CMs and TSMs, respectively. If we included all the identified relationships, the diagrams would be too complex to understand. As a compromise, we use a frequency greater than or equal to 2 (about 1% of the total) as the threshold for CMs. Considering that the number of relationships identified in TSMs is approximately twice that identified in CMs, we set the threshold at 4 for TSMs. As a result, Fig. 10 contains a total of 51 types of causal relationships (each type is counted only once) involving 15 types of metrics, and Fig. 11 contains a total of 39 types of causal relationships (each type is counted only once) involving 17 types of metrics. In the diagrams, we use boxes, ovals, and parallelograms to denote product, process, and resource metrics, respectively. The darker the fill, the higher the frequency with which the metric is used.

Fig. 10: Diagram of causal relationships between metrics in cognitive models (frequency \(\geq 2\)).
We used differently colored arrow curves to indicate the frequency of each relationship. The green, blue, orange, and red colors indicate frequencies greater than or equal to 2 (4), 4 (8), and 8 (16) for CMs (TSMs), respectively.

Fig. 11: Diagram of causal relationships between metrics in tactical & strategic models (frequency \(\geq 4\)).

**From the perspective of metrics:** There is a clear overlap in the high-frequency metrics in CMs and TSMs, including _artefact size_, _defect size_, _work rate_, _task_, _time_, _process factor_, _manpower_, and _manpower property_. These all represent the basic elements of a software process, namely actors, artefacts, and activities. With these metrics, it is possible to describe a software development process at a macro level. This is probably the reason for their high frequency. The frequency of _effort_ is not high in TSMs but is high in CMs. The frequencies of _change_ and _defect activity_ are high in TSMs but not in CMs. Furthermore, _effectiveness_ appears in TSMs but not in CMs.

**From the perspective of causal relationships:** There are four types of relationships that are high-frequency in both CMs and TSMs, including _from artefact size to artefact size_, _from defect size to defect size_, _from manpower to manpower_, and _from manpower property to manpower property_. This shows that CMs and TSMs have commonalities in the fineness of modeling artefacts, manpower, and defects; that is, different artefacts, manpower, and defects are distinguished, and the properties of manpower are further refined. It should be noted that, although _from manpower property to manpower property_ is a high-frequency relationship in both CMs and TSMs, its frequency is as high as 38 (8.2%) in TSMs, whereas the frequency in CMs is only 6 (2.8%). Besides, the frequencies of _from defect size to defect size_ in TSMs and CMs are 18 (3.9%) and 4 (1.8%), respectively. Most of the high-frequency relationships in CMs are in the form of a relationship _from one metric to work rate_, whilst these are not high-frequency relationships in TSMs (except _from work rate to work rate_). Even so, the types of relationships around _work rate_ are the most diverse in both CMs and TSMs. _From time to process factor_ and _from manpower to time_ are high-frequency relationships in TSMs, whilst there are no such relationships in CMs. _From effort to effort_ is a high-frequency relationship in CMs, but not in TSMs. _From artefact property to artefact property_, _from time to time_, and _from process factor to process factor_ are high-frequency relationships in TSMs, but not in CMs.

**Findings:** There are significantly more causal relationships between metrics of the same type in TSMs, whilst relationships around _work rate_, _artefact size_, and _effort_ are more frequent in CMs. This indicates that CMs tend to model directly around the final result (e.g., effort, output artefact) at a macro-level granularity, whilst TSMs are more concerned with a micro-level granularity. TSMs tend to use multiple metrics within the same type to model an aspect (such as _process factor_, _manpower property_, _defect size_, etc.) in detail.

#### 6.3.2 Comparison of different modeling paradigms in modeling

In order to simulate processes at different granularities, appropriate modeling paradigms need to be adopted. For example, ABS is able to simulate each developer as an agent to study the impact of _module complexity_ on _individual productivity_ and _motivation_.
At this fine-grained level, where different developers need to be simulated separately, the model would be very complex using the SD paradigm. In the existing research, the DES and ABS paradigms have never been used in models that only stay at the cognitive level. Using different simulation paradigms means that some of the metrics used will also change. Metrics that measure the same attribute need to change for different simulation paradigms, which is indirectly determined by the modeling granularity. Below are some examples illustrating the main differences. Probability metrics, which indicate the probability that an event or state may change, are only used in DES and statistical models. For example, a DES model used _the base probability_ \(P(D_{k}^{i}(t))\), which specifies how likely it is that team \(i\) will finish component \(k\) after having worked on it for \(t\) time units [45]. In an SD model, by contrast, _# components per day_ may be used to indicate the work rate. For paradigms that are good at macro-processes, like SD, it would be complicated to simulate, as separate events, the activities in which different teams implement different components. For similar reasons, ABS models are better at modeling different developers in detail. For example, Cherif and Davidsson [46] built an ABS model that simulates the difficulty of task \(j\) as _the difference between the level of knowledge \(b_{ij}\) and the required level of knowledge \(\theta_{j}\)_, where \(b_{ij}\) denotes _developer \(i\)'s knowledge about activity \(j\)_. Cherif and Davidsson compared the ABS model with an SD model and indicated that the main difference is that an SD model inputs the average value of individual characteristics (_manpower property_) as an alternative.

**Findings:** Compared with the SD model, DES and ABS can provide more detailed simulations from the perspective of development activities and individual developers, respectively, which is reflected in the more detailed metrics they use. In contrast, SD tends to use the mean as an alternative. This reduces the difficulty of obtaining metrics under the premise of less impact on macro-process research.
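The following sketch contrasts the two levels of granularity just discussed: an SD-style view that collapses the team into one average _manpower property_, and an ABS-style view in which each developer agent contributes according to its own knowledge gap with respect to the task (in the spirit of the \(b_{ij}\) versus \(\theta_{j}\) formulation above). The contribution rule and all numbers are invented for illustration and are not taken from the cited models.

```python
# Illustrative contrast between SD-style (team average) and ABS-style
# (per-developer agent) granularity. All numbers are invented.

knowledge = [0.9, 0.7, 0.5, 0.3]   # per-developer knowledge about the activity
required = 0.6                     # knowledge level the task requires (theta)
base_productivity = 1.0            # tasks per person-day when fully competent

# ABS-style: each agent is simulated separately; a developer only progresses
# if their own knowledge covers the task, scaled by their knowledge gap.
abs_rate = sum(base_productivity * min(1.0, b / required)
               for b in knowledge if b >= required)

# SD-style: the same information is collapsed into a single average property.
avg = sum(knowledge) / len(knowledge)
if avg >= required:
    sd_rate = len(knowledge) * base_productivity * min(1.0, avg / required)
else:
    sd_rate = 0.0

print(f"ABS-style development rate: {abs_rate:.2f} tasks/day")
print(f"SD-style development rate:  {sd_rate:.2f} tasks/day")
```

With a non-linear, per-agent rule such as this one, the aggregate SD view and the agent-level ABS view no longer coincide, which is the kind of difference that the choice of paradigm is meant to capture.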
### _RQ4: Revisit Data Issues and Coping Strategies_

As early as two decades ago, Raffo and Kellner [1, 39] discussed several situations that might arise when measuring process metrics in simulations and the possible coping strategies, which laid the foundation for subsequent research on SPS models. SPS models have encountered many new challenges over the years, and data issues have undoubtedly been one of the major challenges of SPS. By reviewing the 145 included papers, although not every paper fully discussed its own data problems during modeling, we were still able to summarize most of the data problems encountered during the data preparation stage of simulation modeling. Furthermore, we included two related studies [39, 47] as a supplementary data source for RQ4. We identified these two studies in the literature selection stage. Although these two studies did not meet our selection criteria C1 and C2, they reported experiences related to data issues. The inclusion of these two studies can help us to discuss data issues more comprehensively.

#### 6.4.1 Data issues encountered in SPS data preparing

Specifically, we focused on the data issues encountered in the construction of SPS models. To facilitate understanding, we divided the data issues into three steps that align with the data preparation process of SPS, namely data provenance issues, data collection and processing issues, and data measurement issues. Table VIII summarizes these issues.

| Data preparing step | Encountered data issue | Specific issues | Metrics (e.g.) | Ref. |
| --- | --- | --- | --- | --- |
| Data Provenance | Availability | No readily available meta data in real-world projects | Rework Effort | [18, Raiz 02a, 19] |
| | | Variables in the model require data not available in the literature | Requirements Volatility | [18, Raiz 02b, 19] |
| | | Records missing | Not Specified | [18, Chen 06, 19, 20, 21] |
| | | No quantitative information available | Not Specified | [18, Raiz 17], [20] |
| | Scarcity | Samples used were limited to a few projects | Not Specified | [18, Raiz 01, Raiz 02a, 19] |
| | | Scarcity of available information on samples | # Maintenance Requests | [18, Raiz 01, Raiz 02a, 19] |
| | Reliability of metadata | Systematic biases are likely to exist in the repository data | Task's State | [18, Raiz 02b, 19] |
| | | Expert estimates are impeded by a lack of intuition for some parameters | Not Specified | [18, Raiz 04, 19] |
| | Diversity | Data diversity due to different organizational structures and project environments | Not Specified | [18] |
| | Traceability | Data "silos" between different artifacts and sources | Requirements; Defects | [18] |
| Data Collection & Processing | Understanding | Misunderstanding of the real meaning of the data; difficult to find accurate descriptions for data fields because they are distributed in various data sources | Not Specified | [18] |
| | Accessibility | Inability to access various interfaces | Not Specified | [18] |
| | Time-span | Difficulty in the choice of the time-span of historical data | Not Specified | [18, Raiz 04, 19], [19] |
| | Processing | Time and effort consuming | Software Developer Time & Project Costs and Effort | [18, Raiz 00, 19] |
| | | Considerable variability, outliers and noise | Not Specified | [18, Raiz 04, 19] |
| Data Measurement | Definition | Loosely defined | Not Specified | [18] |
| | Granularity | The simulation model is an abstraction of the real-world process; the metrics are coarser measures than the actual data records | Not Specified | [18, Raiz 03, 19] |
| | Accuracy | Many of the model's parameters could only be estimated; the accuracy of the measurements is questionable | Not Specified | [18, Raiz 04, 19] |
| | Completeness | Not possible to measure all variables and their relevance accurately | Not Specified | [18, Raiz 04, 19] |
| | Measurement method | Existing studies rely on the use of industrial data averages, expert estimates, or values acquired from analytical models | Not Specified | [18] |
| | Quantifiability | Unquantifiable variables | Skill Levels, Manpower, Communication Overhead, Review Support, Process Maturity, and Tool Support values | [18, Raiz 04, 19] |
| | | Unquantifiable variables' relationships | Interactions among developers - Maintenance efforts | [18, Raiz 04, 19] |
| | Dynamic | Empirical distributions of variables' variations are unknown | Growth and changes of developers' behaviors; effort variations; developers' allocation | [18, Raiz 04, 19] |
| | Reliability of outcome | Do the above measures provide reliable results | Not Specified | [18, Raiz 04, 19] |

TABLE VIII: Data issues encountered in the SPS data preparing process (evidence from our included studies and related work).

**Data Provenance Issues.** _1) Availability._ Software processes generate all sorts of complex data, and the availability of data has always been the greatest challenge in simulation modeling research. It was evident from all 17 studies that data availability was a problem. Not all activities are documented, and most of the data are not available through the relevant literature. There are missing data records; for example, there are no records of _rework efforts_ for any activities [16]. In particular, quantitative data sources are unavailable (e.g., the _rate of job size added by requirements volatility_ [14]).

_2) Scarcity._ Even if data are obtained, data scarcity is a common issue for modelers. In most simulation studies, only a small amount of data was available for simulation since the histories of the selected projects are too short [16]. A general problem with the recorded data in projects is that it _does not provide adequate information_ to simulate the real variable changes and interrelationships [15].

_3) Reliability of metadata._ Modelers are concerned about the authenticity and reliability of the data when they come into contact with the real data source. Both quantitative and qualitative data are obtained from software repositories, manually recorded documents, and expert estimation, whilst these data sources are all more or less error-prone. For example, software developers may incorrectly choose defect types, forget the defect location time, or delay updating the requirements and planning schedule documents [17], [18].

_4) Diversity._ As a result of the differences in data types, record forms, project processes, and organizational structures, there will naturally be a variety of process data [15]. Because of different organizational and project development environments, data sources can be distributed over a variety of online and offline sources without direct correlation. If data from multiple stages and activities of the software process are involved in the simulation model, it is challenging to ensure the consistency of the data.

_5) Traceability._ It is of interest to restore the true evolution of process data by building traceability between them. There is a lack of discussion of this aspect of the problem in previous studies and of how they deal with it. Ali et al. [47] noted that real-world projects have "silos" between process data information. The requirements repository stores process information about requirements, while defect reports are stored in another database, i.e., the issue tracking system. As a result, despite the existence of both data points, we are unable to use them since their connections have been missed.
**Data Collection & Processing Issues.** Even when the data sources for modeling are available, the modeler still struggles with how to collect the data and how to process it to meet subsequent simulation modeling demands.

_1) Understandability._ The process data may be distributed across various data sources in various departments; therefore, it might be difficult to understand the accurate meanings of the descriptions of data fields in documentation [47]. Modelers must frequently consult domain experts about data fields, even if the name seems straightforward. Field names differ between organizations, teams, and databases. For instance, in the issue tracking system, the release version number may be defined as "baseline", whilst it might be defined as "version" in the requirements management repository.

_2) Accessibility._ In order to export data from multiple data sources, diverse interfaces must be accessible. Accessing these interfaces still requires permissions and authorizations from various business departments, making the data collection process extremely complicated and time-consuming. A risk of data collection failure exists due to data sensitivity and permission concerns.

_3) Time-span._ It has been recommended by Ali et al. [47] to discuss the selection of the historical data time-span for the construction and calibration of SPS models with domain experts. In order to simulate the change distribution of variables and the interaction between them, a long enough time period is required [13]. When simulating the current reality with historical data, sufficient historical data are required so that potential changes can be understood adequately. For example, if the development process, programming language, or development platform changes, modelers need to be aware of these changes.

_4) Processing._ Obtaining, collecting, processing, and analyzing data is undoubtedly a complex process that requires considerable time and effort [14, 15, 16] due to issues related to data diversity, reliability, understandability, etc. The current data collection and processing activities in SPS research still rely on manual methods, e.g., data format conversion and outlier and noisy data processing.

**Data Measurement Issues.** The primary purpose of data preparation should be to provide the data necessary for the simulation model to measure the model metrics, which requires measuring the real values of the model variables and their relationships. Due to this, SPS places a great deal of importance on the measurement of metrics. Based on a summary of the measurement issues reported in previous works, we classified them into eight categories: the definition, granularity, accuracy, completeness, measurement method, quantifiability, and dynamics of metrics, as well as the reliability of the outcome of SPS models.

_1) Definition._ The meta-model illustrates that the metrics used in SPS models are based on the information needs used in the construction of the conceptual model, which depends on the purpose of modeling. Due to cognition differences, modelers may not be able to comprehend the details of the development process. They might ignore the difficulty of measurement and define inappropriate metrics as a result.

_2) Granularity._ Often a modeler cannot simulate every detail of a process because it is either impossible or not necessary to do so. Metrics should therefore be selected and measured based on the modeling granularity. However, in many cases, real-world data is too fine-grained to match the assumptions of the model [17].
As an example, an SD model assumes no difference between the input variables (e.g., new requirements). However, there is a huge difference between different requirements in actual circumstances, including the difficulty, priority, dependency, and the number of developers that should be invested in them [1]. All of these factors may affect the size of the job.

_3) Accuracy._ The accuracy of measurement is also a common problem in SPS modeling. The accuracy is often questioned due to the quality of the data and the choice of measurement methods. When available data sources are limited, many variables in the model have to be estimated by experts or taken from other literature. Expert estimates, however, are derived from subjective perceptions and are always hampered by a lack of intuition [1, 15].

_4) Completeness._ Moreover, not all variables can be accurately defined and measured. Due to issues relating to data availability, quality, and metric measurement, modelers have to adjust the structure and scope of the model, which may also affect the completeness of SPS models.

_5) Measurement methods._ Measurement methods typically include mean value estimation, probability estimation, distribution fitting, and stochastic simulation using Monte Carlo. Moreover, existing SPS studies use industry averages, expert empirical estimates, or values derived from analytical models [48]. These methods are far too simplistic to meet the modeling demands of capturing the real changes of variables and their reciprocal effects.

_6) Quantifying._ Many process metrics, such as human aspect factors [13] (e.g., skill levels, interactions among developers), are difficult to quantify but are extremely critical for the simulation model because they greatly influence productivity [14]. Quantifying the causal relationships between variables accurately is a classic and challenging task [47].

_7) Dynamic._ Despite improvements in data sources and quality, modelers are still not able to fit the true dynamic distribution of variables from historical data. For example, it is difficult to measure the growth of developers as well as the impact of this growth on their behavior [Lune 21].

#### 6.4.2 Reliability of outcome

All of the data preparation work ultimately serves to construct and calibrate the simulation model. This in turn directly affects the reliability of the simulation results. In other words, it is challenging to validate and ensure that the artefacts, activities, and actors modeled in an SPS model are reliable enough to be able to generate plausible results [Choi 06, Lune 21]. This is something we ultimately need to address to meet the modeling purpose.

**Findings:** 1) Throughout the evolution of modeling work, the availability of data has been a major concern. 2) Additionally, the modeling requirements for the model metrics (i.e., modelling the changes of real process variables and their mutual influence relationships) present a big challenge to modelers. 3) There are specific problems encountered at various stages of the preparation process, including problems with reliability, diversity, traceability, the historical data time span, the data interface, the format of the data, the processing time, etc.
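As a deliberately simplified illustration of the distribution-fitting and Monte Carlo measurement methods listed under 6.4.1, the sketch below fits a lognormal distribution to a handful of hypothetical task-duration records and then samples from it to drive a stochastic model input, instead of feeding the model only the sample mean. The data values are invented, and the choice of distribution is an assumption made for the example, not a recommendation drawn from the reviewed studies.

```python
import numpy as np
from scipy import stats

# Hypothetical historical task durations in person-days (invented values).
observed = np.array([3.5, 5.0, 4.2, 7.8, 6.1, 4.9, 9.3, 5.6])

# Distribution fitting: estimate lognormal parameters from the sparse sample.
shape, loc, scale = stats.lognorm.fit(observed, floc=0.0)

# Monte Carlo: sample the fitted distribution to feed a stochastic simulation
# input, rather than using only the deterministic mean-value estimate.
samples = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=10_000,
                            random_state=np.random.default_rng(42))

print(f"mean-value estimate:       {observed.mean():.2f} person-days")
print(f"Monte Carlo 5th-95th pct.: "
      f"{np.percentile(samples, 5):.2f}-{np.percentile(samples, 95):.2f} person-days")
```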
#### 6.4.3 The relationship between data issues and coping strategies

We identified ten coping strategies (CS1-CS10), as shown in Table IX. Most of them have already been discussed by Raffo and Kellner [39]; CS3 and CS9 are newly discovered strategies. Figure 12 shows a visualization map that allows one to assess the relationship between data issues, the strategies used to cope with them, and the amount of evidence provided in SPS studies to support those conclusions.

Fig. 12: The relationship between data issues and coping strategies

Since data availability has been a major challenge for modelers over the years, researchers have proposed a variety of solutions. When it is possible for a modeler to access multiple data sources, they can simultaneously use source documents and repositories, work with experienced practitioners, obtain estimated values from experts, get data from the literature, etc., to reduce bias. As shown in Table X, there are three common types of data sources that can be used when the available data for modeling is not sufficient: experienced experts, literature, and the organization's documents. _Experienced experts (CS4)_ are one of the main data sources for collecting and verifying metrics. For example, a manager in the distributed support department was conducting a process improvement program and provided us with an estimate of the range of potential productivity improvements [Pfah 04]. Furthermore, advice from experts is especially needed for qualitative data. _Literature (CS5)._ In some cases, if project data are not available, statistical metrics and typical functions from the literature can be used initially. A significant part of the cause-and-effect relationships covered in the Quality Assurance model is quantitatively supported by the data provided in study [51], which provides data concerning errors, costs, and duration. In study [49], the effects of different integration starting points on delivery time, productivity, and cost were analyzed. To calibrate GENSIM 2.0, Frost et al. [52] provided an example of a defect containment matrix from which values for calibrating fault injection rates and verification effectiveness can be derived. Wagner [53] provided much data on typical (average) verification and validation rates and rework efforts for defects of various document types. _Organization's documents (CS1, CS2)._ Organizations maintain a wide variety of documents which contain a number of metrics. Different organizations adopt different processes and use different tools; hence, the available evidence could not cover all types of documents. If measurement data sources are not available, it is suggested to use an approximate value (CS6) or to adjust the variables' measurement methods (CS7). In case all the above measures fail, modelers may need to simplify the scope and structure of the model again, or perhaps just drop the variables.

Studies [Pfah 99, Menz 02, Chen 06, Topi 08, Rus 14], [39] indicated that modelers work closely with experienced practitioners to fully understand software processes, so as to build models based on an in-depth understanding of the data. It is necessary to consult internal experts to determine whether the project has historical data of a certain scale within it and whether the necessary information has been recorded, so as to avoid the issue of data scarcity in modeling. As a means of avoiding repetitive data collection and processing steps due to information asymmetry and cognitive differences, researchers recommend integrating software repositories, data documents, and internal practitioners' opinions together. Without those strategies, manpower and time would be wasted. There is little discussion of misunderstandings, data accessibility risks, or time spans for history data, even though these issues were raised by Ali et al.
[47] and other modelers may face similar challenges. There are not many novel strategies provided by researchers for data measurement. Aside from that, we observed that the measurement problem still involves a number of difficulties, such as matching the modelling granularity, quantifying model variables, fitting dynamic distributions to variables, etc., for which few papers gave appropriate countermeasures. Undoubtedly, this will result in modeling bottlenecks. As a result of this situation, the subsequent simulation work might be held to an even higher standard. A subsequent study is expected to examine these problems in more detail and provide comprehensive and detailed solutions, thereby serving as a useful guide and practice for modelers.

| ID | Coping strategy | Ref. |
| --- | --- | --- |
| CS1 | Directly go to the source documents. | [Baum 17, Raft 00, Menz 02] |
| CS2 | Look for data in other parts of the organization. | [Menz 02], [47] |
| CS3 | Develop a survey to collect the needed data. | [Menz 02, Ferr 09] |
| CS4 | Work with experienced process participants. | [Pfah 99, Menz 02, Chen 06, Topi 08, Rus 14], [39] |
| CS5 | Get data from the literature. | [Bert 03, Ruiz 04, Turn 06, Rus 14] |
| CS6 | Use approximate or easily configurable data. | [Kous 07] |
| CS7 | Adjust model variables: another calculation method, using replacement metrics, adjusting the scope of variables. | [Baum 17], [39] |
| CS8 | Adjust model scope. | [39] |
| CS9 | Adjust the scale of historical data. | [Conc 13] |
| CS10 | Drop the variable from the model. | [39] |

TABLE IX: Coping strategies

**Findings:** 1) We found that relatively few SPS studies discussed appropriate strategies for coping with specific data quality problems from the data provenance aspect, such as the reliability of metadata, the diversity of process data, data traceability, etc. 2) There is also a lack of details regarding effective communication with practitioners, minimizing data accessibility risks, and selecting the appropriate time-span for project history information. 3) In terms of data measurement, researchers have not provided many novel strategies. However, it is really critical to solve these issues in the construction and application of SPS models in real-life projects. Further studies are expected to provide comprehensive and detailed solutions to these issues.

## 7 Threats to Validity

We examine validity from four aspects based on the mapping provided by Zhou et al. [54]. **Construct Validity:** Finding all relevant papers is a common threat for all SLR studies. We may have missed a few related studies on SPS in the paper selection process. To overcome this, we have done extensive research on SPS, and the search string used has been validated in previous studies. To search for papers published after 2015, we also used Wohlin's forward snowballing strategy [55]. **Internal Validity:** Selection bias is also a standard threat to all SLRs. Our selection method was based on the QGS method [56] and snowballing in order to reduce bias. A forward snowballing strategy was employed using three seminal papers, which were cited by the most relevant studies with higher citations in SPS research, in order to extend the pool of relevant studies until 2021. Another threat was how to minimize inaccuracies when extracting data.
Study selection and data extraction were mainly carried out independently by two students, who conducted a cross-check before synthesizing the data. The supervisor and students met weekly to discuss all the inconsistencies and disagreements. One threat is that most studies did not provide a complete list of the metrics they used, much less present details of whether a metric was calibrated or not. We only extracted the metrics and their relationships reported in each study that were considered important and general metrics by the authors. **External Validity:** We ensured that the selected primary studies have high generalizability, which may lead to high generalizability of the study's conclusions. **Conclusion Validity:** During the data synthesis phase of the SLR, grouping the extracted metrics and data issues was another challenge. Based on Fenton et al.'s classification [22] of metrics into three categories, we strictly followed the definitions of the different categories to ensure the correctness of the metric classification. To construct a systematic set of findings, we applied the thematic synthesis method [42] and a coding technique [43] in an iterative process. This allowed us to develop a consistent set of codes based on the diverse descriptions of metrics. By applying thematic synthesis [42], we summarized the data issues encountered in the construction of SPS models by aligning them with the data preparation process of SPS. There is also a common threat for us, which is the lack of details in the primary studies. Most of our primary studies did not provide such details because they generally focused on building models and analyzing their results. Therefore, we were unable to extract and synthesize them comprehensively and completely. It is therefore expected that subsequent studies on SPS will examine more problems in more detail and provide comprehensive and detailed information regarding SPS metrics, including measurement methods, values, categories, data issues, coping strategies, data sources, etc. This would serve as a useful guide and practice for modelers.

## 8 Conclusion

SPS modelers bear the burden of the high cost of selecting appropriate metrics, which is regarded by the community as one of the crucial barriers to adopting SPS in practice. This paper reports an SLR on the metrics and the associated attributes used in SPS models. The implications of this work can be extended to software process modeling in general, considering the continuity and similarity between static models and dynamic models and the higher standard required for dynamic models. The study is driven by a meta-model of the ontology of metrics so that we are able to comprehensively study the metrics in SPS models from the perspective of modeling. As the result of an exhaustive search, we identified a total of 145 papers that report SPS models up to 2021. We identified 2130 metrics from these models and classified the metrics and their corresponding attributes into six entity categories from the measurement perspective. A coding technique was applied to build the classification framework. The framework is more suitable as a reference for modeling than existing software measurement frameworks, since it includes all metrics limited to SPS and has multiple levels of categories to show the similarities and differences of metrics.
Different from Fenton et al.'s framework [22], we distinguish _defect activity_ from _defect size_ and _defect property_, and classify _defect size_ and _defect property_ into the product category, since this is more reasonable. The choice of paradigm is constrained by both the modeling granularity and the available data. Therefore, we further discussed the data issues of measurement and the coping strategies. This study helps to address one of the two major challenging tasks of modeling, i.e., identifying the key metrics (elements of the model) and their relationships (structure of the model) from real processes. Although we discussed the data issues and coping strategies reported in existing studies, it is still a challenging task for modelers to collect data from industrial settings. SPS studies usually assume that organizations can collect the required information, but it is recognized that this is not feasible in some cases. In recent years, process mining has turned out to be an effective tool to distill process information from various software repositories. Many mining algorithms and tools aiming at different purposes and data formats are mapped in our previous study [57]. Investigating how to apply an appropriate mining technique to obtain the data on the required metrics can be an opportunity for future work to improve the practical adoption of SPS. Furthermore, the metrics and knowledge synthesized from SPS studies are not limited to process simulation but can be extended to software process modeling in general. Taking simulation metrics as standards and references can further drive software developers to improve the collection, governance, and application of process data in practice.

| Data source | Metric | Ref. |
| --- | --- | --- |
| **Experienced experts** | | |
| Experienced programmers and testers | Amount of incoming work; # programmers; # injected defects per day | [Berl 03] |
| Industrial project managers' intuitions and experience | noudral for the activity; developers' abilities; development duration | [Hana 02] |
| A manager in the distributed support department | Range of potential productivity improvements | [Pfah 04] |
| Survey with professionals of different areas | Effect | [Shuk 12] |
| Past organizational data and expert judgment | Estimated requirements volatility | [Mada 00] |
| Expert judgment | Review skill; global blocker issue risk; global blocker issue suspend time; review remark fix duration; issue assessment duration; planning duration; task switch overhead; review fix to task factor, etc. | [Baum 17] |
| **Literature** | | |
| Abts et al. [49] | Delivery time; productivity; cost | [Ruiz 04] |
| Lum et al. [50] | Cost | [Uzza 13a] |
| Jones et al. [51] | # defects; costs; duration covered in the QA model | [Drap 99] |
| Frost et al. [52] | Defect injection rate; verification effectiveness | [Garo 09] |
| Wagner et al. [53] | Typical (averaged) verification and validation rates; rework effort for defects of various document types | |
| **Organization's documents** | | |
| Project management documents | Schedule; effort; code size | [Pah 00a, Menz 02, Raff 02] |
| Individual inspection reports | Defect detection rate; inspection effectiveness | [Menz 02, Raff 02] |
| Holon history records | Source code/change history data; estimate of WIP | [Chut 00] |
| Company's ticket system | Defect injection rate; task duration; review duration; issue fix overhead; # developers | [Baum 17] |
| Time reporting system | Ratio of new development to rework; move-to-production statistics | [Pah 04] |
| Configuration management system | Attrition rate; training statistics | [Pah 04] |
| IT recruiting and training department | Attrition rate; training statistics | [Pah 04] |

TABLE X: Available data sources

## Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 62072227, No. 62202219), the National Key Research and Development Program of China (No. 2019YFE0105500) jointly with the Research Council of Norway (No. 309494), the Key Research and Development Program of Jiangsu Province (No. BE2021002-2), the Intergovernmental Bilateral Innovation Project of Jiangsu Province (No. BZ020017), as well as the Innovation Project and Overseas Open Project of State Key Laboratory for Novel Software Technology (Nanjing University) (ZZKT2022A25, KFKT2022A09).

## References

* [1] M. I. Kellner, R. J. Madachy, and D. M. Raffo, "Software process simulation modeling: Why? What? How?" _Journal of Systems and Software_, vol. 46, no. 2-3, pp. 91-105, 1999.
* [2] T. Abdel-Hamid and S. E. Madnick, _Software Project Dynamics: An Integrated Approach_. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 1991.
* [3] R. Ahmed, T. Hall, P. Wernick, S. Robinson, and M. Shah, "Software process simulation modelling: A survey of practice," _Journal of Simulation_, vol. 2, no. 2, pp. 91-102, 2008.
* [4] T. Birkholzer, C. Dickmann, J. Vaupel, and L. Dantas, "An interactive software management simulator based on the cmmi framework," _Software Process Improvement & Practice_, vol. 10, no. 3, pp. 327-340, 2010.
* [5] D. Crespo and M. Ruiz, "Decision making support in cmmi process areas using multiparadigm simulation modeling," in _Simulation Conference_, 2012, pp. 1-12.
* [6] J. Li, M. Li, D. Wu, and H. Song, "An integrated risk measurement and optimization model for trustworthy software process management," _Information Sciences_, vol. 191, no. 9, pp. 47-60, 2012.
* [7] D. M. Raffo, J. V. Vandeville, and R. H. Martin, "Software process simulation to achieve higher cmm levels," _Journal of Systems & Software_, vol. 46, no. 2-3, pp. 163-172, 1999.
* [8] D. Mishra and B. Mahanty, "A study of software development project cost, schedule and quality by outsourcing to low cost destination," _J. Enterprise Inf. Management_, vol. 29, no. 3, pp. 454-478, 2016.
* [9] H. Zhang, R. Jeffery, D. Houston, L. Huang, and L. Zhu, "Impact of process simulation on software practice: an initial report," in _2011 33rd International Conference on Software Engineering (ICSE)_, 2011.
* [10] "Software process simulation modeling - at a crossroads?" _Journal of Software: Evolution and Process_, vol. 26, no. 10, pp. 923-928, 2014.
* [11] O. Gomez, H. Oktaba, M. Piattini, and F.
Garcia, "A systematic review measurement in software engineering: State-of-the-art in measures," in _ICSOFT 2006, First International Conference on Software and Data Technologies, Setubal, Portugal, September 11-14, 2006_, 2006, pp. 224-231. * [12] C. G. Portobellini, R. D. C. D. Fariapereira, and J. Luizbecker, "Measurement in software engineering: From the roadmap to the crossroads," _International Journal of Software Engineering & Knowledge Engineering_, vol. 18, no. 01, pp. 37-64, 2008. * a preliminary mapping study," _Journal of Systems & Software_, vol. 83, no. 1, pp. 37-51, 2010. * [14] R. Abilio, P. Teles, H. Costa, and E. Figueiredo, "A systematic review of contemporary metrics for software maintainability," in _Sixth Brazilian Symposium on Software Components_, 2012. * a systematic literature review of industrial studies," _Inf. Softw. Technol._, vol. 62, no. C, pp. 143-163, Jun. 2015. * 197, 2017. * [17] A. Meidan, J. A. Garcia-Garcia, I. M. Ramos, and M. J. Escalona, "Measuring software process: A systematic mapping study," _ACM Computing Surveys_, vol. 51, no. 3, pp. 58:1-58:32, 2018. * [18] O. Lindland, G. Sindre, and A. Solvberg, "Understanding quality in conceptual modeling," _IEEE Software_, vol. 11, no. 2, pp. 42-49, 1994. * [19] B. A. Kitchenham, L. Pickard, S. G. Linkman, and P. Jones, "A framework for evaluating a software bidding model," _Information & Software Technology_, vol. 47, no. 11, pp. 747-760, 2005. * [20] B. Kuipers, "Qualitative reasoning: Modeling and simulation with incomplete knowledge," _Automatica_, vol. 25, no. 4, pp. 571-585, 1989. * [21] K. Akingbehin and B. Maxim, "A three-layer model for software engineering metrics," in _Proceedings of the Seventh ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing_, ser. SNPD-SAWN '06. Washington, DC, USA: IEEE Computer Society, 2006, pp. 17-20. * [22] N. E. Fenton, _Software Metrics: A Rigorous and Practical Approach_. International Thomson Computer Press, 2014. * [23] H. Zhang, B. A. Kitchenham, and D. Pfahl, "Reflections on 10 years of software process simulation modeling: A systematic review," in _Making Globally Distributed Software Development a Success Story, International Conference on Software Process, ICSP 2008, Leipzig, Germany, May 10-11, 2008, Proceedings_, 2008, pp. 345-356. * [24] H. Zhang, B. Kitchenham, and D. Pfahl, "Software process simulation modeling: Facts, trends and directions," in _15th Asia-Pacific Software Engineering Conference APSEC 2008, 3-5 December 2008, Beijing, China_, 2008, pp. 59-66. * [25] H. Zhang, B. A. Kitchenham, and D. Pfahl, "Software process simulation modeling: An extended systematic review," in _New Modeling Concepts for Today's Software Processes, International Conference on Software Process, ICSP 2010, Paderborn, Germany, July 8-9, 2010. Proceed ings_, 2010, pp. 309-320. * [26] H. Zhang, D. R. Jeffery, D. Houston, L. Huang, and L. Zhu, "Impact of process simulation on software practice: an initial report," in _Proceedings of the 33rd International Conference on Software Engineering, ICSE 2011, Waikiki, Honolulu, HI, USA, May 21-28, 2011_, 2011, pp. 1046-1056. * 26, 2015_, 2015, pp. 147-156. * 26, 2015_, 2015, pp. 157-166. * [29] H. Gong, H. Zhang, D. Yu, and B. Liu, "A systematic map on verifying and validating software process simulation models," in _Proceedings of the 2017 International Conference on Software and System Process, Paris, France, ICSSP 2017, July 5-7, 2017_, 2017, pp. 50-59. 
* [30] B. B. Franca and G. Travassos, "Are we prepared for simulation based studies in software engineering yet?" vol. 16, p. 8, 2013. * 85, 2014. * [32] D. Pfahl, _Process Simulation: A Tool for Software Project Managers?_ Springer Berlin Heidelberg, 2014. * [33] F. Garcia, M. F. Bertoa, C. Calero, A. Vallecillo, F. Ruiz, M. Piattini, and M. Genero, "Towards a consistent terminology for software measurement," _Information & Software Technology_, vol. 48, no. 8, pp. 631-644, 2006. * [34] L. Olsina and M. de los Angeles Martin, "Ontology for software metrics and indicators," _Journal of Web Engineering_, vol. 2, no. 4, pp. 262-281, 2003. * [35] S. Robinson, "Conceptual modeling for simulation," in _Proceedings of the 2013 Winter Simulation Conference on Simulation: Making Decisions in a Complex World_, 2013, pp. 377-388. * [36] ----, "Conceptual modelling for simulation part i: definition and requirements," _Journal of the Operational Research Society_, vol. 59, no. 3, pp. 278-290, 2008. * [37] I. Standard, "Adoption of iso/iec 15939:2007-- systems and software engineering-- measurement process," 2009. * [38] B. Kitchenham and S. Charters, "Guidelines for performing systematic literature reviews in software engineering version 2.3," Software Engineering Group, School of Computer Science and Mathematics, Keele University and Department of Computer Science University of Durham, Tech. Rep., 2007. * [39] D. Raffo and M. I. Kellner, "Empirical analysis in software process simulation modeling," _Journal of Systems and Software_, vol. 53, no. 1, pp. 31-41, 2000. * 26th International Conference on Software Engineering_, vol. 2004, 2004, pp. 149-158. * [41] H. Zhang, B. Kitchenham, and R. Jeffery, "A framework for adopting software process simulation in cmmi organizations," in _International Conference on Software Process_, 2007, pp. 320-331. * [42] D. S. Cruzes and T. Dyba, "Recommended steps for thematic synthesis in software engineering," in _Proceedings of the 5th International Symposium on Empirical Software Engineering and Measurement, ESEM 2011, Banff, AB, Canada, September 22-23, 2011_, 2011, pp. 275-284. * [43] J. M. Corbin and A. L. Strauss, "Basics of qualitative research: techniques and procedures for developing grounded theory," _Thousand Oaks Ca Sage Tashakkori A & Teddlie C_, vol. 36, no. 100, p. 129, 1998. * [44] C. Kaner and W. P. Bond, "Software engineering metrics: What do they measure and how do we know?" _Metrics IEEE Cs_, 2004. * [45] F. Padberg, "A discrete simulation model for assessing software project scheduling policies," _Software Process: Improvement and Practice_, vol. 7, pp. 127-139, 2002. * [46] R. Cherif and P. Davidsson, "Software development process simulation: Multi agent-based simulation versus system dynamics," in _Multi-Agent-Based Simulation X, International Workshop, MABS 2009, Budapest, Hungary, May 11-12, 2009 Revised Selected Papers_, 2009, pp. 73-85. * [47] N. Ali and K. Petersen, "A consolidated process for software process simulation: State of the art and industry experience," 09 2012, pp. 327-336. * [48] B. B. Franca and N. Ali, _The Role of Simulation-Based Studies in Software Engineering Research_, 08 2020, pp. 263-287. * [49] C. M. Abts, "Cots software integration cost modeling study," June 1997. * [50] K. Lum, J. Powell, and J. Hihn, "Validation of spacecraft software cost estimation models for flight and ground systems," 2002. * [51] C. 
Jones, "Applied software measurement: assuring productivity and quality," _McGraw-Hill software engineering series_, 1996. * [52] A. A. Frost and M. J. Campo, "Advancing defect containment to quantitative defect management," 2007. * [53] S. Wagner, "A literature survey of the quality economics of defect-detection techniques," in _Proceedings of the 2006 ACM/IEEE International Symposium on Empirical Software Engineering_, ser. ISESE '06, 2006. * [54] X. Zhou, Y. Jin, H. Zhang, S. Li, and X. Huang, "A map of threats to validity of systematic literature reviews in software engineering," in _23rd Asia-Pacific Software Engineering Conference, APSEC 2016, Hamilton, New Zealand, December 6-9, 2016_, 2016, pp. 153-160. * 03, 2016_, 2016, pp. 15:1-15:6. * [56] H. Zhang, M. A. Babar, and P. Tell, "Identifying relevant studies in software engineering," _Information & Software Technology_, vol. 53, no. 6, pp. 625-637, 2011. * [57] L. Dong, B. Liu, Z. Li, O. Wu, M. A. Babar, and B. Xue, "A mapping study on mining software process," in _24th Asia-Pacific Software Engineering Conference, APSEC 2017_, 2017.
2307.02278
Smooth Particle Mesh Ewald-integrated stochastic Lanczos Many-body Dispersion algorithm
We derive and implement an alternative formulation of the Stochastic Lanczos algorithm to be employed in connection with the Many-Body Dispersion model (MBD). Indeed, this formulation, which is only possible due to the Stochastic Lanczos' reliance on matrix-vector products, introduces generalized dipoles and fields. These key quantities allow for a state-of-the-art treatment of periodic boundary conditions via the O(Nlog(N)) Smooth Particle Mesh Ewald (SPME) approach which uses efficient fast Fourier transforms. This SPME-Lanczos algorithm drastically outperforms the standard replica method, which is affected by a slow and conditional convergence rate that limits an efficient and reliable inclusion of long-range periodic boundary condition interactions in many-body dispersion modelling. The proposed algorithm inherits the embarrassingly parallel nature of the original Stochastic Lanczos scheme, thus opening up a fully converged and efficient periodic boundary condition treatment of MBD approaches.
Pier P. Poier, Louis Lagardère, Jean-Philip Piquemal
2023-07-05T13:24:51Z
http://arxiv.org/abs/2307.02278v2
# Smooth Particle Mesh Ewald-integrated stochastic Lanczos Many-body Dispersion algorithm ###### Abstract We derive and implement an alternative formulation of the Stochastic Lanczos algorithm to be employed in connection with the Many-Body Dispersion model (MBD). Indeed, this formulation, which is only possible due to the Stochastic Lanczos' reliance on matrix-vector products, introduces generalized dipoles and fields. These key quantities allow for a state-of-the-art treatment of periodic boundary conditions via the \(\mathcal{O}(N\log(N))\) Smooth Particle Mesh Ewald (SPME) approach which uses efficient fast Fourier transforms. This SPME-Lanczos algorithm drastically outperforms the standard replica method which is affected by a slow and conditionally convergence rate that limits an efficient and reliable inclusion of long-range periodic boundary conditions interactions in many-body dispersion modelling. The proposed algorithm inherits the embarrassingly parallelism of the original Stochastic Lanczos scheme, thus opening up for a fully converged and efficient periodic boundary conditions treatment of MBD approaches. + Footnote †: preprint: ## I Introduction Electron correlation is one of the most fascinating and difficult phenomenon to model. Dispersion in particular originates from the long-range electronic correlation among distant electron densities and represents the purely attractive contribution in van der Waals interactions. These are ubiquitous in nature: they can be for example observed in milk as they drive the formation of lipid droplets that, through light scattering, give to milk its typical white color. Geckos and spiders, on the other hand, also take advantage of dispersion for supporting their entire weight on smooth vertical surfaces. From the microscopic point of view, dispersion interactions are crucial in many processes driven by non-covalent phenomena such as protein folding, protein-protein interactions, supra molecular and inter-molecular interactions in general. An exact modelization of dispersion requires the analytical solution of the electronic Schrodinger equation, which is unfortunately impossible for practical cases. In the past decades, very accurate numerical wave function-based quantum chemical methods have been developed to tackle electron correlation, thus implicitly capable of describing dispersion and intermolecular interactions.[1; 2] These methodologies, however, can only be applied to molecules composed of very few atoms, thus preventing the study of chemically and biologically relevant systems. The advent of Density Functional Theory (DFT) represents a milestone in quantum chemistry as it provides a cheap way of including electronic correlation, as its computational cost is similar to that of the Hartree-Fock method. Nevertheless, the intrinsic local nature of common exchange-correlation functionals, makes DFT inadequate for describing long-range correlation effects, thus dispersion. To retain the DFT scaling benefits, extensive efforts have been spent in the past years in developing dispersion corrections able to improve the DFT capability of describing intermolecular interactions, crucial in material design and molecular modelling in general. Many of these correction techniques rely on simple empirical pairwise treatments of dispersion, similar to those embraced in force fields. 
Their simplicity, together with the negligible computational cost and the good accuracy improvement, made possible for these methods to be included in most of the quantum chemistry softwares.[3] Despite their large diffusion, these pairwise corrections completely neglect the many-body nature of dispersion interactions inherited from the long-range electronic correlation on the basis of these phenomena. In recent years, the interest towards Many-Body Dispersion correction models has risen[4]. In particular the MBD@rSCS model by Tkatchenko, Di Stasio and Ambrosetti, together with its variations, has become especially popular by virtue of its high accuracy obtained despite of the absence of empirical parameters except for a single range-separation parameter for the coupling between the long-range MBD energy and the chosen DFT functional.[5; 6; 7] The MBD@rsSCS model can be summarized as follows. First, a set of atomic dipole polarizabilities are obtained from the partitioning of the molecular electron density or, alternatively, retrieved from a deep-neuronal network as recently proposed.[8; 9] Secondly, the polarizabilies are made frequency-dependent via Pade approximation and subsequently a Dyson-like self-consistent screening linear equation is solved for a selected set of frequencies. Lastly, the set of screened frequency-dependent polarizabilities are used as key quantities in building the MBD interaction matrix which spectrum is used to express the final many-body dispersion energy. Compared to the \(\mathcal{O}(N^{4})\) scaling of Kohn-Sham equations' resolution, the MBD@rsSCS model involves a small additional computational cost. However, for increasingly large systems, the \(\mathcal{O}(N^{3})\) scaling of the diagonalization procedure becomes no longer negligible and, it can even become a burden if coupled to \(\mathcal{O}(N)\) DFT methods. Recently, we have proposed and implemented an alternative resolution of the MBD key equations that overcomes this scaling issue that is based on the state-of-art Stochastic Lanczos (SL) trace estimation.[10] Due to the the sparsity of the matrices involved, it exhibits linear-scaling with the system size. The proposed stochastic Lanczos MBD approach (SL-MBD) further benefits from an embarrassingly parallel implementation arising from its stochastic nature and this allows for reaching system sizes of hundred thousands atoms within a few minutes' time.[11] Compared to a simple pairwise description, this many-body treatment of dispersion interactions in systems such as solvated proteins has revealed a higher degree of delocalization as well as a collective solute-solvent character leading to remarkable long-range interactions.[12] The potentially longer-range of MBD interactions stresses the importance of the inclusion of a coherent full periodic boundary condition (PBC) treatment, especially in highly ordered and periodic systems. In this direction, recent efforts have been spent in past years. Bucko and co-workers have provided a method expanding over the Brillouin cell that introduced consistent improvements compared to the standard replica method used to include long-range periodic boundary conditions effects.[13] By virtue of the above mentioned long-range nature of MBD interactions, it is of broad interest to generalize the SL-MBD approach to a full PBC treatment. 
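To make the parameter pipeline summarized above concrete, the short sketch below evaluates the single-Padé (Lorentzian) frequency dependence \(\alpha_i(i\omega)=\alpha_i(0)/(1+(\omega/\omega_i)^2)\) commonly used in MBD-type models and the resulting Casimir-Polder \(C_6\) coefficients on a transformed Gauss-Legendre imaginary-frequency grid; the atomic parameters and the grid are illustrative assumptions and are not taken from this work.

```python
import numpy as np

def pade_alpha(alpha0, omega_char, freqs):
    """Single-Padé imaginary-frequency polarizability alpha(i*w) = alpha0 / (1 + (w/omega)^2)."""
    return alpha0 / (1.0 + (freqs[:, None] / omega_char[None, :]) ** 2)

def casimir_polder_c6(alpha_iw, weights):
    """Pairwise C6_ij = (3/pi) * integral of alpha_i(iw) alpha_j(iw) dw via quadrature."""
    return (3.0 / np.pi) * np.einsum("w,wi,wj->ij", weights, alpha_iw, alpha_iw)

# Illustrative atom-in-molecule parameters (atomic units), assumed for this example only.
alpha0 = np.array([12.0, 10.5, 4.5])       # static polarizabilities
omega_char = np.array([0.45, 0.52, 0.71])  # characteristic excitation frequencies

# Gauss-Legendre nodes mapped from (-1, 1) to (0, inf) via w = w0*(1+x)/(1-x).
x, wts = np.polynomial.legendre.leggauss(20)
w0 = 0.3
freqs = w0 * (1.0 + x) / (1.0 - x)
weights = wts * 2.0 * w0 / (1.0 - x) ** 2  # Jacobian of the transformation

alpha_iw = pade_alpha(alpha0, omega_char, freqs)
print(casimir_polder_c6(alpha_iw, weights))
```

In the MBD@rsSCS workflow these frequency-dependent polarizabilities would additionally be screened through the Dyson-like self-consistent equation mentioned above before entering the interaction matrix.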
However, the quadratic-scaling approaches typically employed in connection to MBD models are clearly not suitable to be integrated in the the SL-MBD methodology for both memory requirements and computational efficiency due to the large systems targeted. A more sophisticated approach has therefore to be developed. In the context of long-range electrostatics modelling, this scaling limitation was addressed via Ewald summation techniques, as they formally scale as \(\mathcal{O}(N^{2})\) but a proper optimization lowers the factor to \(\mathcal{O}(N^{3/2})\). Ewald summation techniques replace the original conditionally convergent energy summation with a direct and reciprocal space absolutely convergent ones consisting of a real and reciprocal summations as well as a self interaction term. The Particle Mesh Ewald (PME) method proposed by Darden, York and Pedersen, drastically improved Ewald summation technique's associated performance.[14] Its idea relies on the efficient calculation of the reciprocal space energy contribution thanks to fast Fourier transforms scaling as \(\mathcal{O}(N\text{log}(N))\). The PME method with its different variants (especially the Smooth Particle Mesh Ewald (SPME)[15], has become the standard algorithm implemented in nearly all the most efficient Molecular Dynamics packages thanks to its scaling features although alternative but related methods also exist.[16] In this work, we derive and present a modification of the SL-MBD method based on a PME treatment of periodic boundary conditions. The resulting Smooth Particle Mesh Ewald stochastic Lanczocz (SPME-SL) MBD approach is suitable for large systems as it exhibits the typical \(\mathcal{O}(N\text{log}(N))\) scaling inherited from the PME method. In the next section, we review the MBD model as well as the stochastic Lanczos method in its standard form. A theory section is then dedicated to the derivation of the modified SPME-based Lanczos quadrature scheme followed by a section dedicated to numerical results where the computational performances of the method are discussed and compared to the ones of the standard replica method. ## II Review of the MBD and SL-MBD The MBD model is based on the idea that a molecule is described as a set of interacting quantum harmonic oscillators, which Hamiltonian is shown in eq.(1), \(\mathbf{d}_{i}=\sqrt{m_{i}}\xi_{i}\) being the mass-weighted dipole moment displaced by the vector \(\xi_{i}\) from its equilibrium position. \(\alpha_{i}(0)\) and \(\omega_{i}\) represent the model's key parameters and correspond to the static dipole polarizability and characteristic excitation frequency respectively. 
\[\begin{split}\hat{H}_{\text{MBD}}&=\frac{1}{2}\sum_{i=1}^{N}(-\hat{\nabla}_{\hat{\mathbf{d}}_{i}}^{2}+\mathbf{d}_{i}^{\dagger}\mathbf{V}_{ii}\mathbf{d}_{i})+\sum_{i>j}\mathbf{d}_{i}^{\dagger}\mathbf{V}_{ij}\mathbf{d}_{j}\\ \mathbf{V}_{ij}&=\mathbf{I}_{3}\delta_{ij}\omega_{i}^{2}+(1-\delta_{ij})\omega_{i}\omega_{j}\sqrt{\alpha_{i}(0)\alpha_{j}(0)}\mathbf{T}^{\prime}_{ij}(\beta)\end{split} \tag{1}\] These parameters are obtained from _ab initio_ data: the atom-in-molecule (AIM) polarizability is typically retrieved via partitioning of the electron density, while \(\omega_{i}\) is defined in terms of accurate free-atom quantities.[17; 18; 19] We note that these AIM polarizability parameters can be screened by solving a Dyson-like equation[5], which can be done extremely efficiently[11]; however, we will not discuss this in the present work as the presented algorithm is general and depends neither on the choice of AIM polarizabilities nor on their screening. We further mention that recently Johnson and coworkers have analyzed the sensitivity of the screening procedure for selected systems.[20] The \(\mathbf{T}^{\prime}_{ij}(\beta)\) term is built from the pure point dipole-dipole interaction tensor for the \(ij\) atom pair, screened via a damping function \(s(R_{ij};\beta)\) depending on the interatomic distance \(R_{ij}\) and the single range-separation parameter \(\beta\), typically optimized for the corresponding DFT functional to be dispersion-corrected: \(\mathbf{T}^{\prime}_{ij}(\beta)=s(R_{ij};\beta)\mathbf{T}_{ij}\). Recently the MBD model was generalized to higher than dipole interactions[9; 21]; however, here we will only consider the dipole-dipole interaction case. For the explicit expression of \(\mathbf{T}^{\prime}\) we refer to the work in reference [11]. The eigenvalues (\(\lambda_{i}\)) of the MBD interaction matrix \(\mathbf{V}\), shown for the \(ij\) block in eq.(1), are required to obtain the MBD energy \(\mathcal{E}_{\text{MBD}}\) via the plasmonic formula shown in eq.(2), which represents the correlation energy of the interacting fluctuating dipoles. \[\mathcal{E}_{\text{MBD}}=\frac{1}{2}\sum_{i=1}^{3N}\sqrt{\lambda_{i}}-\frac{3}{2}\sum_{i=1}^{N}\omega_{i} \tag{2}\] The solution of eq.(2) is bound to the \(\mathcal{O}(N^{3})\) scaling of the diagonalization step that, as mentioned earlier, strongly limits the applicability of the method to large systems. The SL-MBD method bypasses the diagonalization of \(\mathbf{V}\) by exploiting the alternative but equivalent expression of the plasmonic formula, eq.(3), where the sum over the whole spectrum of \(\mathbf{V}\) is rewritten in terms of its trace, which is invariant under any change of basis, namely \(\sum_{i=1}^{3N}\sqrt{\lambda_{i}}=\text{Tr}(\sqrt{\mathbf{\Lambda}})=\text{Tr}(\sqrt{\mathbf{V}})\) where \(\mathbf{\Lambda}\) is the diagonal form of \(\mathbf{V}\) obtained via the unitary transformation \(\mathbf{\Lambda}=\mathbf{W}^{\dagger}\mathbf{V}\mathbf{W}\). \[\mathcal{E}_{\text{MBD}}=\frac{1}{2}\text{Tr}(\sqrt{\mathbf{V}})-\frac{3}{2}\sum_{i=1}^{N}\omega_{i} \tag{3}\] The evaluation of the trace of a symmetric matrix function such as \(\text{Tr}[\sqrt{\mathbf{V}}]\) is, in the proposed SL-MBD, based on two main assumptions.
First, the stochastic Hutchinson trace estimator (HTE) [22] is invoked, Eq.(4), \(\mathbf{v}_{l}\) being one of the \(R\) normalized random vectors of dimension \(D\) (in our case \(D=3N\)), whose entries follow a Rademacher distribution, _i.e._ they can assume values of either \(1\) or \(-1\) with the same probability. \[\text{Tr}[\sqrt{\mathbf{V}}]\approx\frac{D}{R}\sum_{l=1}^{R}\mathbf{v}_{l}^{\dagger}\sqrt{\mathbf{V}}\mathbf{v}_{l} \tag{4}\] \[\begin{split}\mathbf{v}_{l}&=\frac{\mathbf{u}_{l}}{\|\mathbf{u}_{l}\|}\\ u_{l,i}&=\begin{cases}1,&\mathbf{Pr}=1/2\\ -1,&\mathbf{Pr}=1/2\end{cases}\end{split} \tag{5}\] Second, each of the \(R\) scalar expectation values in Eq.(4) can be expressed in terms of \(\text{Tr}[\sqrt{\mathbf{\Lambda}}]\) and the unitary transformation \(\mathbf{W}\) as reported in Eq.(6), where we introduced \(\boldsymbol{\mu}_{l}=\mathbf{W}^{\dagger}\mathbf{v}_{l}\). \[\mathbf{v}_{l}^{\dagger}\sqrt{\mathbf{V}}\mathbf{v}_{l}=\mathbf{v}_{l}^{\dagger}\mathbf{W}\sqrt{\mathbf{\Lambda}}\mathbf{W}^{\dagger}\mathbf{v}_{l}=\sum_{i}^{D}\mu_{l,i}^{2}\sqrt{\lambda_{i}} \tag{6}\] The last equality in Eq.(6) corresponds to the Riemann-Stieltjes integral [23] defined in Eq.(7), which is approximated via the general \((M+1)\)-point quadrature shown in eq.(8), \(\{\tau_{k}\}\) and \(\{\theta_{k}\}\) representing the unknown weights and nodes respectively. \[\begin{split}\sum_{i}^{D}\mu_{l,i}^{2}\sqrt{\lambda_{i}}&=\int_{a}^{b}\sqrt{t}\,d\mu(t)\\ \mu(t)&=\begin{cases}0&,\ \ \ t<a=\lambda_{1}\\ \sum_{j=1}^{i-1}\mu_{l,j}^{2}&,\ \ \ \lambda_{i-1}\leq t<\lambda_{i}\\ \sum_{j=1}^{D}\mu_{l,j}^{2}&,\ \ \ b=\lambda_{D}<t\end{cases}\end{split} \tag{7}\] \[\mathbf{v}_{l}^{\dagger}\sqrt{\mathbf{V}}\mathbf{v}_{l}=\int_{a}^{b}\sqrt{t}\,d\mu(t)\approx\sum_{k=1}^{M+1}\tau_{k}^{(l)}\sqrt{\theta_{k}^{(l)}} \tag{8}\] By inserting Eq.(8) in Eq.(4), one can identify the complete expression for the stochastic trace estimation, Eq.(9). \[\text{Tr}[\sqrt{\mathbf{V}}]\approx\frac{D}{R}\sum_{l=1}^{R}\sum_{k=1}^{M+1}\tau_{k}^{(l)}\sqrt{\theta_{k}^{(l)}} \tag{9}\] In the stochastic Lanczos algorithm, the nodes and weights for the quadrature relative to each of the \(l\)-th terms in the first summation are identified as the eigenvalues \(\{\widetilde{\lambda}_{k}^{(l)}\}\) and the first entries (squared) of the eigenvectors \(\{[U_{1,k}^{(l)}]^{2}\}\) of the tridiagonal \(\mathbf{\Delta}^{(l)}\) matrix, which is the representation of the original MBD potential matrix \(\mathbf{V}\) in the \((M+1)\)-dimensional Krylov subspace \(\mathcal{K}_{M+1}=\{\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{M+1}\}\) where the basis vectors are gathered as the \(\mathbf{Y}^{(l)}\) matrix's columns. \[\mathbf{\Delta}^{(l)}=\mathbf{Y}^{\dagger(l)}\mathbf{V}\mathbf{Y}^{(l)} \tag{10}\] \[\widetilde{\mathbf{\Lambda}}^{(l)}=\mathbf{U}^{(l)\dagger}\mathbf{\Delta}^{(l)}\mathbf{U}^{(l)} \tag{11}\] The solution of eq.(10) represents the crucial part of the algorithm in terms of efficiency, while eq.(11), by virtue of the small matrices involved (the Krylov subspace dimension rarely exceeds \(15\)), is inexpensive and is solved by means of standard libraries.
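As a minimal numerical illustration of eqs.(2)-(5) and (9), the sketch below assembles a small, weakly coupled stand-in for \(\mathbf{V}\) (a random symmetric matrix replaces the damped dipole tensor purely to keep the example runnable), verifies that the eigenvalue and trace forms of the plasmonic formula coincide, and then estimates \(\mathrm{Tr}[\sqrt{\mathbf{V}}]\) with normalized Rademacher probes; it is not the production SL-MBD implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
N, D = 16, 48                                  # toy number of atoms and matrix dimension (D = 3N)
omega = rng.uniform(0.4, 0.8, N)               # characteristic frequencies (arbitrary units)
alpha0 = rng.uniform(5.0, 15.0, N)             # static polarizabilities

# Weak random symmetric coupling standing in for the damped dipole-dipole tensor T'.
T = rng.standard_normal((D, D)) * 1e-4
T = 0.5 * (T + T.T)
g = np.repeat(omega * np.sqrt(alpha0), 3)      # per-component omega_i * sqrt(alpha_i(0))
V = np.diag(np.repeat(omega**2, 3)) + np.outer(g, g) * T   # block structure of eq.(1)

# Eq.(2) versus eq.(3): both give the same MBD energy.
lam = np.linalg.eigvalsh(V)
E_eig = 0.5 * np.sum(np.sqrt(lam)) - 1.5 * np.sum(omega)
E_tr = 0.5 * np.trace(sqrtm(V)).real - 1.5 * np.sum(omega)
print(E_eig, E_tr)

# Hutchinson estimate of Tr[sqrt(V)], eqs.(4)-(5), using the exact matrix square root as the action.
sqrtV = sqrtm(V).real
for R in (10, 100, 1000):
    acc = 0.0
    for _ in range(R):
        u = rng.choice([-1.0, 1.0], size=D)    # Rademacher entries
        v = u / np.linalg.norm(u)              # normalization of eq.(5)
        acc += v @ sqrtV @ v
    print(R, D / R * acc, np.sum(np.sqrt(lam)))  # stochastic estimate vs. exact trace
```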
Eq.(10) is practically solved as follows. For each of the \(R\) terms employed in the HTE, \(\mathbf{v}\) (from now on the superscript \((l)\) is dropped for simplicity) is taken as the first basis vector of the Krylov subspace (\(\mathbf{y}_{1}\)), while the remaining basis vectors \(\{\mathbf{y}_{k}\}\) (columns of \(\mathbf{Y}\)) and the diagonal (\(\Delta_{kk}\)) and off-diagonal (\(\Delta_{(k-1)k}=\Delta_{k(k-1)}\)) elements of \(\mathbf{\Delta}\) are retrieved recursively as shown in eq.(12), where the asterisk denotes the unnormalized \(k\)-th basis vector. \[\begin{split}\mathbf{y}_{1}&=\mathbf{v}\\ b_{k}\mathbf{y}_{k}&=\mathbf{y}^{*}_{k}=\mathbf{l}_{k-1}-a_{k-1}\mathbf{y}_{k-1}-b_{k-1}\mathbf{y}_{k-2}\\ \mathbf{l}_{k}&=\mathbf{V}\mathbf{y}_{k}\\ b_{k}&=\sqrt{\mathbf{y}^{*\dagger}_{k}\mathbf{y}^{*}_{k}}=\Delta_{(k-1)k}=\Delta_{k(k-1)}\\ a_{k}&=\mathbf{y}^{\dagger}_{k}\mathbf{V}\mathbf{y}_{k}=\mathbf{y}^{\dagger}_{k}\mathbf{l}_{k}=\Delta_{kk}\end{split} \tag{12}\] \[\mathbf{\Delta}^{(l)}=\begin{pmatrix}\Delta_{11}^{(l)}&\Delta_{12}^{(l)}&0&0&0\\ \Delta_{21}^{(l)}&\ddots&\ddots&0&0\\ 0&\ddots&\Delta_{kk}^{(l)}&\ddots&0\\ 0&0&\ddots&\ddots&\Delta_{(M)(M+1)}^{(l)}\\ 0&0&0&\Delta_{(M+1)(M)}^{(l)}&\Delta_{(M+1)(M+1)}^{(l)}\end{pmatrix}\] In general, the \(k\)-th iteration retrieves the \(\Delta_{kk}\) diagonal element as well as the contiguous upper/lower \(\Delta_{(k-1)k}\) and \(\Delta_{k(k-1)}\) ones. In the next section, expressions for \(\mathbf{y}\), \(a_{k}\) and \(b_{k}\) in the case of full PBC enforced via the PME method will be derived. ## III Theory The easiest strategy for including PBC in the MBD model consists in looping over a selected number of cell vectors \(\mathbf{n}\), each of which denotes a periodic image of the central simulation cell \(U\) defined by its edges \((\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3})\) and with volume \(V=\mathbf{a}_{1}\cdot(\mathbf{a}_{2}\times\mathbf{a}_{3})\). This would result in the modified dipole-dipole interaction matrix \(\mathbf{T}^{\mathrm{pbc}}\) shown in eq.(13), where \(\mathbf{T}^{\prime}_{ij}(j\in\mathbf{0})\) represents the \(ij\) interaction block belonging to the central simulation cell while \(\mathbf{T}^{\prime}_{ij}(j\in\mathbf{n})\) is the interaction between the particle \(i\) and the particle \(j\), this time belonging to the cell's periodic replica identified by \(\mathbf{n}\). In particular, the list of cells (and therefore their associated \(\mathbf{n}\) vectors) is chosen according to a cutoff radius as pictorially represented in Fig.1. \[\begin{split}\mathbf{T}^{\mathrm{pbc}}_{ij}&=\mathbf{T}^{\prime}_{ij}(j\in\mathbf{0})+\sum_{\mathbf{n}\neq\mathbf{0}}\mathbf{T}^{\prime}_{ij}(j\in\mathbf{n})\\ \mathbf{n}&=n_{1}\mathbf{a}_{1}+n_{2}\mathbf{a}_{2}+n_{3}\mathbf{a}_{3}\quad n_{1},n_{2},n_{3}\in\mathbb{Z}\end{split} \tag{13}\] The substitution of \(\mathbf{T}^{\prime}_{ij}\) with \(\mathbf{T}^{\mathrm{pbc}}_{ij}\) inside \(\mathbf{V}\) (often referred to as the replica method) and the subsequent use of its eigenvalues in eq.(2) was discussed in reference [24]. However, the use of truncated methods based on eq.(13) involves the problems listed and discussed below.
First, the summation in eq.(13) represents a slowly and conditionally convergent series that characterizes not only dipole-dipole interactions, but also charge-charge, charge-dipole and charge-quadrupole Coulomb interactions kernels.[25] Consequently, the slow convergence of eq.(13) strongly limits the applicability of the SL-MBD algorithm where the efficient "on-the-fly" computation of each \(\mathbf{V}_{ij}\) block is crucial for the evaluation of the \(\mathbf{V}\mathbf{y}_{k}\) products discussed in connection to eq.(12). The Ewald summation (ES) method, as well as its more efficient PME variants, was design to improve over eq.(13) since the conditionally convergent features of long-range electrostatic interactions of periodic systems are replaced by an absolutely convergent treatment. Let's consider a set of \(N\) interacting dipoles belonging to the central simulation cell \(U\) and gathered into the \(3N\)-dimensional array \(\mathbf{d}\). The correspondent electric field array \(\mathbf{E}=\mathbf{T}^{\mathrm{pbc}}\mathbf{d}\) arising from the dipoles in both the central simulation cell and all its periodic images is, in the ES method, expressed as the sum of three component, eq.(14). \[\mathbf{E}=\mathbf{T}^{\mathrm{pbc}}\mathbf{d}\longrightarrow\mathbf{E}^{ \star}=\mathbf{E}^{\mathrm{dir}}+\mathbf{E}^{\mathrm{rec}}+\mathbf{E}^{ \mathrm{self}} \tag{14}\] \(\mathbf{E}^{\mathrm{dir}}\) represents the direct space contribution to the Ewald electric field, the \(\mathbf{E}^{\mathrm{rec}}\) is the long-range term computed in Fourier (reciprocal) space while \(\mathbf{E}^{\mathrm{self}}\) represents the so called self-interaction term. The explicit expressions for each of these terms will be given later in the discussion, however, it is important to stress that each of these field components consist of absolutely convergent contributions as the resulting \(\mathbf{E}^{\star}\) field. Our strategy is thus to identify and isolate from the SL-MBD equations, eq.(12), an electric field-like term that can be then evaluated according to the three absolute convergent contributions in eq.(14), thus allowing us to include PBC in a robust and efficient manner. To do so, we will now start by partitioning \(\mathbf{V}\) into its diagonal and out-of-diagonal contributions given below, where \(\mathbf{I}_{3}\) is a (3,3) identity matrix. \[\begin{split}\mathbf{V}_{ij}&=\omega_{i}\omega_{j} \sqrt{\alpha_{i}(0)\alpha_{j}(0)}\mathbf{T}^{\prime}{}_{ij}\\ \mathbf{V}_{ii}&=\mathbf{I}_{3}\omega_{i}^{2}\end{split} \tag{15}\] Due to the fact that the diagonal blocks \(\mathbf{V}_{ii}\) are themselves diagonal, we introduce the identity in eq.(16), where \(\mathbf{\Omega}\) is the diagonal matrix defined below and \(\widetilde{\mathbf{V}}\) is the hollow matrix composed of the off-diagonal entries of \(\mathbf{V}\). These quantities will turn useful later in the discussion. \[\begin{split}\mathbf{V}=&\mathbf{\Omega}+\widetilde{ \mathbf{V}}\\ \mathbf{\Omega}&=\bigoplus_{i}^{N}\mathbf{V}_{ii} \end{split} \tag{16}\] We further introduce the \(\mathbf{g}\) vector (of dimension \(3N\)) defined as the concatenation of \(N\) three-dimensional vectors-of-ones (\(\mathbf{I}_{3}\)) as shown in Eq.(17). \[\mathbf{g}=\bigoplus_{i}^{N}\omega_{i}\sqrt{\alpha_{i}(0)}\mathbf{I}_{3} \tag{17}\] At this point, we use the newly introduced quantities defined in eq.(16) to rewrite the diagonal \(a_{k}\) term as shown in eq.(18). 
\[a_{k}=\mathbf{y}_{k}^{\dagger}\mathbf{\Omega}\mathbf{y}_{k}+\mathbf{y}_{k}^{\dagger}\widetilde{\mathbf{V}}\mathbf{y}_{k} \tag{18}\] One can now easily prove that the second term on the right hand side of eq.(18) can be rewritten in terms of \(\mathbf{g}\), eq.(19), where \(\odot\) denotes the Hadamard product. \[\mathbf{y}_{k}^{\dagger}\widetilde{\mathbf{V}}\mathbf{y}_{k}=(\mathbf{y}_{k}\odot\mathbf{g})^{\dagger}\mathbf{T}^{\prime}(\mathbf{g}\odot\mathbf{y}_{k}) \tag{19}\] Figure 1: Pictorial representation of the replica method for a 2-D square box of side \(L\) where the chosen cutoff radius is \(R_{\mathrm{cut}}\). Given the ratio \(x=R_{\mathrm{cut}}/L\), a supercell (yellow ochre, delimited by the red boundary) with vertices identified from all the four possible pairs of integers \((\pm n_{\mathrm{max}},\pm n_{\mathrm{max}})\) is built. \(n_{\mathrm{max}}\) represents the smallest integer value that is greater than or equal to \(x\). A given particle belonging to the central cell (\(\mathbf{n}=\mathbf{0}\)) will therefore interact with other particles in the supercell placed at a distance smaller than \(R_{\mathrm{cut}}\), i.e. within the blue circle. By inserting Eq.(19) into (18), we obtain an expression for \(a_{k}\) which will soon prove crucial for the discussion. \[a_{k}=\mathbf{y}_{k}^{\dagger}\mathbf{\Omega}\mathbf{y}_{k}+(\mathbf{y}_{k}\odot\mathbf{g})^{\dagger}\mathbf{T}^{\prime}(\mathbf{g}\odot\mathbf{y}_{k}) \tag{20}\] The \(3N\)-dimensional term \((\mathbf{g}\odot\mathbf{y}_{k})\) can be thought of as a generalized dipole array \(\mathbf{d}_{k}\) that, via the interaction tensor \(\mathbf{T}\), generates the generalized field \(\mathbf{E}_{k}=\mathbf{T}\mathbf{d}_{k}\), which can then be computed according to eq.(14). \[a_{k}=\mathbf{y}_{k}^{\dagger}\mathbf{\Omega}\mathbf{y}_{k}+\mathbf{d}_{k}^{\dagger}\mathbf{E}_{k}^{\star} \tag{21}\] We note in passing that the introduction of this generalized field can be used in different situations, as it allows us to couple our system with an external perturbation that, as discussed in the references, could arise from an implicit solvent contribution.[26; 27] At this point we note from eq.(12) (last equality) that \(a_{k}\) is related to \(\mathbf{l}_{k}\) via a differentiation with respect to the basis vector \(\mathbf{y}_{k}\). We can therefore differentiate eq.(20) to finally obtain eq.(23), where the rule for the differentiation of a commuting Hadamard product, eq.(22), has been applied. We note that a similar approach based on differentiation was adopted by Stamm and co-workers in deriving Ewald summation for arbitrary orders of multipoles with particular emphasis on the self term, for which different expressions can be found in the literature.[28] \[\frac{\partial(\mathbf{y}_{k}\odot\mathbf{g})}{\partial\mathbf{y}_{k}}=\frac{\partial\text{Diag}(\mathbf{g})}{\partial\mathbf{y}_{k}}\mathbf{y}_{k}+\text{Diag}(\mathbf{g})\frac{\partial\mathbf{y}_{k}}{\partial\mathbf{y}_{k}}=\text{Diag}(\mathbf{g}) \tag{22}\] \[\mathbf{l}_{k}=\frac{1}{2}\frac{\partial a_{k}}{\partial\mathbf{y}_{k}}=\mathbf{\Omega}\mathbf{y}_{k}+\text{Diag}(\mathbf{g})\mathbf{T}^{\prime}(\mathbf{g}\odot\mathbf{y}_{k}) \tag{23}\] Once again we use the definition of the generalized dipole and field to finally obtain eq.(24).
\[\mathbf{l}_{k}= \mathbf{\Omega}\mathbf{y}_{k}+\text{Diag}(\mathbf{g})\mathbf{T} ^{\prime}\mathbf{d}_{k} \tag{24}\] \[= \mathbf{\Omega}\mathbf{y}_{k}+\text{Diag}(\mathbf{g})\mathbf{E}_ {k}^{\star}\] Eq.(12) can therefore be rewritten in terms of the generalized electric field \(\mathbf{E}_{k}^{\star}\) through the above derived quantities, eq.(25). \[\mathbf{y}_{1} =\mathbf{v} \tag{25}\] \[b_{k}\mathbf{y}_{k} =\mathbf{y}^{\star}_{k}=\mathbf{l}_{k-1}-a_{k-1}\mathbf{y}_{k-1}- b_{k-1}\mathbf{y}_{k-2}\] \[\mathbf{l}_{k} =\mathbf{\Omega}\mathbf{y}_{k}+\text{Diag}(\mathbf{g})\mathbf{E }_{k}^{\star}\] \[b_{k} =\sqrt{\mathbf{y}^{\star}_{k}\mathbf{y}^{\star}_{k}}=\Delta_{(k-1 )k}=\Delta_{k(k-1)}\] \[a_{k} =\mathbf{y}_{k}^{\dagger}\mathbf{\Omega}\mathbf{y}_{k}+\mathbf{d}_ {k}^{\dagger}\mathbf{E}_{k}^{\star}=\Delta_{kk}\] \(\mathbf{E}_{k}^{\star}\) can be evaluated by ES and the explicit expressions for \(\mathbf{E}^{\text{dir}}\), \(\mathbf{E}^{\text{self}}\) and \(\mathbf{E}^{\text{rec}}\) are shown below, however, for a broader discussion and derivation we refer to the following references.[25; 29; 30] Starting from the direct component, we identify the three dimensional electric field \(\mathbf{\widetilde{E}}_{i,k}^{\text{dir}}\) at the atomic position \(\mathbf{R}_{i}\) arising from the generalized dipole array \(\mathbf{d}_{k}\), where its three-dimensional contribution related to the \(j\)-th atom is denoted \(\mathbf{\vec{d}}_{j,k}\), as shown in eq.(26). \[L_{j,k} =\mathbf{\vec{d}}_{j,k}\nabla_{j} \tag{26}\] \[\mathbf{\widetilde{E}}_{i,k}^{\text{dir}} =-\sum_{\mathbf{n}}\sum_{j=1}^{N}L_{j,k}\frac{\partial}{\partial \mathbf{R}_{i}}\bigg{(}\frac{\text{erfc}(\tau\mid\mathbf{R}_{j}-\mathbf{R}_ {i}+\mathbf{n}\mid)}{\mid\mathbf{R}_{j}-\mathbf{R}_{i}+\mathbf{n}\mid}\] \[\quad+\sum_{j=1}^{N}L_{j,k}(1-s_{ij})\mathbf{T}_{ij}\mathbf{ \vec{d}}_{j,k}\bigg{)}\] In the above, \(\tau\) represents a real parameter governing the balance between the direct and reciprocal contributions. For a cubic cell of side \(h\), it is typically taken to be \(5/h\).[31]\(\tau\) is commonly chosen so that the direct term convergence is fast as the reciprocal contribution can be efficiently computed via FFT. This makes the summation over \(\mathbf{n}\) fastly converging, and only particles belonging to neighboring periodic images are therefore usually considered. \(\mathbf{E}^{\text{dir}}\) is practically computed by means of neighbor lists based on the choice of \(\tau\) determining the suitable cutoff and this ensures an efficient and linear-scaling evaluation. The self term \(\mathbf{\widetilde{E}}_{i,k}^{\text{self}}\) consists in the single term shown in eq.(27) which evaluation involves a negligible computational effort. \[\mathbf{\widetilde{E}}_{i,k}^{\text{self}}=\frac{2\tau^{3}}{3\sqrt{\pi}} \mathbf{\vec{d}}_{i,k} \tag{27}\] From a computational point of view, with standard \(\tau\) parameters, the most expensive and thus crucial term to evaluate is represented by the \(\mathbf{\widetilde{E}}_{i,k}^{\text{rec}}\) contribution. In order to discuss its explicit expression, we introduce the reciprocal conjugate vectors \((\mathbf{a}_{1}^{\star},\mathbf{a}_{2}^{\star},\mathbf{a}_{3}^{\star})\) which are related to their dual set by \(\mathbf{a}_{\alpha}^{\star}\cdot\mathbf{a}_{\beta}=\delta_{\alpha\beta}\), with \(\alpha,\beta=\{1,2,3\}\) and \(\delta_{\alpha\beta}\) being the Kronecker delta. In analogy to what done for \(\mathbf{n}\), we define \(\mathbf{m}\). 
\[\mathbf{m}=m_{1}\mathbf{a}_{1}^{\star}+m_{2}\mathbf{a}_{2}^{\star}+m_{3}\mathbf{a}_{3}^{\star}\quad m_{1},m_{2},m_{3}\in\mathbb{Z} \tag{28}\] We further introduce the structure factor \(S(\mathbf{m})\), defined in eq.(29) for a given \(\mathbf{m}\). \[S(\mathbf{m})=\sum_{j=1}^{N}\mathbf{\vec{d}}_{j,k}\cdot\mathbf{m}\exp\left(2i\pi\mathbf{m}\cdot\mathbf{R}_{j}\right) \tag{29}\] In the Ewald summation method the reciprocal component of the field is finally given in eq.(30). \[\mathbf{\widetilde{E}}_{i,k}^{\text{rec}}=-\frac{1}{\pi V}\sum_{\mathbf{m}\neq\mathbf{0}}\frac{\partial}{\partial\mathbf{R}_{i}}\bigg{(}\frac{\exp\left(-\pi^{2}\mathbf{m}^{2}/\tau^{2}\right)}{\mathbf{m}^{2}}S(\mathbf{m})\exp\left(-2i\pi\mathbf{m}\cdot\mathbf{R}_{i}\right)\bigg{)} \tag{30}\] The optimal choice of \(\tau\) makes the evaluation of eq.(30) (and therefore of the whole ES method) scale as \(\mathcal{O}(N^{3/2})\); however, the PME method sensibly improves the scaling by approximating the complex exponentials via interpolation. In the Smooth PME method (SPME) in particular, the complex exponentials are first rewritten in terms of the scaled fractional coordinates \(u_{\alpha j}\), eq.(31), and then interpolated by a \(p\)-degree B-spline function \(\theta_{p}(u_{\alpha j}-u_{\alpha})\) on a grid of size \(K_{1}\times K_{2}\times K_{3}\), and the final contribution due to the reciprocal space is given in eq.(32). \[\begin{split}& u_{\alpha j}=K_{\alpha}\mathbf{a}_{\alpha}^{*}\cdot\mathbf{R}_{j}\qquad\alpha=\{1,2,3\}\,,\qquad K_{\alpha}\in\mathbb{N}^{+}\\ &\exp\left(2i\pi\mathbf{m}\cdot\mathbf{R}_{j}\right)=\prod_{\alpha=1}^{3}\exp\left(2i\pi m_{\alpha}\frac{u_{\alpha j}}{K_{\alpha}}\right)\end{split} \tag{31}\] The \(\tilde{\mathbf{E}}_{i,k}^{\mathrm{rec}}\) is finally given by eq.(32). \[\tilde{\mathbf{E}}_{i,k}^{\mathrm{rec}}\approx-\frac{\partial}{\partial\mathbf{R}_{i}}\sum_{\mathbf{n}}\prod_{\alpha=1}^{3}\theta_{p}(u_{\alpha,i}-n_{\alpha})(G^{R}*D^{R})(\mathbf{n}) \tag{32}\] The \((G^{R}*D^{R})\) term is the convolution between the pair potential \(G^{R}\) discussed by Sagui _et al._ and the real space dipole array \(D^{R}\) defined below.[32] The use of fast Fourier transforms in the evaluation of (32) ensures an overall \(\mathcal{O}(N\log(N))\) scaling. \[\begin{split} D_{k}^{R}(k_{1},k_{2},k_{3})=\sum_{\mathbf{n}}\sum_{j}L_{j,k}\,\theta_{p}(u_{1,j}-k_{1}-K_{1}n_{1})\,\theta_{p}(u_{2,j}-k_{2}-K_{2}n_{2})\\ \theta_{p}(u_{3,j}-k_{3}-K_{3}n_{3})\end{split} \tag{33}\] ``` 1:SPME Grid allocation \((K_{1},K_{2},K_{3})\) and initialization 2:Neighbor list (direct space) generation 3:for\((l=1,R)\)do 4: Generate \(\mathbf{v}_{l}\) from a Rademacher distribution 5:\(\mathbf{y}_{1}^{(l)}=\mathbf{v}_{l}\) 6: call SPME-Lanczos (\(k=1\)): \(\Delta_{11}^{(l)}\) 7:for\((k=2,M+1)\)do 8: call SPME-Lanczos (general \(k\)): \(\mathbf{y}_{k}^{(l)}\), \(a_{k}^{(l)}\), \(b_{k}^{(l)}\) 9:endfor 10: Eigendecomposition of \(\boldsymbol{\Delta}^{(l)}:\mathbf{U}^{\dagger(l)}\boldsymbol{\Delta}^{(l)}\mathbf{U}^{(l)}=\widetilde{\boldsymbol{\Lambda}}^{(l)}\) 11:for\((k=1,M+1)\)do 12:\(E_{\mathrm{sum}}=E_{\mathrm{sum}}+[U_{1,k}^{(l)}]^{2}\sqrt{\widetilde{\lambda}_{k}^{(l)}}\) 13:endfor 14:endfor 15:Calculate the average over samples: \(\mathrm{Tr}[\sqrt{\mathbf{V}}]\approx\frac{3N}{R}E_{\mathrm{sum}}\) ``` **Algorithm 1** Schematic general representation of the SPME-based stochastic Lanczos algorithm.
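The following sketch mirrors one pass of Algorithm 1 through the recursion of eq.(25). The only system-specific ingredient is a callable returning the generalized field \(\mathbf{E}^{\star}_{k}\) for the generalized dipoles \(\mathbf{d}_{k}=\mathbf{g}\odot\mathbf{y}_{k}\); in the real implementation this is the SPME sum of direct, reciprocal and self contributions, while here a dense matrix product stands in for it so that the example runs, and all numerical parameters are illustrative.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def spme_lanczos_sample(v, Omega, g, field, M):
    """One Hutchinson sample of Algorithm 1: build the tridiagonal Delta with the
    generalized-field recursion of eq.(25) and return its quadrature contribution."""
    a, b = [], []
    y_prev = np.zeros_like(v)
    y = v.copy()
    for k in range(M + 1):
        d = g * y                       # generalized dipoles d_k = g (Hadamard) y_k
        E = field(d)                    # E*_k = E_dir + E_rec + E_self (SPME in the real code)
        l = Omega * y + g * E           # l_k = Omega y_k + Diag(g) E*_k
        ak = y @ (Omega * y) + d @ E    # a_k of eq.(25)
        a.append(ak)
        r = l - ak * y - (b[-1] * y_prev if b else 0.0)
        bk = np.linalg.norm(r)          # b_k of eq.(25)
        if k < M:
            b.append(bk)
            y_prev, y = y, r / bk
    theta, U = eigh_tridiagonal(np.array(a), np.array(b))  # nodes and eigenvectors of Delta
    return np.sum(U[0, :] ** 2 * np.sqrt(theta))            # quadrature of eq.(8)

# Toy stand-in quantities; in the MBD setting Omega_i = omega_i^2 and g_i = omega_i sqrt(alpha_i(0)).
rng = np.random.default_rng(3)
N = 20
D = 3 * N
omega = rng.uniform(0.4, 0.8, N)
alpha0 = rng.uniform(5.0, 15.0, N)
Omega = np.repeat(omega**2, 3)
g = np.repeat(omega * np.sqrt(alpha0), 3)
T = rng.standard_normal((D, D)) * 1e-4
T = 0.5 * (T + T.T)                     # dense placeholder for the (periodic) dipole tensor
field = lambda d: T @ d                 # the production code evaluates this via SPME instead

R, M = 200, 14
trace = 0.0
for _ in range(R):                      # embarrassingly parallel loop over Rademacher samples
    u = rng.choice([-1.0, 1.0], size=D)
    trace += spme_lanczos_sample(u / np.linalg.norm(u), Omega, g, field, M)
trace *= D / R                           # prefactor of eq.(9)
E_MBD = 0.5 * trace - 1.5 * np.sum(omega)
print(E_MBD)
```

Because each Rademacher sample is processed independently, the outer loop is the natural unit for the embarrassingly parallel distribution of the workload.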
The above algorithm was implemented in the Tinker-HP molecular dynamics package [33; 34] and will, in the following section, be numerically analyzed. The replica method (eq.(13)) has also been implemented and coupled to the SL-MBD method as this will allow us to perform a direct comparison for a few test cases with the newly proposed SPME version which numerical results will always refer to a fixed Ewald's \(\tau\) parameter (\(\tau=0.544590\)) corresponding to a real space cutoff of 7 Angstrom. ## IV Numerical results We start by considering results related to the simple replica method based on Eq.(13). In particular, for all the results we choose as a measure the first diagonal element of the \(\boldsymbol{\Delta}\) matrix calculated from the same fixed initial vector \(\mathbf{y}_{1}=\mathbf{v}\), chosen as usual from a Rademacher distribution. This choice will allow us to eliminate the stochastic noise from the computed \(\Delta_{11}\) values that otherwise would make harder the interpretation and comparison of the effects arising from long-range interactions introduced via both the replica and SPME methods. The first system analyzed is a small cubic box of dimension 18.64 Angstrom containing 216 water molecules in the liquid phase. Figure 2 shows the evolution of \(\Delta_{11}\) as a function of the cutoff radius \(R_{cut}\) that is used to determine the replicas identified by a set of \(\{\mathbf{n}\}\) vectors to be included in eq.(13). Even for a not highly symmetric system such as bulk water, the convergence is reached for a cutoff radius of nearly 30 Angstrom thus confirming the slow (and conditional) convergence rate that characterizes the replica method. The large cutoff radius required by the replica method, because of its consequent quadratic scaling, has a direct impact on the computational time as shown in Fig.3. In particular, for a 30 Angstrom cutoff the CPU-time required for the computation of the diagonal element chosen as observable reaches 1 second. The situation if quite different if the SPME-based algorithm is employed since in this case the overall convergence is determined by the number of grid points to be used in the solution of the reciprocal field contribution (\(K_{1},K_{2},K_{3}\) in eq.(31)) that also represents the computationally most Figure 2: First diagonal element of \(\boldsymbol{\Delta}\) computed via the replica method as a function of the cutoff radius for the cubic water box taken as test system. expensive part of the algorithm as the direct summation part is computed very efficiently in a linear-scaling fashion. Fig.(4) shows the convergence of our target quantity \(\Delta_{11}\) as a function of the number of grid points for the box of water undertaken as test system. We stress that, given the choice that we made to fix Ewald's \(\tau\) parameter, the only quantity governing the convergence is thus the grid size. We first note that the convergence has a monotonic behavior as a smaller grid size does not involve a physical truncation of the space and thus of the interactions as for the replica case that in fact shows an oscillatory behavior. It is now interesting to compare the computational cost required by the SPME-based approach to that of the replica method. In particular for a 18 point sized grid for which convergence is observed, the CPU time is \(10^{-2}\), therefore a factor 100 faster than the cumbersome replica method. 
The slow convergence rate observed for the replica method is further exacerbated when highly symmetrical systems are taken into consideration. Fig.5 shows the evolution of \(\Delta_{11}\) as a function of the cutoff radius, this time for a 14.2 Angstrom sided cubic box of diamond. In this case the cutoff radius reaches the extremely large value of 60 Angstrom before convergence is reached, with a huge impact on the resulting computational cost as shown in Fig.6. The system dependence of the proper cutoff radius observed for the replica method does not affect the SPME-Lanczos method, as can be seen in Fig.7 showing the \(\Delta_{11}\) convergence as a function of the number of grid points. Even in this case, convergence is observed starting from circa 20 points, similarly to that observed for water, as both boxes have quite similar size. In fact, convergence is ensured when a certain density of grid points is provided, independently of the system. In general a density of 1.2 points/Angstrom (for each of the three box dimensions) is enough to ensure convergence, and this is the default value chosen in our implementation. Figure 3: CPU time as a function of the cutoff radius relative to Fig.2, i.e. the waterbox. Figure 4: First diagonal element of \(\mathbf{\Delta}\) computed via the SPME-Lanczos method as a function of the grid points for the waterbox considered (only \(K_{1}\) is reported as the box is cubic). The initial Krylov subspace basis vector \(\mathbf{y}_{1}\) is the same as in Fig.2. Figure 5: First diagonal element of \(\mathbf{\Delta}\) as a function of the cutoff radius, computed via the replica method for a cubic box of diamond (14.2 Angstrom). Figure 6: CPU time as a function of the cutoff radius for the replica method for diamond in a cubic box of side 14.2 Angstrom. For highly periodic systems for which the replica method is particularly slow to converge, the computational gain provided by the SPME alternative becomes even more marked. For an 18-point grid, the \(\Delta_{11}\) computation via the SPME-Lanczos is nearly 350 times faster than its replica counterpart. Although our analysis focused, for the sake of clarity, on \(\Delta_{11}\), the same results hold for the convergence of the off-diagonal terms \(\Delta_{(k-1)(k)}\). Moreover, we note that the solution of the SPME-Lanczos equations, Eq.(12), does not spoil the orthogonality of the Krylov subspace basis vectors, \(\mathbf{y}_{j}^{\dagger}\mathbf{y}_{k}=\delta_{jk}\), as the set of vectors remains orthogonal by construction as in the original algorithm. Furthermore, we stress that for an accurate resolution of eq.(7), the number of quadrature points, i.e. the dimension \((M+1)\) of the Krylov subspace \(\mathcal{K}_{M+1}\), can be set to 15, regardless of the system size. This implies that the SPME-Lanczos algorithm does not suffer from the numerical instability (loss of orthogonality among basis vectors) of the standard Lanczos algorithm [35], typically encountered in applications where very large Krylov subspaces and thus basis vectors are required. [36] Being the construction of the tridiagonal matrix \(\mathbf{\Delta}\) the bottleneck step of the overall algorithm, it is of interest to probe its scaling as a function of the system size, as shown in Fig.8 for an increasingly large box of liquid water.
The plot shows that the SPME-SL algorithm deviates from linearity for larger system sizes, and this is explained by the \(N\log(N)\) scaling of the SPME method employed to compute the generalized field vectors, which are key contributions in the construction of the \(\mathbf{\Delta}\) matrix. The deviation from linearity is, however, rather contained even for the largest system considered, composed of approximately 100000 water molecules, which is completely out of reach for the standard replica method discussed earlier. For one single core, the overall time necessary to compute the final energy is equivalent to the time required to build \(\mathbf{\Delta}\) (Fig.8) multiplied by the number of random samples \(R\) involved in Hutchinson's trace estimator. For large systems in the order of \(10^{4}\) atoms or above, \(R\) can be taken to be around 300 with a resulting low relative standard deviation (0.5%), as analyzed in depth in reference [11]. However, the SPME-SL algorithm's strength is found in its embarrassingly parallel nature, since the random samples can be divided among the available processes while a simple reduction is required before the final trace evaluation (eq.(9)). Since the parallelization scheme is essentially the same as the one discussed in the original SL-MBD algorithm, we refer to a previous work [11] for an in-depth analysis of the scalability with respect to the number of processes as well as a detailed discussion of the parallelization strategy. Figure 7: First diagonal element of \(\mathbf{\Delta}\) computed via the SPME-Lanczos method as a function of the grid points (only \(K_{1}\) is reported as the box is cubic) for diamond in a box of side 14.2 Ångstrom. Figure 8: CPU time in seconds as a function of the number of atoms of increasingly larger water boxes (black line). The time refers to the 10 iterations required to build \(\mathbf{\Delta}\) for a given random sample within a standard \(\mathcal{K}_{10}\) Krylov subspace with the implemented SPME-SL algorithm. The red line represents the ideal linear scaling. ## V Conclusions We have derived, implemented and discussed the SPME-SL algorithm, where the stochastic Lanczos trace estimation scheme is coupled to the state-of-the-art Smooth Particle Mesh Ewald method. This was made possible by introducing the generalized field term contribution in the Lanczos iterative equations. Our combined approach allows for an embarrassingly parallel computation of many-body dispersion energies with the full inclusion of long-range interactions arising from all periodic images of the central simulation cell. The proposed algorithm undoubtedly outperforms truncation-based approaches such as the replica method, which is affected by slow and conditional convergence as well as by the employed quadratic-scaling double loops making the computation highly inefficient for large systems. The parallelism features of the SPME-SL algorithm, together with its \(N\log(N)\) scaling with the system size, allow for a fast Many-Body Dispersion treatment of very large periodic systems composed of hundreds of thousands of atoms and more. This work represents the natural extension to long-range PBC of our recent stochastic Lanczos MBD algorithm [11] and it focuses uniquely on the energy evaluation. Our focus will now be dedicated to the extension of these achievements to the energy nuclear gradients, towards large-scale condensed-phase molecular dynamics simulations including many-body dispersion effects.
Furthermore, alternative handling of PBC within the recently introduced ANKH linear-scaling framework will be pursued [37]. ###### Acknowledgements. This work has been funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant No 810367), project EMC2 (JPP). Computations have been performed at GENCI (IDRIS, Orsay, France and TGCC, Bruyeres le Chatel) on grant no A0070707671. ## Conflict of interest The authors have no conflicts to disclose.
2301.00939
Design and Control of a Novel Variable Stiffness Series Elastic Actuator
This paper expounds the design and control of a new Variable Stiffness Series Elastic Actuator (VSSEA). It is established by employing a modular mechanical design approach that allows us to effectively optimise the stiffness modulation characteristics and power density of the actuator. The proposed VSSEA possesses the following features: i) no limitation in the work-range of output link, ii) a wide range of stiffness modulation (~20Nm/rad to ~1KNm/rad), iii) low-energy-cost stiffness modulation at equilibrium and non-equilibrium positions, iv) compact design and high torque density (~36Nm/kg), and v) high-speed stiffness modulation (~3000Nm/rad/s). Such features can help boost the safety and performance of many advanced robotic systems, e.g., a cobot that physically interacts with unstructured environments and an exoskeleton that provides physical assistance to human users. These features can also enable us to utilise variable stiffness property to attain various regulation and trajectory tracking control tasks only by employing conventional controllers, eliminating the need for synthesising complex motion control systems in compliant actuation. To this end, it is experimentally demonstrated that the proposed VSSEA is capable of precisely tracking desired position and force control references through the use of conventional Proportional-Integral-Derivative (PID) controllers.
Emre Sariyildiz, Rahim Mutlu, Jon Roberts, Chin-Hsing Kuo, Barkan Ugurlu
2023-01-03T03:25:15Z
http://arxiv.org/abs/2301.00939v1
# Design and Control of a Novel Variable Stiffness Series Elastic Actuator ###### Abstract This paper expounds the design and control of a new Variable Stiffness Series Elastic Actuator (VSSEA). It is established by employing a modular mechanical design approach that allows us to effectively optimise the stiffness modulation characteristics and power density of the actuator. The proposed VSSEA possesses the following features: i) no limitation in the work-range of output link, ii) a wide range of stiffness modulation (-20Nm/rad to -1KNN/rad), iii) low-energy-cost stiffness modulation at equilibrium and non-equilibrium positions, iv) compact design and high torque density (-38nm/kg), and v) high-speed stiffness modulation (-3000Nm/rad/s). Such features can help boost the safety and performance of many advanced robotic systems, e.g., a cobot that physically interacts with unstructured environments and an exoskeleton that provides physical assistance to human users. These features can also enable us to utilise variable stiffness property to attain various regulation and trajectory tracking control tasks only by employing conventional controllers, eliminating the need for synthesising complex motion control systems in compliant actuation. To this end, it is experimentally demonstrated that the proposed VSSEA is capable of precisely tracking desired position and force control references through the use of conventional Proportional-Integral-Derivative (PID) controllers. Compliant robotics, safe robotics, series elastic actuators, variable stiffness actuators, physical robot-environment interaction. ## I Introduction To boost safety in physical-robot environment interaction, compliant actuators have been widely adopted by many different advanced robotic systems such as humanoids, cobots, quadrupeds, and exoskeletons [1, 2, 3, 4, 5]. A compliant actuation system could be developed by simply integrating an elastic element into the design of an actuator [5]. For example, Series Elastic Actuators (SEAs), one of the most popular compliant actuation systems in robotics, are developed by placing a spring between a conventional rigid actuator and link [6, 7, 8]. In addition to improving safety, the spring between the rigid actuator and link can provide several benefits such as low-cost and high-fidelity force control, mechanical energy storage, lower reflected inertia, higher tolerance to impact loads, and increased output power [8]. Despite the aforementioned benefits, the elastic elements integrated to compliant actuators introduce certain challenges and fundamental limitations in motion control [9]. For example, it is a well-known fact that the position control problem of a compliant actuator is more complicated than that of a conventional rigid actuator [10, 11, 12, 13, 14]. To suppress the vibrations and disturbances of a compliant actuator's link, researchers generally need to employ advanced motion controllers [14]. Another example is that although using a softer elastic element improves safety and transparency, the natural frequency of the compliant actuator decreases. This not only excites the vibrations at link side but also lowers the bandwidth of the actuator, thus limiting achievable position and force control performance in motion control applications [13, 14]. It is therefore essential to properly choose the stiffness of the elastic element of a compliant actuator based on the target control task [15]. 
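To see why a fixed series spring forces the compromise discussed next, consider the standard two-inertia SEA model (motor-side inertia \(J_m\), link-side inertia \(J_l\), series spring \(k\)); both the locked-output resonance \(\sqrt{k/J_m}\) and the free-output resonance \(\sqrt{k(J_m+J_l)/(J_m J_l)}\) drop as the spring softens, capping the usable control bandwidth. The sketch below uses illustrative values that are not the parameters of any actuator discussed in this paper.

```python
import numpy as np

# Illustrative two-inertia SEA parameters (assumed, not from the paper).
J_m = 0.05   # reflected motor-side inertia [kg m^2]
J_l = 0.40   # link-side inertia [kg m^2]

def sea_frequencies(k):
    """Locked-output and free-output resonance frequencies [Hz] of the standard
    two-inertia series elastic actuator model with spring stiffness k [Nm/rad]."""
    f_locked = np.sqrt(k / J_m) / (2.0 * np.pi)
    f_free = np.sqrt(k * (J_m + J_l) / (J_m * J_l)) / (2.0 * np.pi)
    return f_locked, f_free

for k in (20.0, 100.0, 1000.0):   # soft ... stiff spring [Nm/rad]
    f1, f2 = sea_frequencies(k)
    print(f"k = {k:6.0f} Nm/rad  locked {f1:5.1f} Hz  free {f2:5.1f} Hz")
# A softer spring improves safety and transparency but pushes both resonances down,
# which is the bandwidth/safety compromise a variable stiffness element removes.
```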
However, this is mostly impractical for compliant actuators with fixed elastic elements, which often leads to compromise between safety and performance [10, 16]. A simple yet efficient solution for this fundamental problem could be achieved by integrating a compliant mechanical component with variable stiffness property to actuators in series or parallel [17, 18]. Adaptable compliance mechanisms that can alter the stiffness of actuators mechanically have been developed to meet the different compliance requirements of motion control tasks such as soft actuation in human-robot interaction and stiff actuation in trajectory tracking. Although no standard terminology exists, such actuators are generally called Variable Stiffness Actuators (VSAs) in the literature. A comprehensive survey on the design of VSAs can be found in [18, 19]. Among them, antagonistic actuation is one of the most well-known and widely used stiffness modulation methods in VSAs. Inspired by mammalian anatomy, this actuation method has been studied since early 1980s [20], and different antagonistic actuators have been developed and used in various robotic applications, such as legged locomotion and upper-limb rehabilitation, since late 1990s [21, 22]. The simplest antagonistic actuator can be designed by using an agonistic-antagonistic setup which connects two motors to a link via two nonlinear springs, similar to biceps and triceps in the human arm [19, 23]. While the equilibrium position can be adjusted by rotating two motors in the same direction, counter-rotation of motors alters the stiffness of the actuator by changing the spring preload. Despite its simplicity, this bioinspired actuation method has several drawbacks in practice, e.g., i) the control problems of stiffness modulation and equilibrium position are coupled, ii) the output torque of the actuator is limited by the maximum torque of each motor, iii) the potential energy capacity of nonlinear springs cannot be used entirely, and iv) the energy consumption is high because preloading nonlinear springs for stiffness modulation requires constant power drain, even when the actuator does not perform net mechanical work at equilibrium positions [24; 25]. Numerous different antagonistic and non-antagonistic VSAs have been proposed to tackle the drawbacks of the agonistic-antagonistic actuation setup in the last two decades [18; 19]. For example, while the output torque of antagonistic actuators is increased using bi-directional configuration in [26], a partially decoupled motion control problem is obtained using a quasi-antagonistic actuation mechanism in [27]. However, the aforementioned problems could not be entirely addressed using antagonistic actuation systems [18; 19]. This has motivated many researchers to build non-antagonistic VSAs. The control problems of the equilibrium position and compliance of the actuator's output link are decoupled using a new mechanical design approach in Maccepa [28]. The stiffness of the actuator is modulated by controlling the tension of a linear spring. A similar stiffness modulation approach is employed in DLR-VSJ, and a light-weight and compact VSA is designed by changing only the cam disk of the joint in [29]. Nevertheless, similar to antagonistic actuators, high-energy-cost stiffness modulation remains a challenging problem in these non-antagonistic VSA design approaches. 
Since the stiffness of the Maccepa and DLR-VSJ is modulated by pretensioning springs, they require constant power drain at equilibrium positions, thus leading to high energy consumption [28 - 30]. The energy-cost of stiffness modulation has been improved using different VSA design approaches in the last decade. The stiffness of the output link is modulated by changing the positions of the springs and pivot points on a lever arm in AWAS [31; 32]. Low energy cost stiffness modulation (e.g., theoretically zero power drain at equilibrium positions) could be achieved using this VSA design approach. The stiffness range, however, is limited by the size of the actuator [31; 33]. In vsatT, the stiffness of the actuator is modulated by controlling the effective length of a lever arm [34; 35]. This stiffness modulation approach allows us to attain not only zero power drain at equilibrium positions but also infinite-range stiffness modulation with compact actuators. However, the power drain by the motor dedicated to stiffness modulation becomes unbounded as the stiffness of the actuator approaches infinity [33; 34]. Variable length leaf spring mechanisms have also been employed to develop energy efficient VSAs [36; 37; 38; 39]. Compared to the other antagonistic and non-antagonistic VSAs, recent studies show that variable length leaf spring mechanisms can provide several benefits in practice, e.g., zero power drain at equilibrium positions, fast and infinite-range stiffness modulation, and bounded power drain for all stiffness ranges at equilibrium and non-equilibrium positions [30; 33]. These features could be very useful in biomedical engineering applications as shown in [30; 38]. Nevertheless, when it comes to building a compact VSA that can be integrated to different robotic systems such as cobots, the variable length leaf spring mechanisms may involve several drawbacks such as low torque/power density and work-space limitations [36; 37; 38; 39]. It is noted that a non-antagonistic VSA can be simply built by employing a discrete stiffness modulation method where multiple springs could be integrated to actuators in series or parallel [40; 41; 42]. The main drawback of this actuation method is the limited stiffness range which depends on the number of springs employed in the actuator design. Moreover, the discrete stiffness modulation method leads to several challenges in controller analysis and synthesis such as the stability problem of switching systems [40; 42]. Therefore, continuous stiffness modulation methods are mainly considered in this paper. The existing VSAs have their own merits and demerits. While they are highly functional in their own domain of use, they may however fall-short in complying with all the desirable technical specifications of an ideal compliant actuator for practical applications: i) a compact and simple mechanical design that allows to easily reconfigure a VSA for different applications, ii) no limitation in motion range, iii) a wide range of stiffness modulation, iv) rapid stiffness change, and v) energy efficient actuation [30]. In general, researchers manage a trade-off to target only few of the aforementioned qualities, leading to different compromises such as high energy consumption or limited motion control performance in robotic applications. Hence, despite many recent advances, more effort should be put into the development of VSAs [18; 19; 30]. This is summarised using the examples of existing VSAs in the literature in Table I. 
To this end, this paper proposes a new VSSEA which consists of three main components: i) a rigid actuator that independently controls the equilibrium position, ii) a novel Variable Stiffness Actuation Mechanism (VSAM), and iii) a direct drive motor that independently adjusts the stiffness. The proposed VSSEA is simply developed by integrating the VSAM into a rigid actuator. This modular design approach allows us to easily optimise the stiffness modulation characteristics and output power/torque of the VSSEA for different robotic applications. The stiffness of the actuator is modulated by changing the effective length of a group of leaf springs of the VSAM through a direct drive motor. This stiffness modulation technique provides several benefits: i) stiffness modulation over a large range, i.e., from near-zero to infinite stiffness theoretically, ii) high-speed stiffness modulation using a relatively slow motor, and iii) zero/near-zero energy consumption for holding/altering the stiffness at equilibrium positions, and low-energy-cost stiffness modulation at non-equilibrium positions. Moreover, the proposed VSSEA has no limitation in motion range. To the best of our knowledge, the existing VSAs have yet to combine all these desired features of our proposed VSSEA [30].

Table I: Comparison of the proposed VSSEA with existing VSAs.

| Actuator | Deflection range [deg] | Motion range [deg] | Stiffness range [Nm/rad] | Energy cost of fixed stiffness at equilibrium | Energy cost of infinite-range stiffness | Torque density [Nm/kg] | Stiffness modulation speed [Nm/rad/s] |
|---|---|---|---|---|---|---|---|
| VSSEA | ±25 | no limitation | 20-1000 (b) | ZEC | BEC | 35.72 | 3000 |
| Maccepa | ±60 | ±90 (c) | 5-110 | NZEC | NA | 20.83 | 40 |
| DLR-VSJ | ±15 | ±180 | 50-820 | NZEC | NA | 22.27 | 2350 |
| AWAS | ±12 | ±120 | 30-1500 | ZEC | NA | 31.1 | 420 |
| AWAS-II | ±18 | ±150 | 10-10000 | ZEC | NA | 9.76 | 4000 |
| vsaUT-II | ±45 | ±180 (a) | 0.5-∞ | ZEC | NBEC | 8.7 | 1000 |
| VSA-I [36] | ±12 | NA | 250-3000 (b) | ZEC | BEC | 6.1 | NA |
| VSA-II [30] | ±75 | ±75 | 10-8000 (b) | ZEC | BEC | Passive (3 kg) | 10000 |

Table I is prepared using the experimental results given in the papers unless the data is provided in the references. (a) Although the motion range of the vsaUT is limited (e.g., the vsaUT-II's motion range is ±180°), the vsaUT can perform continuous motions without any motion range limitations [46]. (b) Infinite-range stiffness modulation can be achieved using leaf spring based VSAs. (c) [28] states that the motion range of the Maccepa can be increased. ZEC: Zero Energy Consumption; NZEC: Non-Zero Energy Consumption; BEC: Bounded Energy Consumption when performing infinite-range stiffness modulation; NBEC: Non-Bounded Energy Consumption when performing infinite-range stiffness modulation; NA: Not Applicable because the actuator cannot provide infinite-range stiffness modulation.

The rest of the paper is organised as follows. In Section II, the mechanical design of the VSSEA is presented.
In Section III, the dynamic model of the actuator is derived by using the analogy of a mass-spring-damper system and Euler-Bernoulli beam theory. In Section IV, the performance of the VSSEA is experimentally verified. The paper ends with the discussion and conclusion given in Sections V and VI.

## II Mechanical Design

### Variable Stiffness Series Elastic Actuator:

Figure 1 illustrates the CAD model and the first prototype of the VSSEA. It comprises i) a conventional rigid actuator that includes a servo motor and a gearbox, illustrated by M1, ii) a novel variable stiffness actuation mechanism, illustrated by VSAM, and iii) a direct drive servo motor, illustrated by M2 in the figure. The conventional rigid actuator M1 is used to independently control the equilibrium position of the output link. The proposed modular design approach enables us to freely tune the output torque and speed of the VSSEA. For example, we used a Maxon EC90 flat motor and a 1:100 ratio harmonic drive to achieve ~100 Nm output torque and ~π/2 rad/s output speed in the first prototype of the actuator. The output power of the VSSEA can be directly adjusted by employing a different servo motor and/or gearbox in the design of the conventional rigid actuator M1. The second motor M2 is used to independently control the stiffness of the actuator via the VSAM. The low-energy-cost stiffness modulation feature of the VSAM, which is explained in Section III, allowed us to use a Maxon EC60 direct drive flat motor for the stiffness control of the actuator. As shown in Fig. 1, the proposed VSSEA is simply designed by integrating the VSAM into the conventional rigid actuator M1. This modular design approach provides great flexibility in building a VSA for different robotic applications. Let us now present the mechanical design of the novel VSAM, which provides important features, such as a wide range of stiffness modulation and energy efficiency, in compliant actuation.

### Variable Stiffness Actuation Mechanism:

Figure 2 illustrates the CAD model and the first prototype of the VSAM. The design comprises i) eight radially distributed in-parallel leaf springs, ii) two rollers for each leaf spring to reduce friction and improve energy efficiency, and iii) a ball screw mechanism to move the rollers along the leaf springs, as illustrated in the figure. The stiffness of the actuator is modulated by changing the position of the rollers, which is controlled by the ball screw mechanism driven by the second motor M2. The VSSEA is in its softest mode when the rollers are at the free ends of the leaf springs, and the stiffness of the actuator increases as the rollers move towards the fixed ends. With the nonlinear dynamic behaviour of the VSAM, a wide range of stiffness modulation can be obtained by simply changing the effective length of the leaf springs through the position control of the rollers. This allows us to perform large stiffness modulations within a short time, and this feature can provide several benefits in robotic applications [27]. Another important feature of the proposed VSSEA is low-energy-cost stiffness modulation. For example, the VSAM does not require constant power drain to hold the stiffness constant at equilibrium positions.

## III Dynamic Model

The dynamic model of the VSSEA is obtained by employing the analogy of a mass-spring-damper system and the Euler-Bernoulli beam theory.

### Variable Stiffness Series Elastic Actuator:

The dynamic model of the VSSEA is illustrated in Fig. 3.
In this figure, \(J_{\bullet}\) represents the inertia of motor 1, the gearbox, motor 2, and the output link when \(\bullet\) is \(m1\), \(g\), \(m2\) and \(l\), respectively; \(\tau_{\bullet}\), \(b_{\bullet}\), and \(q_{\bullet}\) similarly represent the torque, viscous friction coefficient, and angle of motor 1, the gearbox, motor 2, and the output link, respectively; \(\dot{q}_{\bullet}\) and \(\ddot{q}_{\bullet}\) represent the first and second order derivatives of \(q_{\bullet}\), i.e., angular velocity and angular acceleration, respectively; and \(k\) represents the stiffness of the actuator, which can be defined as a nonlinear function of \(q_{m2}\) and \(q_{l}\), as shown in Section III.C.

Figure 1: CAD model and first prototype of the novel VSSEA. M1: Motor 1, M2: Motor 2, and VSAM: Variable Stiffness Actuation Mechanism.

Figure 2: CAD model and first prototype of the VSAM.

While the first motor, denoted by \(m1\) in Fig. 3, is used to control the equilibrium position of the actuator, the second motor, denoted by \(m2\), is used to independently modulate the stiffness of the output link. The dynamic model of the VSSEA can be derived from this figure as follows:

\[\begin{aligned} \bar{J}_{m1}\ddot{q}_{m1}+\bar{b}_{m1}\dot{q}_{m1}&=\tau_{m1}-N^{-1}\tau_{s}-\tau_{m1}^{dis}\\ J_{l}\ddot{q}_{l}+b_{l}\dot{q}_{l}&=\tau_{s}-\tau_{l}^{dis}\\ J_{m2}\ddot{q}_{m2}+b_{m2}\dot{q}_{m2}&=\tau_{m2}-\tau_{s}^{dis}-\tau_{m2}^{dis} \end{aligned}\tag{1}\]

where \(\tau_{\bullet}^{dis}\) represents the unknown/unmodelled disturbances of motor 1, motor 2, and the output link when \(\bullet\) is \(m1\), \(m2\) and \(l\), respectively; \(\tau_{s}\) represents the torque exerted by the nonlinear springs of the VSAM on motor 1 and the output link; \(\tau_{s}^{dis}\) represents the disturbance torque exerted by the nonlinear springs of the VSAM on motor 2 in stiffness modulation; and \(\bar{J}_{m1}=J_{m1}+N^{-2}J_{g}\) and \(\bar{b}_{m1}=b_{m1}+N^{-2}b_{g}\), where \(N\) is the gear ratio. Equation (1) gives a simple yet useful dynamic model for the proposed VSSEA. However, more effort should be expended on understanding the nonlinear dynamics of the VSAM, i.e., deriving \(\tau_{s}\) and \(\tau_{s}^{dis}\) in Eq. (1). This will enable us to explain important features such as the energy efficiency of the VSSEA.

### Variable Stiffness Actuation Mechanism:

To derive the model of the VSAM, let us focus on the first leaf spring illustrated in Fig. 4a. In this figure, \(\mathbf{F}_{x1}\), \(\mathbf{F}_{y1}\), and \(\mathbf{F}_{z1}\) represent the forces exerted by the leaf spring on the roller along the \(\mathbf{x}_{1}\), \(\mathbf{y}_{1}\), and \(\mathbf{z}_{1}\) axes of the local coordinate frame on the leaf spring, respectively. It is noted that the proposed analysis can be similarly applied to the other leaf springs, e.g., the second leaf spring illustrated in Fig. 4b. When it is assumed that only the first leaf spring is used in the design of the VSAM, the kinematic and static equilibrium equations of the output link can be directly obtained from Fig. 4c and Fig. 4d as follows:

\[\delta=2r\sin\left(q_{l}/2\right)\tag{2}\]

\[\tau_{s}-\tau_{l}=\sqrt{F_{y1}^{2}+F_{z1}^{2}}\,r-\tau_{l}=0\tag{3}\]

where \(\delta\) is a kinematic constraint that relates the angle of the output link to the deflection of the leaf spring; \(r\) represents the magnitude of the distance vector \(\mathbf{r}_{1}\) illustrated in Fig. 4c; and \(F_{\bullet}\) represents the magnitude of the force vector \(\mathbf{F}_{\bullet}\), in which \(\bullet\) can be \(x_{1}\), \(y_{1}\) and \(z_{1}\), as shown in Fig. 4d.
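For illustration, the coupled dynamics in Eq. (1) can be integrated numerically. The following is a minimal sketch rather than the authors' implementation: it ignores the disturbance terms, approximates the VSAM torque by a linear series spring, \(\tau_{s}=k\,(q_{m1}/N-q_{l})\) with a constant \(k\) (the actual nonlinear relation is derived in the remainder of Section III), and uses placeholder values for the inertias, friction coefficients and gear ratio.

```python
# Minimal sketch of the VSSEA rigid-body dynamics in Eq. (1).
# All numerical values are illustrative placeholders, not identified parameters.
import numpy as np
from scipy.integrate import solve_ivp

J_m1, J_l, J_m2 = 0.5, 0.05, 0.01   # reflected motor-1, link and motor-2 inertias [kg m^2]
b_m1, b_l, b_m2 = 0.2, 0.05, 0.01   # viscous friction coefficients [Nm s/rad]
N = 100.0                           # gear ratio of the harmonic drive
k = 200.0                           # assumed constant joint stiffness [Nm/rad]

def spring_torque(q_m1, q_l):
    # Simplification: linear series spring between gearbox output and link.
    return k * (q_m1 / N - q_l)

def dynamics(t, x, tau_m1=1.0, tau_m2=0.0):
    q_m1, dq_m1, q_l, dq_l, q_m2, dq_m2 = x
    tau_s = spring_torque(q_m1, q_l)
    ddq_m1 = (tau_m1 - tau_s / N - b_m1 * dq_m1) / J_m1   # motor 1 + gearbox
    ddq_l = (tau_s - b_l * dq_l) / J_l                    # output link
    ddq_m2 = (tau_m2 - b_m2 * dq_m2) / J_m2               # stiffness motor (disturbances ignored)
    return [dq_m1, ddq_m1, dq_l, ddq_l, dq_m2, ddq_m2]

sol = solve_ivp(dynamics, (0.0, 2.0), np.zeros(6), max_step=1e-3)
print("final link angle [rad]:", sol.y[2, -1])
```

Replacing the constant \(k\) with the roller-position-dependent stiffness of the VSAM would reproduce the variable stiffness behaviour of the actuator.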
When the other leaf springs illustrated in Fig. 2a are considered, the static equilibrium equation of the output link can be obtained by simply expanding Eq. (3) as follows:

\[\tau_{s}-\tau_{l}=\sum_{i=1}^{8}\sqrt{F_{y_{i}}^{2}+F_{z_{i}}^{2}}\,r-\tau_{l}=0\tag{4}\]

where \(F_{x_{i}}\), \(F_{y_{i}}\), and \(F_{z_{i}}\) similarly represent the forces exerted by the \(i^{th}\) leaf spring, and \(r\) represents the magnitude of the distance vector \(\mathbf{r}_{i}\). Equation (4) shows that the leaf spring forces should be identified to obtain the dynamic model of the VSSEA.

Figure 3: Dynamic model of the VSSEA.

Figure 4: Model of the Variable Stiffness Actuation Mechanism.

Figure 5: 3D large deflection model of the first leaf spring using FEM.

### Leaf Springs:

The leaf springs of the proposed VSAM not only bend along the lateral axes \(\mathbf{y}_{1}\) but also slightly rotate about the longitudinal axes \(\mathbf{x}_{1}\) at non-equilibrium positions [43]. For the sake of simplicity, the 3D deflection model of the leaf spring illustrated in Fig. 5 is numerically obtained using the Finite Element Method (FEM). While Fig. 5a shows the FEM model at an equilibrium position, Fig. 5b shows the maximum deflection of the leaf spring when the output link rotates by about \(20^{\circ}\). As shown in this figure, the largest deflection occurs along the lateral axis \(\mathbf{y}_{1}\). Figure 5c illustrates the static output torque of the actuator for different stiffness configurations when the deflection of the output link increases.

The disturbance torque exerted by the VSAM on the stiffness modulation motor can be calculated as follows:

\[\text{Large Deflection Model:}\quad\tau_{s}^{dis}=\sum_{i=1}^{8}F_{x_{i}}r\tag{10}\]

\[\text{Small Deflection Model:}\quad\tau_{s}^{dis}=\frac{48EI}{\eta^{3}q_{m2}^{3}}r^{2}\sin\left(\frac{q_{l}}{2}\right)\tan\left(\varphi\right)\tag{11}\]

where \(\varphi=F_{y1}L^{2}/(2EI)\) [44], \(E\) and \(I\) are the Young's modulus and the second moment of area of a leaf spring, and \(\eta\) is the transmission ratio of the ball screw so that \(\eta q_{m2}\) corresponds to the effective length of the leaf springs set by the roller position. Figure 8 illustrates the disturbance torque of the stiffness modulation motor \(\tau_{s}^{dis}\) and the static output torque of the VSSEA \(\tau_{s}=\tau_{l}\) when the deflection of the output link increases. Since \(\tau_{s}^{dis}\) is zero when \(q_{l}=q_{g}\), i.e., when the deflection of the output link is zero, the VSAM does not consume energy to keep the stiffness constant at equilibrium positions.
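To make the small-deflection expression concrete, the sketch below evaluates Eq. (11) for a given link deflection and roller position. It is only an illustrative calculation: the material and geometry values are assumed placeholders, the effective leaf-spring length is taken as \(L_{eff}=\eta q_{m2}\), and \(\varphi\) is evaluated with this effective length.

```python
# Sketch of the small-deflection disturbance torque in Eq. (11).
# Geometry and material values are illustrative placeholders.
import numpy as np

E = 200e9            # Young's modulus of spring steel [Pa] (assumed)
w, t = 0.02, 0.002   # leaf-spring cross-section width/thickness [m] (assumed)
I = w * t**3 / 12.0  # second moment of area [m^4]
r = 0.04             # moment-arm magnitude |r_i| [m] (assumed)
eta = 0.005          # ball-screw transmission so that L_eff = eta * q_m2 [m/rad] (assumed)

def disturbance_torque(q_l, q_m2, F_y):
    """tau_s^dis for link deflection q_l [rad], roller position q_m2 [rad]
    and lateral spring force F_y [N], following Eq. (11)."""
    L_eff = eta * q_m2
    phi = F_y * L_eff**2 / (2.0 * E * I)      # spring-tip slope angle (assumed L = L_eff)
    return (48.0 * E * I / L_eff**3) * r**2 * np.sin(q_l / 2.0) * np.tan(phi)

print(disturbance_torque(0.0, 50.0, 5.0))   # zero at zero deflection (equilibrium)
print(disturbance_torque(0.2, 50.0, 5.0))   # grows with the link deflection
```

At zero deflection the disturbance torque vanishes, which is the property exploited for holding the stiffness at equilibrium positions without consuming energy.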
When the deflection is small, the energy consumption of the VSAM is negligible because \(\tau_{s}^{dis}\) is very low regardless of \(\tau_{s}\). However, the VSAM consumes higher energy as the deflection of the output link increases. It is clear from Fig. 8 that the disturbance torque of the stiffness modulation motor is always limited within the work-range of the VSSEA. This disturbance can be further suppressed using novel mechanical designs, e.g., as in [45].

Figure 7: Output torque and stiffness of the VSSEA. LDM: Large Deflection Model and SDM: Small Deflection Model.

Figure 8: Disturbance torque of the stiffness modulation motor.

## IV Motion Control of the VSSEA

In this section, the position and force control problems of the VSSEA are discussed. It is a well-known fact that the motion control problem of compliant actuators is more complicated than that of conventional stiff actuators, particularly in position control [10]. To precisely track link trajectories or interact with unstructured environments, internal and external disturbances should be compensated using adaptive and robust controllers [9-15]. This section experimentally shows that the proposed VSSEA allows us to conduct high-performance motion control tasks using conventional PID controllers. The experimental setup was built using ESCON 50/5 motor drivers, 1000 ppr encoders at the motors and a 10000 ppr encoder at the link. A PC with a Linux operating system was employed to perform the real-time motion control experiments with a 1 ms sampling time.

### _Position Control:_

Let us start with the position control problem of the VSSEA. Figure 9 illustrates the position control experiments when a PID controller is synthesised for controlling the angle of the first motor as follows:

\[\tau_{m1}=K_{p}\left(q_{g}^{ref}-q_{g}\right)+K_{i}\int\left(q_{g}^{ref}-q_{g}\right)dt+K_{d}\left(\dot{q}_{g}^{ref}-\dot{q}_{g}\right)\tag{12}\]

where \(q_{g}^{ref}\) and \(\dot{q}_{g}^{ref}\) represent the position and velocity references of the gearbox angle \(q_{g}\), respectively. External disturbances up to 5 Nm and 15 Nm were applied to the output link after 2 seconds when the VSSEA was in soft mode (21 Nm/rad) and stiff mode (985 Nm/rad), respectively. Although the transient response was good, except for a relatively high overshoot at the link side, the link of the actuator was very sensitive to external disturbances, particularly when the actuator was in soft mode. This is expected because the angle of the link \(q_{l}\) is not used in the position controller synthesis. To improve the robustness against external loads, let us use \(q_{l}\) in the feedback controller synthesis. In this experiment, the control signal was designed as follows:

\[\tau_{m1}=K_{p}\left(q_{l}^{ref}-q_{l}\right)+K_{i}\int\left(q_{l}^{ref}-q_{l}\right)dt+K_{d}\left(\dot{q}_{l}^{ref}-\dot{q}_{l}\right)\tag{13}\]

where \(q_{l}^{ref}\) and \(\dot{q}_{l}^{ref}\) similarly represent the position and velocity references of the VSSEA's link, respectively.

Figure 9: Position regulation control of motor 1 when external loads are applied to the link. \(K_{p}=15000\), \(K_{i}=500\), \(K_{d}=75\) and \(q_{g}^{ref}=0.5\pi\).
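A discrete-time version of the PID laws in Eqs. (12) and (13) can be sketched as follows. This is not the authors' controller code; the gains are the motor-side values reported in the caption of Fig. 9, and the sensor and actuator interfaces are omitted.

```python
# Minimal discrete-time PID sketch of the position controllers in Eqs. (12)-(13).
import math

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0

    def update(self, q_ref, q, dq_ref=0.0, dq=0.0):
        error = q_ref - q
        self.integral += error * self.dt
        # tau = Kp*e + Ki*int(e) dt + Kd*(dq_ref - dq), cf. Eqs. (12)-(13)
        return self.kp * error + self.ki * self.integral + self.kd * (dq_ref - dq)

# Gains reported for the motor-side regulation experiment (Fig. 9); 1 ms sampling time.
controller = PID(kp=15000.0, ki=500.0, kd=75.0, dt=1e-3)
tau_m1 = controller.update(q_ref=0.5 * math.pi, q=0.0)
print(tau_m1)
```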
External disturbances up to 10 Nm were similarly applied when the actuator was in stiff mode; however, only external disturbances of less than 1 Nm could be applied due to the robust stability problems encountered in the soft mode of the actuator. Figure 10 shows that the robustness against external loads was improved by using \(q_{l}\) in the feedback controller synthesis. However, the performance of the transient response was notably degraded by large vibrations when the actuator was in soft mode (see Fig. 10(b)). Moreover, only small external disturbances could be suppressed due to the robust stability problems. It is a well-known fact that more advanced controllers should be employed for the robust position control problem of compliant actuators [10]. By changing the stiffness of the actuator, this paper proposes a simple yet effective solution for this challenging problem, as shown in Fig. 10.

When we used the same PID controller in trajectory tracking control, we obtained similar results, as illustrated in Fig. 11. Due to the well-known bandwidth limitations of compliant systems, increasing the speed of the reference trajectory notably degraded the position control performance when the actuator was in soft mode [3]. The VSSEA could not follow the 1 Hz reference trajectory, as illustrated in Fig. 11(b). The performance of trajectory tracking control could be easily improved by increasing the stiffness of the actuator, as illustrated in Figs. 11(c) and 11(d).

### _Force Control:_

By using Hooke's law, the force control problem of the compliant actuator was described as a position control problem, and force control experiments were performed by controlling the deflection of the output link at non-equilibrium positions. Similar to the position control experiments, a simple PID controller was synthesised by feeding back the deflection angle of the output link, i.e., \(\Delta_{q}=q_{l}-q_{g}\), where \(q_{l}\) and \(q_{g}=q_{m1}/N\) are the link and gear angles, respectively. Figure 12 illustrates the force control experiments of the VSSEA. The force control signal was designed using Eq. (14):

\[\tau_{m1}=K_{p}\left(\Delta_{q}^{ref}-\Delta_{q}\right)+K_{i}\int\left(\Delta_{q}^{ref}-\Delta_{q}\right)dt+K_{d}\left(\dot{\Delta}_{q}^{ref}-\dot{\Delta}_{q}\right)\tag{14}\]

where \(\Delta_{q}^{ref}\) and \(\dot{\Delta}_{q}^{ref}\) represent the position and velocity references of the link deflection \(\Delta_{q}=q_{l}-q_{g}\), respectively. When the actuator was in soft mode, relatively large link deflections occurred for small output torques, as shown in Figs. 12(a) and 12(b). This allows the actuator to physically interact with different environments in a safe manner. To achieve higher output torque, the stiffness of the actuator should be increased, as illustrated in Figs. 12(c) and 12(d). This, however, may degrade safety in contact motion. It is clear from Fig. 12 that the proposed VSSEA enables us to conduct high-performance force control applications using a conventional PID controller.

Figure 10: Position regulation control of the output link when external loads are applied to the link. \(K_{p}=5000\), \(K_{d}=95\), \(K_{i}=35\), and \(q_{l}^{ref}=0.5\pi\).

Figure 11: Position trajectory tracking control of the output link when \(K_{p}=5000\), \(K_{d}=95\), \(K_{i}=35\), and \(q_{l}^{ref}=K\left(1-\cos\left(2\pi f\left(t-1\right)\right)\right)\), where \(K\) is \(0.5\pi\) and \(2\pi\).
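Because the output torque follows from the link deflection through Hooke's law, a torque reference can be converted into a deflection reference and tracked with the same PID structure as in Eq. (14). The snippet below sketches this conversion with only the proportional term written out; the stiffness value and gain are illustrative placeholders.

```python
# Sketch of deflection-based force (torque) control, cf. Eq. (14).
def torque_to_deflection_reference(tau_ref, k_joint):
    # Hooke's law: tau = k * delta_q  =>  delta_q_ref = tau_ref / k
    return tau_ref / k_joint

def force_control_torque(tau_ref, q_l, q_g, k_joint, kp=5000.0):
    delta_q = q_l - q_g                           # measured link deflection
    delta_q_ref = torque_to_deflection_reference(tau_ref, k_joint)
    return kp * (delta_q_ref - delta_q)           # proportional term of Eq. (14) only

# Example: request 10 Nm with the actuator in soft mode (k = 21 Nm/rad).
print(force_control_torque(tau_ref=10.0, q_l=0.3, q_g=0.0, k_joint=21.0))
```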
### _Stiffness Modulation:_

Figure 13 illustrates the position control experiments of motor 2 at different speeds when the actuator is at equilibrium positions. The motor did not draw current, i.e., the VSAM did not consume energy, to keep the stiffness of the actuator fixed at an equilibrium position, as shown in Fig. 13. The current of motor 2 was negligible at low speeds of stiffness modulation, e.g., 0.1 Hz in Fig. 13a. As the speed of stiffness modulation increased, the motor drew a higher current due to the increased frictional and inertial disturbances of the VSAM (see Fig. 13b). This is expected because the energy consumption of all variable stiffness actuators becomes higher as the speed of stiffness modulation is increased [33, 45]. Since the frictional and inertial disturbances were mitigated in the design of the VSAM, the stiffness modulation could be performed using low current drains at equilibrium positions, as shown in Fig. 13.

Figure 14 illustrates the stiffness modulation experiments when the actuator is at non-equilibrium positions. While the deflection angle of the output link was regulated, the stiffness of the actuator was modulated by 10% in this experiment. Figures 14a and 14b show that the current of motor 2, \(I_{m2}\), was low when the 10% stiffness modulation was performed in the stiff mode of the VSSEA. This is expected because, as shown in Eq. (11), the small deflections of the leaf springs in stiff mode could lead to low disturbance torques exerted by the VSAM on motor 2. Compared to other energy efficient and infinite-range variable stiffness actuators, e.g., [34, 35], this feature of the proposed VSAM allows us to perform infinite-range stiffness modulation using bounded control signals [33]. Figures 14c and 14d show that the current drain of motor 2 was higher when the 10% stiffness modulation was performed in the soft mode of the VSSEA. Since the deflection angle of the output link was larger in soft mode, the current drain of motor 2 was increased by the higher disturbance torque of the VSAM, as shown in Eq. (11). Figures 13 and 14 show that the control signal of motor 2 was always bounded when stiffness modulations were conducted at equilibrium and non-equilibrium positions.

Figure 15 illustrates the energy cost of changing the stiffness compared to the potential energy stored in the springs of the VSAM when the 10% stiffness modulation was performed in the stiff and soft modes of the actuator. It shows that the energy cost of stiffness modulation is weakly coupled to the deflection of the output link. While lower energy was consumed by the VSAM at fixed stiffness configurations in both stiff and soft modes, the energy cost of stiffness modulation increased when the stiffness of the actuator was changed. The larger deflection of the output link in soft mode led to a higher energy cost of stiffness modulation, as shown in Fig. 15b. This result is expected because the disturbance torque gets larger as the deflection of the output link increases, as shown in Eq. (11).

## V Discussion

The proposed VSSEA enables us to conduct high-performance position and force control applications using conventional PID controllers. This provides several benefits in many different advanced robotic applications. For example, while the stiff mode of the VSSEA allows a cobot to perform high-precision position control tasks in industry, the soft mode of the actuator can boost safety in physical robot-environment interaction.
However, more advanced controllers should be synthesised for a smooth transition between position and force control tasks, i.e., between the stiff and soft modes of the actuator. In addition, more effort should be expended on optimal stiffness modulation. The experimental results show that a wide range of stiffness modulation can be achieved by simply controlling the position of the second motor. For example, while the stiffness of the output link is 21 Nm/rad when the actuator is in soft mode in Fig. 9(a), it is ~50 times higher, at 985 Nm/rad, in Fig. 9(b). This is an important feature of the proposed VSSEA, as it allows us to perform both precise position control and safe robot-environment interaction tasks using conventional PID controllers. Other important features of the VSSEA are its fast and low-energy-cost stiffness modulation capabilities. The transition from the softest mode to the stiffest mode can be achieved within a second, as illustrated in Fig. 13. Since the disturbance torque exerted by the VSAM on motor 2 is always bounded, and zero at equilibrium positions, a wide range of stiffness modulation can be performed using bounded control signals at equilibrium and non-equilibrium positions, as shown in Figs. 13 and 14. Therefore, the energy cost of stiffness modulation is low, as shown in Fig. 15. This can provide significant benefits to mobile robotic systems such as humanoids, quadrupeds, and exoskeletons.

With the proposed modular design approach, the VSSEA can be easily modified to meet the requirements of different robotic applications. For instance, i) the torque density can be increased using a higher gear ratio, ii) the stiffness characteristic can be tuned using a different number, material, and shape of leaf springs, and iii) faster stiffness modulation can be achieved using a ball screw with a higher lead. The proposed novel mechanical design eliminates the work-range limitation of variable stiffness actuators, as shown in Figs. 11(a) and 11(c). This allows us to apply the VSSEA to various robotic applications. By neglecting the small torsional motions of the leaf springs, the dynamic model of the VSSEA is obtained using the analogy of a mass-spring-damper system and the Euler-Bernoulli beam theory in Section III [44]. While the exact dynamic model and nonlinear Euler-Bernoulli beam theory are computationally expensive and ineffective in controller synthesis [43, 44], the model derived through simple beam theory is inaccurate when the deflection of the output link is large. Thus, more effort should be expended to obtain an effective dynamic model for the VSSEA.

## VI Conclusion

This paper proposes a new VSSEA that can perform fast and low-energy-cost stiffness modulation over a large range. These features can provide several benefits to robotic systems, such as easier motion control problems, longer battery life, and safer physical robot-environment interaction. It is experimentally shown in this paper that the proposed VSSEA allows us to conduct high-performance position and force control tasks using conventional PID controllers. It is also demonstrated that the stiffness of the actuator can be increased or decreased up to 50 times within a second, while the energy cost of stiffness modulation is very low. However, further research should be conducted to clarify how the VSSEA can contribute to robotic applications. To this end, we will apply the proposed actuator to advanced robotic applications, such as legged locomotion, in future studies.
2303.09802
TypeScript's Evolution: An Analysis of Feature Adoption Over Time
TypeScript is a quickly evolving superset of JavaScript with active development of new features. Our paper seeks to understand how quickly these features are adopted by the developer community. Existing work in JavaScript shows the adoption of dynamic language features can be a major hindrance to static analysis. As TypeScript evolves, the addition of features makes the underlying standard more and more difficult to keep up with. In our work we present an analysis of 454 open source TypeScript repositories and study the adoption of 13 language features over the past three years. We show that while new versions of the TypeScript compiler are aggressively adopted by the community, the same cannot be said for language features. While some experience strong growth, others are rarely adopted by projects. Our work serves as a starting point for future study of the adoption of features in TypeScript. We also release our analysis and data gathering software as open source in the hope it helps the programming languages community.
Joshua D. Scarsbrook, Mark Utting, Ryan K. L. Ko
2023-03-17T07:07:44Z
http://arxiv.org/abs/2303.09802v1
# TypeScript's Evolution: An Analysis of Feature Adoption Over Time

###### Abstract

TypeScript is a quickly evolving superset of JavaScript with active development of new features. Our paper seeks to understand how quickly these features are adopted by the developer community. Existing work in JavaScript shows the adoption of dynamic language features can be a major hindrance to static analysis. As TypeScript evolves, the addition of features makes the underlying standard more and more difficult to keep up with. In our work we present an analysis of 454 open source TypeScript repositories and study the adoption of 13 language features over the past three years. We show that while new versions of the TypeScript compiler are aggressively adopted by the community, the same cannot be said for language features. While some experience strong growth, others are rarely adopted by projects. Our work serves as a starting point for future study of the adoption of features in TypeScript. We also release our analysis and data gathering software as open source in the hope it helps the programming languages community.

TypeScript, JavaScript, Data Mining

## I Introduction

TypeScript [1] is a fast-evolving superset of JavaScript implementing static type checking. From 2020 to 2022, there have been ten releases, each bringing additional features and most adding new syntax to the language. With the rapid pace of evolution, the question becomes: how quickly are these features being picked up by the developer community? Are some features more popular than others? This question has already been asked about other programming languages such as JavaScript, Java, and Python. In JavaScript, the work by Richards et al. [2] explores the use of dynamic language features and concludes that production applications often use dynamic features, making static analysis challenging. Similar work has also been done in Java, where Parnin et al. [3] discovered that most uses of generics were covered by a small number of classes, but the usage varies between developers. TypeScript has an evolving standard without a formal specification. Our paper seeks to understand how quickly new features in TypeScript are adopted, to determine how important it is for tools to stay up to date with the latest release. We hypothesize that it is unnecessary for program analysis tools to support the entire language and that a smaller subset is sufficient for most applications. In this paper, we focus specifically on syntactic features (features implemented in the Abstract Syntax Tree without modifying language semantics) introduced by TypeScript versions between 4.0 and 4.9 in popular TypeScript libraries and applications. TypeScript also sees regular improvements to type inference and language features that are expressed through the type checker. These features are not a focus for our study. In this paper, we aimed to answer three research questions about the adoption of TypeScript features:

* **(RQ1)** What are the most popular features recently introduced in TypeScript?
* **(RQ2)** How quickly are new TypeScript features adopted by projects that use TypeScript?
* **(RQ3)** How quickly are new TypeScript language versions adopted by projects that use TypeScript?

Results are presented in Section III. In this paper, we contribute:

* A dataset of current popular TypeScript repositories collected from GitHub [4].
* An open source framework for TypeScript feature/version adoption studies.
* The first study of the rate of language feature/version adoption for TypeScript.
* Recommendations for how important it is for tools to adopt new language features in TypeScript.

Our paper is organized into four further sections. We start with our methodology for analysis (Section II) before presenting our results (Section III). We then make a brief review of related work (Section IV) before concluding with a discussion including some future research directions (Section V).

## II Methodology

We ran our study on top-starred/rated repositories containing TypeScript code on GitHub. We extracted all commits between 2020 and 2022 (inclusive) and extracted a series of boolean flags indicating the usage of each language feature. The analysis code and data sets used for this analysis are available in our repository.1 Footnote 1: See https://github.com/Vblitz/jsdata_msr.

### _Dataset_

We started by downloading a list of the top 500 TypeScript repositories from GitHub. The repositories are sorted according to the number of users that have starred the repository. These repositories include code in languages besides TypeScript, but only TypeScript is considered in our paper. We collected the list of repositories on January 4, 2023 and included the list as part of our dataset. Our analysis includes all commits attached to a given repository. These projects often use feature branches, which may include a feature well before it is released on the main branch. For all calculations we used the date the feature first turned up in the repository rather than the date it was included in a release. Of those 500 repositories, 23 had no commits extracted, and an additional 23 recorded no versions of TypeScript. Therefore, there are 454 repositories with at least one version of TypeScript recorded.

### _Analysis_

Our pipeline consists of an open source program written in Go [5] that extracts every unique TypeScript file from every commit in each repository. This includes all branches and all tags. We only consider commits made between January 1st 2020 and December 31st 2022 inclusive. We filtered by dates and only selected TypeScript features released between 2020 and 2022, because including commits outside this time span would not yield useful results. In our dataset of 454 repositories, 87% contain fewer than 1000 TypeScript files. We consider 1,325,810 total commits in our analysis. Git commits contain multiple dates, such as when the commit was authored versus when the commit was committed. For our analysis we chose the latest possible date included in the commit. Extracted TypeScript files are parsed by TypeScript and usage of different language features is detected according to their presence in the Abstract Syntax Tree (AST). With extensive caching and duplicate detection, the entire analysis takes approximately one hour.

### _Version Detection_

We parse the package.json file in the root of the repository to detect the TypeScript version from the installed dependencies.

### _Feature List_

We focused on syntactic features which are exposed in the AST exported by TypeScript. We chose to focus on features released in the last three years (between 2020 and 2022). TypeScript versions are released as a Beta and a Release Candidate before they are formally released. In our paper, we consider the full release to be Day Zero, as listed below.
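The bookkeeping behind this analysis reduces to, per repository, reading the TypeScript version from package.json and expressing the date at which a version (or feature) first appears relative to its release date (Day Zero). The sketch below is a simplified Python illustration of that logic, not the Go pipeline released with the paper; the example input and the Day Zero date are placeholders.

```python
# Simplified sketch of the version detection and "days since Day Zero" bookkeeping.
import json
import re
from datetime import date

def typescript_version(package_json_text):
    """Read the TypeScript version range from a repository's package.json."""
    pkg = json.loads(package_json_text)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    spec = deps.get("typescript")
    if spec is None:
        return None
    match = re.search(r"\d+\.\d+", spec)   # e.g. "^4.7.2" -> "4.7"
    return match.group(0) if match else None

def days_since_release(first_seen, day_zero):
    """Negative values mean the project adopted a Beta/RC before the full release."""
    return (first_seen - day_zero).days

# Example with a hypothetical repository and an illustrative Day Zero date.
pkg_text = '{"devDependencies": {"typescript": "^4.7.2"}}'
print(typescript_version(pkg_text))                              # "4.7"
print(days_since_release(date(2022, 5, 1), date(2022, 5, 24)))   # -23 (pre-release adoption)
```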
Projects adopting betas will show up as adopting features or versions before they were formally released (a negative number of days relative to Day Zero). TypeScript 4.8 and 4.6 did not make syntactic changes to the language and only included semantic and inference changes. Table I lists the 8 versions and 13 features in our study. This is not an exhaustive list of features introduced, since we excluded features requiring type inference or type checking to identify. Our dataset includes a few special repositories that have different characteristics from other projects:

* TypeScript [1]: The source code of TypeScript is included as part of this analysis.
* Babel [6]: Babel is a compiler for JavaScript. It includes both ECMAScript [7] features and some TypeScript features since it has support for TypeScript syntax.

## III Results

To address our research questions, we started with the adoption of TypeScript versions before moving on to the adoption of TypeScript features.

### _RQ1: Feature Adoption Rating_

We can categorize the adoption of features into two major groups based on their adoption slopes. Group one contains \(f_{4}\), \(f_{9}\), \(f_{11}\), \(f_{7}\), \(f_{12}\), \(f_{10}\) and includes features that have more than 20 repositories adopting them within one year after release. The most popular feature in this dataset is \(f_{4}\) (type modifiers on import) and the second most popular is \(f_{9}\) (Template Literal Types). Both of these features were necessary to fill some gaps in the TypeScript language. Type modifiers ensure that imports are used only for type definitions, and are erased when the code is compiled. This allows including other libraries and files without including runtime dependencies. It can also help to break import loops in some cases where a module needs a type from a module that imports it. Template Literal Types similarly give more flexibility in how types are described and open up new avenues of meta programming. Group two contains \(f_{2}\), \(f_{1}\), \(f_{8}\), \(f_{3}\), \(f_{0}\), \(f_{5}\), \(f_{6}\) and includes features that have fewer than 20 repositories implementing them one year after release. \(f_{6}\) (static blocks in classes) has the lowest adoption rate in our dataset, with only four repositories adopting it in a year. Unlike \(f_{4}\) and \(f_{9}\), static blocks have equivalents in existing code, so they are only used in niche circumstances.

### _RQ2: Feature Adoption_

Figure 1 shows the adoption curve of each of the TypeScript features we looked at. Unlike Figure 2, we can immediately see two major differences. Different features have significantly different adoption rates, with some reaching high levels of adoption and some barely being adopted at all. Secondly, all features have mostly linear adoption rates. Features were detected across any file ending with .ts that can successfully be parsed as TypeScript. This means features that are only used in unit tests are also included here. In addition, we include every branch of the repository, so some features are adopted first in a feature branch before being included in the main branch. Both Babel and TypeScript were major outliers in the feature adoption rates. Table II focuses on these two repositories. Babel adopted some features well before TypeScript introduced them, and TypeScript adopted all features before they were released. The behavior of TypeScript is easy to explain.
A high coverage rate for unit tests means TypeScript starts adopting features as soon as they are implemented in the repository. Some features are part of the ECMAScript standard rather than TypeScript, so Babel may include these features before TypeScript adds support for them. That explains why Babel adopts some features well before TypeScript.

### _RQ3: TypeScript Versions_

Figure 2 shows the adoption curve of each of the TypeScript versions we pulled features from. We can see here that all versions follow a similar adoption curve, with an initial slow adoption of pre-release versions, then a rapid adoption in the three months after release, followed by slower late adoption by a small number of repositories. Roughly 1/3 of projects (160 out of 454) adopt the latest release within the first three months after release (except for TypeScript 4.9, which was released less than 50 days before data collection ended). These fast adoption curves are not surprising, since JavaScript/TypeScript projects regularly update dependencies to the latest revision and TypeScript releases do not introduce significant breaking changes. Most adoption happens in the first three months after release (roughly 35% of projects in our dataset), with a small tail at the end for projects that update after a new version is already released. At the time of writing, TypeScript releases new versions every three months, so some projects may not have adopted a version before the new version is released. These results are aggregated over 454 different repositories, and not all repositories are accounted for here. This comes down to two major reasons. First, while some repositories (Visual Studio Code, for example) adopt new versions within a few days of release, some take a few months to adopt new versions or do not adopt them at all. The adoption averages out to the same curve, though. The other reason is that we detected the TypeScript version using the package.json file. The maximum adoption for any version is 185 out of 454 (46 have no TypeScript version recorded). The reason is that not all repositories adopt all versions of TypeScript, and most skip versions as they do not regularly update. A few repositories adopted versions before they were formally released. TypeScript depends on itself but overrides that with the local version. It therefore adopts new versions before they are released.

## IV Related Work

Some existing work has already investigated the adoption of language features in JavaScript [2, 8, 9], Java [3, 10, 11], and Python [12, 13]. JavaScript allows for self-modifying code and code generated and evaluated at runtime. These features make tracking the control flow over a program's execution difficult, so some previous works exclude them from analysis. The work by Richards et al. [2] questions this approach by looking at the prevalence of these features in production code. Due to the nature of those features, most analysis there is based on dynamic analysis rather than the static analysis we use in our work. A large amount of work in this area has been done in Java [3, 10, 11]. Firstly, the work by Parnin et al. [3] discovered that most uses of generics were covered by a small number of classes, but the usage varies between developers. The work by Dyer et al. [10] broadened this by looking at 31,432 Java projects on SourceForge [14] and studying the adoption of 18 language features introduced in three versions of Java. The work by Peng et al.
[12] performs a similar study to our work, focusing on Python projects instead of TypeScript projects. They perform a smaller study on 35 different projects across a range of sectors. They make the interesting observation that larger projects tend to use less involved language features like safety checks rather than more advanced features like diamond inheritance. This lines up with our outcome, since the most popular features we observed increase safety and the least popular feature (static blocks in classes) can make control flow more difficult to read. Another work by Yang et al. [15] follows a similar direction to the work by Richards et al. [2], looking at the impact of dynamic features on static analysis of Python code. The work by Cristiani and Thiemann [16] includes a brief analysis of feature usage in DefinitelyTyped [17]. The work is limited to types in type declaration files, whereas our study looks at TypeScript source code. The static analysis field leverages such studies to inform the language features it implements support for. For instance, the work by Rastogi et al. [18] seeks to improve the safety of TypeScript programs and uses a smaller subset of TypeScript called "Safe TypeScript". This work was done prior to the release of TypeScript 1.1 (October 6, 2014) and lacks many of the features introduced afterwards. In addition, the work by Feldthaus and Møller [19] uses a version of the TypeScript language to detect faults in JavaScript interfaces. Like the work by Cristiani and Thiemann [16], it focuses on declaration files rather than TypeScript source code. Overall, the related work covers two different kinds of study. Some work [2] uses dynamic analysis to study the prevalence of dynamic features. The other group of studies [12] looks at the usage of features across different types of project. A further line of work [16, 18, 19] uses static analysis to perform code analysis on TypeScript language features. Our work extends the second line of work by looking at a series of different versions.

## V Discussion & Concluding Remarks

The answer to **RQ1** is that the most popular new language features are type modifiers on imports and template literal types. While type modifiers solve an existing issue of unintended side effects from imported modules, template literal types give additional flexibility in how types are constructed. The answer to **RQ2** is more involved. Different features are adopted at different rates, which is an expected outcome. Some features are very niche and are only used by a small number of libraries. The unexpected outcome is that adoption rates are steady over time and no feature sees a large initial peak as developers race to adopt it. Our interpretation of this is that very few projects need a new feature, so they are adopted as developers learn about them and gradually utilize them in new code and in code rewrites.

Fig. 1: How quickly each TypeScript feature is adopted relative to one another. Note the release date of each feature, as some features have not been released for all 800 days.

Fig. 2: The adoption curves of different versions. Version 4.9 was released 50 days before the data collection ended (31st December 2022), so the data stops there. Adoption rates asymptote to 180 projects, which is around 40% of projects. Other projects jump versions, rather than adopting every version.

Finally, the answer to **RQ3** is straightforward.
Most projects adopt new versions of TypeScript quickly, with an expected long tail as the remaining projects update to new versions.

### _Conclusions_

We observed a simple adoption curve for language versions, with most adoption happening shortly after release and 1/3 of repositories updating before the next TypeScript version is released. However, the adoption of new language features into repositories is much more gradual. A project can update to a new version of TypeScript without changing its code at all, so without adopting any new features. So adopting a new language feature may require adopting a new TypeScript version, but not vice versa. We can draw the conclusion that, while a project has a feature available, it may not adopt it until much later. Returning to our overall goal of specifying a useful subset of TypeScript for program analysis tools, we can see that although new language versions are adopted quickly by the ecosystem (1/3 over 3 months), the adoption of new features is a lot more variable, with some features never being adopted outside of a few projects. This shows that it is important for tools to keep up to date with language versions, but it is less important to support all language features (e.g., Group 2 features are used by only a few projects).

### _Future Work_

Currently, our analysis focuses on syntactic changes to TypeScript, which misses improvements made to type inference and to the developer experience. It would be useful in future research to expand the list of features and look at semantic changes. Our paper focuses on the features introduced in the 4.x versions of TypeScript to make timely analysis possible. Future work could look at additional TypeScript versions. Additionally, it would be interesting to run our analysis on a wider body of repositories to see how the results change with less popular projects.
2304.13625
HDR-VDP-3: A multi-metric for predicting image differences, quality and contrast distortions in high dynamic range and regular content
High-Dynamic-Range Visual-Difference-Predictor version 3, or HDR-VDP-3, is a visual metric that can fulfill several tasks, such as full-reference image/video quality assessment, prediction of visual differences between a pair of images, or prediction of contrast distortions. Here we present a high-level overview of the metric, position it with respect to related work, explain the main differences compared to version 2.2, and describe how the metric was adapted for the HDR Video Quality Measurement Grand Challenge 2023.
Rafal K. Mantiuk, Dounia Hammou, Param Hanji
2023-04-26T15:32:04Z
http://arxiv.org/abs/2304.13625v1
HDR-VDP-3: A multi-metric for predicting image differences, quality and contrast distortions in high dynamic range and regular content ###### Abstract High-Dynamic-Range Visual-Difference-Predictor version 3, or HDR-VDP-3, is a visual metric that can fulfill several tasks, such as full-reference image/video quality assessment, prediction of visual differences between a pair of images, or prediction of contrast distortions. Here we present a high-level overview of the metric, position it with respect to related work, explain the main differences compared to version 2.2, and describe how the metric was adapted for the HDR Video Quality Measurement Grand Challenge 2023. Image Metric High Dynamic Range ## 1 Introduction High-Dynamic-Range Visual-Difference-Predictor version 3, or HDR-VDP-3 is an image metric that can address multiple applications: the prediction of image quality, visible differences, and contrast distortions. If we want to optimize for image quality, for example, by selecting the right resolution and compression configuration for video streaming, we want to use _a full-reference quality metric_ that compares a distorted image/video (e.g., decoded frame) to its reference and assesses the overall magnitude of introduced distortion. However, if we want to ensure that the introduced distortions are invisible, for example, for visually lossless compression, we may want to use _a visibility metric_, which predicts the probability of detecting differences in each part of the image. In other cases, we may want to test an image processing algorithm that modifies an image but should not introduce disturbing artifacts. An example is tone mapping, where an image tone mapped for a low-dynamic range display must be different from the input high-dynamic-range image, but it should preserve the general visibility of the contrast. For such applications, we want to use _a contrast distortion metric_. HDR-VDP-3 addresses all three applications using the same core visual model but uses different parameters and final processing stages to provide predictions for each application. This short paper is not meant to be a complete description of the metric but a rather high-level overview with references to the relevant papers, which provide further details. HDR-VDP-3 has the same processing pipeline as HDR-VDP-2, which is explained in detail in [10]. In this short paper, we first position HDR-VDP-3 with respect to other metrics (Section 2), explain the list of tasks it can perform (Section 3), give a high-level overview (Section 4), itemize the differences with respect to HDR-VDP-2 (Section 5) and finally explain how the metric was adapted to perform quality assessment for the WACV HDR Video Quality Measurement Grand Challenge (Section 6). ## 2 Related metrics HDR-VDP-3 is the third major iteration of the metric, which was originally inspired by seminal works on visibility and detection metrics by Daly 1993, Lubin 1995, and Watson 2000. The original HDR-VDP-1 [10] was an extension of the VDP by Daly 1993. The extension incorporated changes allowing to compare high dynamic range images: the models of glare, photoreceptor response [10] (precursor of the PQ function later used for HDR coding), and contrast sensitivity, which adapts to local luminance. Similar to the VDP, this metric focused on predicting visible difference maps, and it did not provide single-value quality predictions. 
This was addressed in HDR-VDP-2 (Mantiuk et al., 2011), which was a major redesign of the original VDP metric: it incorporated separate pathways for rod and cone vision, replaced cortex transform with steerable pyramids (for performance and accuracy), incorporated a new contrast masking model with intra- and inter-channel masking, and provided the predictions of both visual difference maps and image quality. But probably the most significant difference was that HDR-VDP-2 was extensively recalibrated and tested on a large range of basic psychophysical detection and discrimination data. When HDR-VDP-2 was released, its quality predictions could be calibrated only on standard dynamic range image datasets (TID2008 and LIVE) as no HDR quality datasets were available. This was rectified in HDR-VDP-2.2 (Narwaria et al., 2015), which used two new HDR datasets in addition to TID2008 and CSIQ to recalibrate quality predictions. A few important works led to the development of HDR-VDP-3. First, new components were added to simulate the effect of aging on the visual system (Mantiuk and Ramponi, 2018): the age-dependent model of glare, crystalline lens aging and senile miosis (reduced pupil dilation in an older eye). Second, a series of new measurements on an HDR display let us model the effect of adaptation to local luminance (Vangorp et al., 2015). Finally, our effort to combine multiple HDR and SDR datasets and bring them to the same quality scale (Perez-Ortiz et al., 2020) let us recalibrate the metric on the largest HDR image quality dataset (of over 4000 images) -- UPIQ (Mikhailiuk et al., 2022). Other major changes are discussed in Section 5. A critical component of the metric is the perceptually uniform encoding of luminance. Such encoding was shown to be an effective method of representing and compressing HDR video (Mantiuk et al., 2004, 2006), and its refined version was later standardized as a Perceptual Quantizer (SMPTE ST 2084) (Miller et al., 2013). However, we have also demonstrated that such perceptually uniform (PU) encoding can be used to adapt existing SDR quality metrics to HDR images (Aydin et al., 2008; Mantiuk and Azimi, 2021). Parallel to the work on visibility and quality predictions, we also worked on predicting contrast distortions caused by tone-mapping (Aydin et al., 2008). A modern implementation of this metric is one of the "tasks" of HDR-VDP-3 (Section 3). Both us (Wolski et al., 2018; Ye et al., 2019) and others (Banterle et al., 2020) made an attempt to replace existing HDR-VDP-2 and HDR-VDP-3 metrics with deep-learning architectures. Neural networks bring the advantage of potentially faster processing speeds, no-reference predictions (Banterle et al., 2020), higher accuracy, and easier re-calibration. Although deep-learning metrics show promising results in selected applications, such as visually lossless coding (Ye et al., 2019), they are not explainable and often suffer from over-fitting, as image and quality datasets are typically small in size, and the measurements tend to be noisy. More recently, we released Foveated Video VDP (FovVideoVDP) (Mantiuk et al., 2021), a metric intended to predict quality in video, assuming a gaze point (foveated viewing) or assuming that the user can look everywhere (as in traditional video quality metrics). FovVideoVDP is a simplified version of HDR-VDP-3, which adds temporal processing (sustained and transient visual channels) and a contrast sensitivity function that accounts for the distance from the gaze location (eccentricity). 
The new implementation runs on a GPU (both in Matlab and Python/PyTorch) and offers much faster processing speeds. However, because the new metric has not been calibrated on regular (non-foveated) videos and contains multiple simplifications, it provides slightly worse accuracy of predictions.

## 3 HDR-VDP-3 tasks

HDR-VDP-3 acts as a predictor of different quantities, depending on the "task" parameter. The tasks include:
0pt,topsepsep=0pt,topsep=0pt,topsepsep=0pt,topsepsep=0pt,topsepsepsep=0pt,topsepsep=0pt,topsepsep=0pt,topsepsep=0pt,topsepsepsep=0pt,topsepsepsep=pt,topsepsepsep=pt,top * -- the detection task predicts the probability of detecting the difference between two images (single-valued) and was calibrated on the same datasets as HDR-VDP-2 -- basic psychophysical detection and discrimination data for Gabor patches, sinusoidal gratings, and discs. This task should provide better accuracy for simple stimuli, but potentially lower accuracy for complex images. * -- a contrast distortion metric, which is a modern implementation of the dynamic-range-independent visual quality assessment (Aydin et al., 2008). It predicts the maps which indicate in which image parts the contrast will be lost and in which parts it will be (over-)enhanced. The predictions are different from those of (Aydin et al., 2008) as the original contrast-independent metric was based on HDR-VDP-1. ## 4 HDR-VDP-3 overview A high-level overview of the metric is shown in Figure 1. As any full-reference metric, HDR-VDP-3 takes as input a pair of test and reference images. However, those images need to be calibrated in absolute radiometric (or photometric) units by the _display model_ since HDR-VDP-3 relies on models of low-level human vision, which operate on photometric units. The _retinal and optical pathway_ simulates optics of the eye (glare), age-adaptive lens opacity, pupil, photoreceptor (cones and rods) spectral response, local adaptation and luminance masking. The resulting retinal images are then decomposed into multiple bands of spatial frequencies and orientations. The most important part of the metric is the model of neural contrast sensitivity and contrast masking, which predicts the ability of the visual system to detect and discriminate patterns. The result of that stage is then passed to one of the three "heads" of the metric: one that predicts visibility maps, one that predicts single-values quality and one that predicts contrast distortion maps. The description of those stages is beyond the scope of this short paper, but further details can be found in (Mantiuk et al., 2011; Aydin et al., 2008; Mantiuk and Ramponi, 2018). Figure 1: The processing diagram of the HDR-VDP-3. The metric requires the images to be calibrated in the absolute radiance or luminance quantity emitted from a display (HDR or SDR). The physical image representation (radiance map) is processed by the optical and retinal pathway which simulates the eye’s optics and photoreceptor responses. The resulting retinal images are then decomposed into multiple scales, each isolating a band of spatial frequencies and orientations. The core component of the metric is the model of contrast masking and (neural) contrast sensitivity, which predicts the visibility of the differences between a pair of images. The multi-band representation from those stages is then fed to one of the different “heads”, responsible for the prediction of visibility, quality and contrast distortions. ## 5 Differences with respect to HDR-VDP-2 Compared to version 2.2 of the metric, HDR-VDP-3 contains the following major changes: * It requires specifying a prediction task, as explained in Section 3. * It includes a contrast distortion metric, which is a modern implementation of [1]. * The contrast sensitivity function was refitted to newer data (from [12]). * The model of glare (MTF) can be disabled or switched to the CIE99 Glare Spread Function [21]. 
* The metric now accounts for age-related effects, as described in [13]. * The metric includes a model of local adaptation from [22]. * The tasks "side-by-side" and "flicker" have been calibrated on large datasets from [23, 24]. * The task "quality" has been recalibrated using a new UPIQ dataset [14] with over 4000 SDR and HDR images, all scaled in JOD units. * The code now includes multiple examples of how to use the metric in different scenarios. * The code has been reorganised and tested to run on a recent version of Matlab (2022a) as well as GNU Octave. * The code runs on a GPU (CUDA) in Matlab. ## 6 The submission for WACV HDR Video Quality Measurement Grand Challenge HDR-VDP-3 was submitted to the WACV HDR Video Quality Measurement Grand Challenge 2023. The organizers of the challenge provided a new HDR video quality dataset -- LIVE HDR [25]. For the challenge, we used the _quality_ task of HDR-VDP-3. We did not consider the temporal aspect of the videos. The metric was applied separately on selected frames, and the scores were averaged to obtain the final video quality score. The FFmpeg program [15] was used to decode every 30th frame of each video and store it as a PNG file. Furthermore, a display model consisting of the inverse PQ function [14] was used to transform the display-encoded pixel values into radiometric (or photometric) units as: \[L(x,y)=PQ^{-1}(I(x,y))+E_{amb}\frac{k_{refl}}{\pi}\,, \tag{1}\] where \(I(x,y)\) is the PQ-encoded RGB frame, \(L(x,y)\) is the frame in absolute linear RGB (BT.2020) units (photometric), \(E_{amb}\) is the room ambient illumination, and \(k_{refl}\) is the display reflectivity. The room ambient illumination and the display reflectivity were set to \(200\) and \(0.005\), respectively. These terms model the effect of ambient light, which was reported for the experiment. It should be mentioned that the HDR-VDP-3 _quality_ task has not been (re-)calibrated on the LIVE HDR training dataset. We used the original model, calibrated on the UPIQ dataset. Regardless, the metric was able to correlate well with the mean-opinion scores. Although the metric was originally implemented in Matlab, we adapted the code so that it can be run in GNU Octave. The snippet of the code used to run the metric can be found in examples/hdr_video_pq_eotf.m in release 3.0.7 of the metric. ## Acknowledgments This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement N\({}^{\circ}\) 725253-EyeCode).
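For readers who want to reproduce the display-model step of Eq. (1) outside of Matlab, the following is a minimal Python sketch, not the code shipped with the metric (that lives in examples/hdr_video_pq_eotf.m). The function names and the peak-luminance argument are our own; only the standard SMPTE ST 2084 (PQ) constants and the reported values of E_amb and k_refl are taken as given.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_linear(V, L_peak=10000.0):
    """Inverse PQ EOTF: display-encoded values V in [0, 1] -> absolute cd/m^2."""
    V = np.clip(np.asarray(V, dtype=np.float64), 0.0, 1.0)
    Vp = V ** (1.0 / M2)
    return L_peak * (np.maximum(Vp - C1, 0.0) / (C2 - C3 * Vp)) ** (1.0 / M1)

def display_model(I_pq, E_amb=200.0, k_refl=0.005):
    """Eq. (1): absolute linear RGB plus the reflected-ambient term E_amb * k_refl / pi."""
    return pq_to_linear(I_pq) + E_amb * k_refl / np.pi

# Example: a mid-grey PQ-encoded frame
frame = np.full((4, 4, 3), 0.5)
print(display_model(frame)[0, 0])
```

Such a frame-by-frame conversion can then be fed to the metric and the per-frame scores averaged, as described above for the challenge submission.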
2303.00198
Convolutional Visual Prompt for Robust Visual Perception
Vision models are often vulnerable to out-of-distribution (OOD) samples without adapting. While visual prompts offer a lightweight method of input-space adaptation for large-scale vision models, they rely on a high-dimensional additive vector and labeled data. This leads to overfitting when adapting models in a self-supervised test-time setting without labels. We introduce convolutional visual prompts (CVP) for label-free test-time adaptation for robust visual perception. The structured nature of CVP demands fewer trainable parameters, less than 1\% compared to standard visual prompts, combating overfitting. Extensive experiments and analysis on a wide variety of OOD visual perception tasks show that our approach is effective, improving robustness by up to 5.87% over several large-scale models.
Yun-Yun Tsai, Chengzhi Mao, Junfeng Yang
2023-03-01T03:06:29Z
http://arxiv.org/abs/2303.00198v2
# Self-Supervised Convolutional Visual Prompts ###### Abstract Machine learning models often fail on out-of-distribution (OOD) samples. Visual prompts emerge as a light-weight adaptation method in input space for large-scale vision models. Existing vision prompts optimize a high-dimensional additive vector and require labeled data for training. However, we find this paradigm fails on test-time adaptation when labeled data is unavailable, where the high-dimensional visual prompt overfits to the self-supervised objective. We present convolutional visual prompts for test-time adaptation without labels. Our convolutional prompt is structured and requires fewer trainable parameters (less than 1% of the parameters of standard visual prompts). Extensive experiments on a wide variety of OOD recognition tasks show that our approach is effective, improving robustness by up to 5.87% over a number of large-scale model architectures. ## 1 Introduction Deep models surpass human performance when tested on in-distribution data, yet their performance drops sharply when encountering unforeseen out-of-distribution (OOD) data at test time, such as unexpected corruptions and distribution shifts (Hendrycks and Dietterich, 2019; Hendrycks et al., 2019; Hendrycks et al., 2021; Mao et al., 2022). This vulnerability raises serious risks when such machine learning models are deployed in practice, especially for safety-critical applications and high-stakes tasks (Pei et al., 2017). Prior works have studied how to improve generalization to OOD data at training time (Saenko et al., 2010; Lu et al., 2020; Saito et al., 2018; Ganin and Lempitsky, 2015; Long et al., 2015; Liu et al., 2020; Mao et al., 2021; Li et al., 2021). Visual prompting emerges as an efficient and lightweight way to adapt the model at training time without modifying the model (Elsayed et al., 2018; Tsai et al., 2020; Bahng et al., 2022) (previously also called _adversarial reprogramming_). In contrast to finetuning the parameters of the model, prompting can modify the original task of the model by providing context in the input space. It requires fewer OOD samples and simplifies model version management in practical applications. However, the requirement for labeled OOD samples remains significant; therefore, these methods cannot handle distribution shifts that are unseen until test time. Recent work (Mao et al., 2021) defends against unseen attacks at test time by repairing the adversarial inputs with "reversal vectors" - high-dimensional prompt vectors directly added to inputs. It generates the prompts by minimizing a self-supervised loss, requiring no prior labelled OOD samples. Unlike adversarial attacks that damage arbitrary pixels by arbitrary amounts (within a given bound), however, distribution shifts are often caused by natural factors such as weather and lighting conditions, so they are highly structured, a mismatch for the unstructured high-dimensional vector prompts. Our experiments show (Section 4) that unstructured prompts, when applied to reversing OOD samples, overfit to the self-supervised objective and improve performance only minimally. Figure 1: Self-supervised convolutional prompt for adaptation. A clean image has a low self-supervision loss, reflected by the distance in the contrastive embedding. When an image is corrupted (e.g., a stop sign in foggy weather), the self-supervised loss increases. After applying our convolutional prompt, we can instruct the model to adapt and produce a small self-supervision loss.
This paper presents **Convolutional Visual Prompt (CVP)**, a novel prompting method for OOD samples. In CVP, prompts are convolutional kernels, highly structured with only a small number of tunable parameters - less than 1% of the number of tunable parameters in typical unstructured visual prompts (see Figure 1 for illustration). This structure makes CVPs extremely lightweight. We leverage CVPs to reverse distribution shifts at test time, requiring no labelled OOD samples and avoiding overfitting to the self-supervised objective. Visualizations and empirical experiments show that CVP significantly improves robust prediction on four established benchmarks. On ImageNet-Rendition, ImageNet-Sketch, and 15 types of unforeseen corruptions at five severity levels for both CIFAR-10 and ImageNet, our method improves robustness by up to 5.87%, including on top of the popular ResNet architecture and the state-of-the-art vision-language model CLIP (Radford et al., 2021). Since our method modifies the input space, it also complements established test-time weight adaptation methods. ## 2 Related Work **Domain Generalization.** Various types of out-of-distribution data, which can lead to a severe drop in performance for machine learning models, have been widely studied in recent works (Hendrycks and Dietterich, 2019; Hendrycks et al., 2021; Recht et al., 2019; Mao et al., 2022b, 2021a, 2021b). Domain generalization (DG) aims to make models robust to OOD samples without access to target-domain data at training time. Existing adaptation methods (Zhou et al., 2021; Dou et al., 2019; Li et al., 2018; Zhou et al., 2020; Zhang et al., 2021; Sagawa et al., 2019; Mao et al., 2021a, 2021b; Wang et al., 2020; Sun et al., 2019) have shown large improvements in robustness on OOD datasets. Test-time adaptation has emerged as a new paradigm for robustness to distribution shift (Mao et al., 2021; Sun et al., 2019; Zhang et al., 2021); most of these methods update the weights of the deep models. BN (Sagawa et al., 2019; Li et al., 2016) updates the model using batch normalization statistics, while TENT (Wang et al., 2020) adapts the model weights by minimizing the conditional entropy on every batch. TTT (Sun et al., 2019) trains the model with an auxiliary self-supervision model for rotation prediction and utilizes the SSL loss to adapt the model. MEMO (Zhang et al., 2021) augments a single sample and adapts the model with the marginal entropy over the augmented copies. Test-time transformation ensembling (TTE) (Perez et al., 2021) augments the image with a fixed set of transformations and ensembles the outputs through averaging. The only method that does not update the model is that of Mao et al. (2021b), which modifies the pixels of adversarial samples to minimize the contrastive loss, but it is not tested on distribution shift. **Visual Prompting.** Prompting was proposed in natural language processing to provide context that adapts the model to specific tasks (Brown et al., 2020). Leveraging this idea, visual prompts (Yao et al., 2021; Jia et al., 2022) adapt the model with a small number of trainable parameters in input space for vision tasks (Dosovitskiy et al., 2020) and foundation models (Radford et al., 2021).
Others proposed to prompt the samples with adversarial perturbations to re-purpose the model for target classification tasks, known as _adversarial reprogramming_ (Elsayed et al., 2018; Kloberdamz et al., 2021; Tsai et al., 2020; Yang et al., 2021), sharing the same idea as visual prompting. Black-box adversarial reprogramming (Tsai et al., 2020) reprograms a black-box model for downstream classification tasks with limited data. V2S (Yang et al., 2021) reprograms a speech recognition model for time-series classification tasks. Robust visual prompts (Mao et al., 2022a) are tuned at training time to improve the adversarial robustness of the model under attack, yet this has not been explored for domain generalization, where the distribution shifts naturally. **Self-supervised learning (SSL).** SSL can learn effective representations from images without annotations (de Sa, 1994; Chen et al., 2020; Caron et al., 2020; Hendrycks et al., 2019). Prior works have shown that representations learned from different pretext tasks (e.g., jigsaw puzzles (Noroozi and Favaro, 2016), rotation prediction (Gidaris et al., 2018), image colorization (Larsson et al., 2016) and deep clustering (Ji et al., 2019)) can be leveraged for several downstream tasks such as image classification (Chen et al., 2020), object detection (Doersch et al., 2015) and test-time domain adaptation (Sun et al., 2020). Another well-known branch of SSL is contrastive learning, which aims to group the features of transformed views of the same sample and push them away from samples with dissimilar features (Chen et al., 2020; He et al., 2020; Park et al., 2020). Some methods (Hendrycks et al., 2019; Sehwag et al., 2021; Zeng et al., 2021) use SSL for outlier detection, which aims to learn generalizable out-of-distribution features and reject such samples at test time. In contrast to those methods, which require information from the targeted OOD data distribution during the training phase, our method generalizes to unseen test domains. ## 3 Test-Time Convolutional Prompting ### Learning Intrinsic Structure with a Self-Supervision Task The standard way to improve robustness to out-of-distribution data is to make training robust: the training algorithm anticipates the possible corruptions and distribution shifts at inference time and trains on them (Hendrycks et al., 2021). Anticipating the test-time shift is a strong assumption that is often unrealistic. Instead, we adopt the test-time robustness approach, where the model can dynamically adapt to unforeseen corruptions and unknown shifts at test time. The ideal case for adapting the model at inference time would be to have the ground-truth label for the target task, yet this is impossible given that the test data is not labeled. Motivated by the fact that self-supervised pre-training can significantly improve the performance of the downstream classification task, we argue that the right self-supervision task must share rich information with the target classification task. We propose to leverage the self-supervised task as inherent supervision to adapt the model. In our method, the self-supervised task is used as a proxy that captures similar information and structure to the target task. There is a large body of literature studying what makes a good self-supervised task for representation learning at training time.
For example, jigsaw puzzles (Noroozi and Favaro, 2016), rotation prediction (Gidaris et al., 2018), image colorization (Larsson et al., 2016) and deep clustering (Ji et al., 2019) can be leveraged for several downstream tasks such as image classification (Chen et al., 2020), object detection (Doersch et al., 2015) and test-time domain adaptation (Sun et al., 2020). **Our choice of self-supervision task:** For visual recognition, a popular self-supervised objective is contrastive learning, which learns a representation that maps the features of transformed views of the same image close to each other. This is formally defined as: \[\mathcal{L}_{s}(x)=-\mathbb{E}_{i,j}\left[y_{i,j}^{s}\log\frac{\exp(\cos(z_{i},z_{j})/\tau)}{\sum_{k}\exp(\cos(z_{i},z_{k})/\tau)}\right], \tag{1}\] where \(z\) are the contrastive features of \(x\) extracted from a pre-trained backbone. \(y_{i,j}^{s}\) is a binary indicator of positive and negative pairs: if \(y_{i,j}^{s}\) is 1, the \(i\)-th feature \(z_{i}\) and the \(j\)-th feature \(z_{j}\) come from the same sample \(x\); otherwise, they come from different samples. We denote by \(\cos(\cdot,\cdot)\) the cosine similarity and by \(\tau\) the temperature. We optimize the parameters of the SSL model \(\mathcal{C}\) using the contrastive loss. The objective function for training is defined as \[\min_{\theta_{\mathcal{C}}}\mathop{\mathbb{E}}_{x\sim\mathcal{X}_{s}}\left[\mathcal{L}_{s}(x)\right]\] where \(\mathcal{X}_{s}\) is the source domain data for training. In our scenario, we use only clean samples drawn from the non-corrupted dataset. Since the self-supervised task is trained at training time, it performs well on data drawn from the same distribution as the training data. We find that the performance of the self-supervised task drops substantially when the distribution is shifted at test time (see Figure 2). This suggests that the information that is useful for the self-supervised task is corrupted. Our hypothesis is that there is shared information between the classification task and our defined self-supervised task, which is corrupted due to the distribution shifts. In this paper, we propose to adapt the model to minimize the self-supervised loss at inference time. In doing so, we can recover the information that was corrupted due to distribution shifts. ### Test-time Adaptation for Vision Models One of the key advantages of adapting the vision model at inference time is that it can adapt online to the unique characteristics of the new distribution. Thus, we would like to make a lightweight adjustment to the input so that the model can quickly adapt to the novel data points without too much computation and memory overhead. As vision models are increasingly deployed on edge devices, this is extremely important, given the limited computational resources. Figure 2: We show the histogram of the contrastive loss distribution on different corruption types. The blue region represents the loss distribution of the original samples. The yellow, green, and red regions represent the loss distribution of corrupted samples at different severities (1, 3, and 5). Our plot shows the large difference in SSL loss distribution between original and corrupted samples.
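For concreteness, the snippet below is a minimal PyTorch-style sketch of the contrastive objective in Eq. (1). The batch construction, feature dimensionality and function names are illustrative assumptions rather than the authors' implementation; the features are assumed to come from the pretrained backbone and SSL head described above.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, tau=0.1):
    """Minimal InfoNCE-style loss corresponding to Eq. (1).

    z1, z2: (N, D) contrastive features of two augmented views of the same
    N images; row i of z1 and row i of z2 form a positive pair.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / tau                                # cosine similarity / temperature
    n = z1.shape[0]
    # a feature must never be treated as its own positive or negative
    sim.fill_diagonal_(float("-inf"))
    # index of the positive partner for every row
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(sim.device)
    return F.cross_entropy(sim, pos)

# toy usage with random features standing in for backbone outputs
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_loss(z1, z2).item())
```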
Several established methods exist to adapt vision models, including foundation models. **Finetuning (FT)** is a standard way to adapt the deep model. It optimizes all or part of the model parameters, which is heavy and requires storing a copy of the model to re-initialize and update. Prior work shows that this method is effective when finetuned on supervised tasks, yet adapting with a self-supervised task on a few examples remains under-explored. We discuss the finetuning method using the self-supervised task. **Partial Finetuning (PFT)** is another way to adapt the model at inference time by only changing the statistics of the batch-normalization layers. This method assumes that the distribution drift appears in the mean and standard deviation of the test data and can be corrected through test-time adaptation. The closest existing works are BN (Sagawa et al., 2019), TENT (Wang et al., 2020) and MEMO (Zhang et al., 2021). TENT updates the BN statistics but needs to continue training on the same distribution. MEMO only requires a single test data point, yet the algorithm is extremely slow due to the whole-model update and heavy augmentations. Here, we adapt batch normalization through our proposed contrastive-learning-based self-supervised loss. **Visual Prompts (VP)** have emerged as a lightweight way to adapt pre-trained models. There are two major ways to apply visual prompts to a vision model. Let the image be \(\mathbf{x}\); the first adds a vector \(\mathbf{v}\) to the input image: \[\mathbf{x}=\mathbf{x}+\mathbf{v}\] the second appends a vector \(\mathbf{v}\) to the input image: \[\mathbf{x}=[\mathbf{x};\mathbf{v}]\] Most visual prompts are studied in the training setup, yet they are under-explored at inference time. The only work that adapts adversarial samples with self-supervised visual prompts is that of (Mao et al., 2021), which optimizes an additive visual prompt on the image to repair the adversarial perturbation and improve the model robustness. In this paper, we study how those methods perform when adapting the model with the self-supervised task at inference time. Since the self-supervised task is a proxy for the target task, there is a gap between the self-supervised task and the target supervised task, even though we choose the best self-supervised task available. In our empirical experiments, we found that the above methods do not effectively improve robustness at inference time. We conjecture that the adaptation process may overfit to the self-supervised task without restoring enough information for the target classification task. In the ablation study, we show the loss analysis for different visual prompts. ### Adapting via Convolutional Visual Prompts (CVP) Adding structure is an efficient way to avoid overfitting. By providing the right inductive bias to the model, convolutional neural networks achieve huge success in solving vision tasks. Convolution assumes the image is translation equivariant and that the same operation should be applied to each patch of the image. This motivates us to add structure to the visual prompts to prevent overfitting to the wrong pattern during test-time adaptation. We now introduce self-supervised convolutional visual prompts (CVP), which instruct the deep model to adapt to the test distribution through convolution. Our assumption is that the distribution shift in visual data is often structured and translation equivariant, so the same context should be applied to every individual patch of the visual input.
Our prompt is simple and is defined as: \[\mathbf{x}=\mathrm{conv}(\mathbf{x},\mathbf{v}) \tag{2}\] One major advantage of the convolutional prompt is that the number of parameters in the prompt is significantly lower (about 1%) than that of the additive and appended prompts, which are themselves already lightweight compared to adapting the whole model. Our prompt is not only lightweight in storage and online computation, but it also improves robustness due to its inherent structure. In Algorithm 1, we show the detailed algorithm of CVP.
```
Require: Pretrained classifier F(.), OOD images x, self-supervised objective L_s(.),
         convolution operator Conv(.), convolutional kernel k, learning rate eta,
         number of iterations T
Ensure:  Class prediction y_hat for the adapted version of x

# Initialize the kernel parameters
k^0 ~ U(alpha, beta)
# Calculate the initial SSL loss
loss^0 = L_s(x)
for t in {1, ..., T} do
    # Generate adapted samples
    x^t = Conv(x, k^t)
    # Calculate the SSL loss on the adapted samples
    loss^t = L_s(x^t)
    # Update the kernel parameters (gradient descent on the SSL loss)
    k^{t+1} = k^t - eta * d(loss^t)/d(k^t)
end for
# Take the optimized kernel parameters
k* <- k^T
if loss^T > loss^0 then
    # Fall back to the initial kernel parameters
    k* <- k^0
end if
# Compute the final adapted samples and predict
x* = Conv(x, k*)
return y_hat <- F(x*)
```
**Algorithm 1** Convolutional Visual Prompts
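To make the procedure concrete, here is a minimal PyTorch-style sketch of the loop in Algorithm 1. The SSL objective is treated as a black-box callable (e.g. the contrastive loss sketched earlier), and the kernel size, initialisation range, learning rate and the simple residual combination are illustrative assumptions; the paper additionally learns a weighting parameter \(\lambda\) for the residual, which is omitted here.

```python
import torch
import torch.nn.functional as F

def cvp_adapt(x, classifier, ssl_loss, k_size=3, steps=5, lr=0.01):
    """Sketch of test-time adaptation with a convolutional visual prompt.

    x: (N, C, H, W) batch of possibly corrupted images.
    classifier: frozen pretrained model, used only for the final prediction.
    ssl_loss: self-supervised objective evaluated on a batch of images.
    """
    # one small kernel shared depthwise across channels (assumed uniform init)
    k0 = torch.empty(1, 1, k_size, k_size, device=x.device).uniform_(-0.1, 0.1)
    k = k0.clone().requires_grad_(True)
    opt = torch.optim.SGD([k], lr=lr)

    def prompt(x, k):
        w = k.expand(x.shape[1], 1, k_size, k_size).contiguous()
        # residual keeps the original content; the paper weights this with lambda
        return x + F.conv2d(x, w, padding=k_size // 2, groups=x.shape[1])

    loss0 = ssl_loss(x).item()                  # SSL loss before adaptation
    for _ in range(steps):
        opt.zero_grad()
        ssl_loss(prompt(x, k)).backward()       # minimise the SSL loss w.r.t. the kernel
        opt.step()

    with torch.no_grad():
        # keep the adapted kernel only if it actually lowered the SSL loss
        k_star = k if ssl_loss(prompt(x, k)).item() < loss0 else k0
        return classifier(prompt(x, k_star))
```

Note that the number of trainable parameters here is just k_size**2 (e.g. 9 for a 3x3 kernel), compared with C x H x W for an additive prompt, which is the source of the "less than 1%" figure quoted above.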
## 4 Experiment This section details the experimental settings and evaluates the performance of our method, CVP, compared with visual prompts (VP) and existing test-time approaches. More analyses are shown in Section 5, including the effect of the batch size and the number of adaptation iterations, distance measurements of adapted samples, different prompt designs, and Grad-CAM visualizations. ### Experiment Setting **Dataset.** We evaluate our method on five kinds of OOD datasets, including CIFAR-10-C (Hendrycks and Dietterich, 2019), ImageNet-C (Michaelis et al., 2019), ImageNet-R (Hendrycks et al., 2021), ImageNet-Sketch (Wang et al., 2019), and ImageNet-A (Hendrycks et al., 2021). The following describes the details of all datasets. \(\bullet\) **Synthetic OOD Data.** The corruption data are synthesized with different types of transformations (e.g., snow, brightness, contrast) to simulate real-world corruption. This group contains CIFAR-10-C and ImageNet-C, which are corrupted versions of their original datasets, including 15 corruption types (e.g., Gaussian noise, zoom blur, snow, JPEG compression) and 5 severity levels. A larger severity level means more corruption is added to the data. The test set size is 10K for CIFAR-10-C (Hendrycks and Dietterich, 2019) and 5K for ImageNet-C (Hendrycks and Dietterich, 2019) in each corruption type and severity level. To evaluate our method thoroughly, we generate corruption samples with 5 larger severities based on the official GitHub code1 for each of the 15 corruption types and show their parameters in the Appendix. Footnote 1: We generate all types of corruption data based on the github code: [https://github.com/bethgelab/imagecorruptions](https://github.com/bethgelab/imagecorruptions) \(\bullet\) **Natural OOD Data.** The natural OOD datasets contain real-world distribution-shift data. ImageNet-Rendition (Hendrycks et al., 2021) contains 30000 images collected from Flickr with specific renditions (e.g., cartoon, sculpture, embroidery) of 200 ImageNet object classes. The ImageNet-Sketch (Wang et al., 2019) dataset consists of 50000 images, 50 for each of the 1000 ImageNet classes, constructed by searching Google Images with queries of the form "sketch of". ImageNet-Adversarial (Hendrycks et al., 2021) contains 7500 images collected from the natural world that cause a large degradation in the performance of large-scale image classifiers. **Model.** The pretrained backbone architectures are WideResNet18 (Zagoruyko and Komodakis, 2016) and ResNet26 (He et al., 2016) for CIFAR-10-C, and ResNet50 (He et al., 2016) for ImageNet-C, Rendition, and Sketch. We extract the features before the fully connected layer of the backbone model for training the SSL model. The SSL model is a simple MLP whose final layer outputs a one-dimensional feature vector for the contrastive learning task. We further extend our prompt method to the foundation model CLIP (Radford et al., 2021). By leveraging the features from the visual encoder of CLIP, we can also train the SSL model. At inference time, we adapt the samples with CVP so that the representation extracted from the visual encoder can be aligned with the representation of the language encoder. **Baseline Details.** We compare CVP with several test-time adaptation baselines. \(\bullet\) **Standard**: The baseline uses the pre-trained model without adaptation. For CIFAR-10-C, the standard model is trained on the 50000 clean CIFAR-10 training images with WideResNet18 and ResNet26. For ImageNet-C, the standard model is trained on the \(\sim\)1.2M clean ImageNet training images with ResNet50. \(\bullet\) **Finetune (FT)**: We adjust all model weights for every incoming batch at inference time with the self-supervised loss. In our experiments, after fine-tuning on one batch, the model is restored to its initial weights. \(\bullet\) **Partial Finetune (PFT)**: Partial finetuning adapts the model to each batch by adjusting only the batch-normalization layers with the self-supervised loss. As with the finetune baseline, the model is restored to its initial weights after each one-batch adaptation. \(\bullet\) **VP (Mao et al., 2021)**: A prompting method that reverses adversarial attacks by modifying adversarial samples with \(\ell_{p}\)-norm perturbations, where the perturbations are also optimized via the contrastive loss. We extend this method with two different prompt settings: patch and padding. For the patch setup, we directly add a full-size patch of perturbation to the input. For the padding setup, we embed a frame of perturbation around the input. More baseline details are given in Appendix A.1. \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Method/Model} & WideResNet18 & ResNet26 \\ & Avg. Error (\%) & Avg. Error (\%) \\ \hline Standard & 58.24 & 62.12 \\ FT & 69.33 (+11.09) & 63.07 (+0.95) \\ PFT & 62.65 (+4.41) & 62.47 (+0.34) \\ VP (patch) & 57.94 (-0.3) & 62.82 (+0.7) \\ VP (padding) & 58.20 (-0.04) & 62.18 (+0.06) \\ CVP-F3 & 54.25 (-3.99) & 60.56 (-1.56) \\ CVP-F3\({}^{\dagger}\) & 53.67 (-4.57) & 59.52 (-2.6) \\ CVP-R3 & 53.77 (-4.46) & 60.61 (-1.51) \\ CVP-R3\({}^{\dagger}\) & **52.37 (-5.87)** & **59.32 (-2.80)** \\ \hline \hline \end{tabular} \end{table} Table 1: Corruption benchmarks on CIFAR-10-C. We show the average error rate over 15 corruption types and 5 severities for CVP, compared with the baselines. For the CVP results, we denote the two initialization settings, fixed/random, as F / R, the number after F / R is the kernel size (e.g., 3 or 5), and the symbol \({}^{\dagger}\) denotes the kernel-update setting. We set the batch size for adaptation to 16 for all methods.
For CIFAR-10-C, when updating the 3*3 kernel with random initialization, CVP-R3\({}^{\dagger}\) achieves the best robustness, reducing the error rate by 5.87% on WideResNet18 and 2.8% on ResNet26. **Design of Convolutional Visual Prompts (CVP).** We prompt the input samples by applying convolutional kernels. Our kernels can be optimized under several different settings, including 1) fixed or random kernel initialization, 2) 3*3 or 5*5 kernel sizes, and 3) with or without updating the kernel parameters. We show a detailed evaluation of all kernel setups in the experimental results. For initialization, we can either randomly initialize the kernel \(k\) from a uniform distribution or initialize it with fixed values. For the update setting, we set the number of update iterations for the kernel from 1 to 5. To preserve the original structure, we combine the residual of the input and the convolved output with a learnable parameter \(\lambda\). We jointly optimize the convolutional kernel \(k\) and \(\lambda\) with the self-supervised loss \(\mathcal{L}_{s}\). The range of \(\lambda\) is predefined and empirically set to a fixed interval: for CIFAR-10-C we use [0.5, 3], and for ImageNet-C we use [0.5, 1]. We describe the detailed parameter settings in Appendix A.2. ### Experimental Results Table 1 shows the evaluation results for CVP on CIFAR-10-C. We evaluate CVP on two different model architectures and compare it with five different baselines, including standard, VP (patch/padding), finetune (FT), and partial finetune (PFT) with the SSL loss. Besides, we show four different kernel settings of CVP, including fixed/random initialization and with/without kernel update. Compared with the five baselines, for WideResNet18, CVP reduces the average error rate the most, by 5.87%, when updating the 3*3 kernel with random initialization. For ResNet26, CVP consistently reduces the error rate, by 2.8%. As we empirically discover that updating the kernel parameters achieves better performance, in later experiments we only evaluate CVP with the kernel-update setting. Table 2 shows the results on ImageNet-C, Rendition, Sketch, and Adversarial. The standard baseline is a pre-trained ResNet50 (He et al., 2016). For ImageNet-C, we report the mean corruption error (mCE), which is calculated from the corruption errors of ResNet50 normalized by those of a standard AlexNet, following the benchmark (Hendrycks and Dietterich, 2019). Here, since the image resolution of the ImageNet dataset is larger (224), a larger kernel size can effectively retrieve more information from the input. Thus, we set the kernel sizes to 3*3 and 5*5.
Our results show that CVP reduces the most error rate for ImageNet-R \begin{table} \begin{tabular}{l c c c c} \hline \hline & ImageNet-C & ImageNet-R & ImageNet-S & ImageNet-A \\ & mCE \(\downarrow\) & Error (\%) & Error (\%) & Error (\%) \\ \hline ResNet50 & 76.87 & 63.83 & 75.90 & 100.0 \\ FT & 77.10 (+0.26) & 64.38(+0.55) & 76.38 (+0.48) & 99.95 (-0.05) \\ PFT & 76.74 (-0.13) & 69.63 (+5.8) & 80.43 (+4.53) & 99.89 (-0.11) \\ VP (patch) & 76.74 (-0.13) & 68.86 (+5.03) & 75.93 (+0.03) & 99.94 (-0.06) \\ VP (padding) & 80.07 (+3.23) & 63.84 (+0.01) & 75.92 (+0.02) & 99.91 (-0.01) \\ CVP-F3\({}^{\dagger}\) & 75.88 (-0.95) & 63.56 (-0.27) & 75.32 (-0.58) & 99.2 (-0.8) \\ CVP-R3\({}^{\dagger}\) & **75.34 (-1.49)** & 63.49 (-0.34) & **75.30 (-0.60)** & 99.2 (-0.8) \\ CVP-F5\({}^{\dagger}\) & 75.74 (-1.09) & 63.18 (-0.65) & 75.35 (-0.55) & 98.67 (-1.33) \\ CVP-R5\({}^{\dagger}\) & 75.77(-1.06) & **63.06 (-0.77)** & 75.33 (-0.57) & **98.4 (-1.6)** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison on CVP with other baselines on ImageNet-C, ImageNet-R, ImageNet-S, and ImageNet-A. The standard model is ResNet50 (He et al., 2016). For ImageNet-C, we calculate the mean corruption error (mCE). Same as Table 1, we evaluate the CVP under multiple kernel setups and show the results. For ImageNet-C and Sketch, CVP-F3\({}^{\dagger}\) achieves the best robustness, which reduces the error rate by 1.49% and 0.6%. Other datasets, such as ImageNet-R and ImageNet-A, outperform all the baselines when updating 5*5 kernels with random initialization. \begin{table} \begin{tabular}{l c c c c} \hline \hline & **CIFAR-10-C** & **ImageNet-C** & **ImageNet-R** & **ImageNet-S** \\ & Avg. Error (\%) & mCE \(\downarrow\) & Error (\%) & Error (\%) \\ \hline **CLIP(ViT/32)** & 58.39 & 77.93 & 32.09 & 60.55 \\ VP (patch) & 58.43 (+0.04) & 77.81 (-0.12) & 32.12 (+0.03) & 60.53 (-0.02) \\ CVP-F3\({}^{\dagger}\) & 57.94 (-0.45) & 77.43 (-0.50) & 31.16 (-0.93) & 59.47 (-1.08) \\ CVP-R3\({}^{\dagger}\) & 57.91 (-0.48) & 77.71 (-0.22) & 31.31 (-0.78) & 59.62 (-0.93) \\ CVP-F5\({}^{\dagger}\) & 57.98(-0.41) & 77.25 (-0.68) & 31.14 (-0.95) & 59.83 (-0.72) \\ CVP-R5\({}^{\dagger}\) & **57.79 (-0.60)** & **76.67 (-1.26)** & **30.43 (-1.66)** & **59.43 (-1.12)** \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation on the CLIP model. We compare CVP with other prompting baselines on CIFAR-10-C, ImageNet-C, ImageNet-R, and ImageNet-S. Overall, the CVP-R5\({}^{\dagger}\) achieves the best performance, which reduces the error rate by 1.16% on average. and A by 0.77% and 1.6% when updating with 5*5 kernel using random initialization. For ImageNet-C and Sketch, CVP consistently reduces the error rate and outperforms all baselines. We discover VP (patch) and VP (padding) degrade the performance on most of the dataset, which shows that the unstructured perturbation with more training parameters is not effective in adapting the natural OOD data with general distribution shift. In Table 3, we further evaluate the performance of CVP on the large-scale foundation model, CLIP (Radford et al., 2018). Compared with two baselines VP (padding) and VP (patch), when updating the 5*5 kernel under random initialization, CVP achieves the best performance on all datasets. ## 5 Ablation Study ### The Effect of Batch Size for Adaptation In Figure 2(a), we empirically demonstrate that the batch size affects the performance of different prompt methods. Compared with all prompt methods, our CVP has better performance when in small batch sizes. 
We set the batch sizes to 2, 4, 8, 16, and 32 and compare the accuracy of CVP with Finetune, VP (padding), and VP (patch). When the batch size is set to 2, the performance of Finetune is only 36.95%, which is worse than CVP at 34.58%. When increasing the batch size to 4 and 8, the performance of Finetune and VP slightly improves, yet it is still worse than CVP. Overall, CVP has an advantage in adaptation under the small-batch setting. ### CVP Complements Other Test-Time Adaptation Methods Since our method modifies the input space and aligns the representation of adapted samples with the original manifold, it also complements established test-time adaptation approaches which adjust the model weights. Thus, we combine CVP with several existing methods, including MEMO (Zhang et al., 2021), BN (Sagawa et al., 2019), and TENT (Wang et al., 2020). For a fair comparison, we set the batch size to 16 for all experiments. In Table 4, we show the results for CIFAR-10-C, ImageNet-C, R, S, and A. For CIFAR-10-C, CVP improves 1.83 points on top of TENT and reduces the error rate by 21.55% compared with the standard model. For the other datasets, CVP achieves the lowest error rate on top of the TENT method. However, the BN method degrades the performance under the small-batch setting. Due to the page limit, we show TTT (Sun et al., 2019) results in Appendix A.3. ### The Effect of Different Prompt Designs We analyze different prompting methods, including the original visual prompts with different norm bounds (\(\ell_{2}\), \(\ell_{\infty}\)), convolutional prompts, and their combinations (\(\ell_{2}+conv.\), \(\ell_{\infty}+conv.\)). We show the error rate for every prompting method at 0, 1, 5, and 10 adaptation iterations. To compare the results, we fix the other parameters, such as the bound \(\epsilon\), to \(8/255\) for \(\ell_{\infty}\) and 1 for \(\ell_{2}\). As Figure 2(b) shows, the convolutional prompt \(conv.\) and its combination with \(\ell_{2}\) reduce the error rate, and the former reduces it more, from 40.32% to 36.08%, as the number of adaptation iterations increases. However, the other prompting methods increase the error rate after prompting. To understand the risk of over-fitting for the different prompting methods, Figure 2(c) shows the SSL loss curve vs. performance for different prompting methods. ### Measurement of Adapted Samples In addition to reporting accuracy, we quantitatively evaluate CVP using the Sliced Wasserstein Distance (SWD). We measure the distance between two input distributions: the source domain distribution and the target domain distribution (before/after adaptation). Table 5 shows the SWD results on CIFAR-10-C with severity 1. On average, CVP achieves a lower SWD after adaptation, which means the target distribution is closer to the source distribution. The average SWD is reduced by 0.7% after prompting. We further visualize the distance measurement with violin plots and show the detailed SWD for every corruption in Appendix A.4. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & CIFAR-10-C & ImageNet-C & ImageNet-R & ImageNet-S & ImageNet-A \\ & Avg. Error (\%) & mCE \(\downarrow\) & Error (\%) & Error (\%) & Error (\%) \\ \hline Standard & 58.24 & 76.87 & 63.83 & 75.90 & 100.0 \\ Mao et al.
(Mao et al., 2021) & 57.94 (-0.3) & 76.74 (-0.13) & 68.86 (+5.03) & 75.93 (+0.03) & 99.94 (-0.06) \\ CVP (ours) & 52.37 (-5.87) & 75.43 (-1.49) & 63.06 (-0.77) & 75.30 (-0.6) & 98.4 (-1.6) \\ \hline MEMO (Zhang et al., 2021) & 56.14 & 73.45 & 60.73 & 73.43 & 99.1 \\ MEMO + CVP & 54.84 (-1.3) & 72.02 (-1.43) & 60.23 (-0.5) & 72.67 (-0.76) & 98.64 (-0.46) \\ \hline BN (Sagawa et al., 2019) & 38.51 & 76.20 & 67.29 & 77.98 & 99.8 \\ BN + CVP & 37.39 (-1.12) & 76.16 (-0.04) & 67.21 (-0.08) & 77.92 (-0.06) & 98.67 (-1.13) \\ \hline TENT (Wang et al., 2020) & 38.52 & 70.45 & 58.45 & 73.88 & 99.7 \\ TENT + CVP & **36.69 (-1.83)** & **70.34 (-0.11)** & **58.42 (-0.03)** & **73.83 (-0.05)** & **98.54 (-1.16)** \\ \hline \hline \end{tabular} \end{table} Table 4: Our prompt method complements other test-time adaptation approaches that update model weights, including MEMO, TENT, and BN. We show the complementary gain for every baseline when combined with CVP. Here, the _Standard_ model for CIFAR-10-C is WideResNet18 and for the other datasets it is ResNet50. For CIFAR-10-C, on top of the TENT (Wang et al., 2020) method, we achieve the best gain in performance, reducing the error rate by 1.83%. ### Visualization of Saliency Maps To better understand how CVP adapts corrupted inputs, we visualize the saliency maps for different types of corruption. As Figure 4 shows, from left to right, the first row contains the original, corrupted, and adapted samples; the second row shows their corresponding Grad-CAM with respect to the predicted labels. The red region in Grad-CAM highlights where the model focuses on the target input. We empirically find that the heat map defocuses from the target object for corrupted samples. However, after prompting, the red region of the adapted sample's heat map is re-targeted to a similar part as in the original image, demonstrating that the self-supervised visual prompts indeed improve the input adaptation and make the model refocus on the correct areas. We provide more visualizations in Appendix A.6. ## 6 Conclusion The self-supervised convolutional visual prompt (CVP) is a novel method for test-time adaptation of OOD samples. In contrast to prior works that train visual prompts with labels, CVP is label-free and lightweight. It reduces the trainable parameters to less than 1% of the parameters of previous visual prompts and avoids the risk of overfitting when adapting with self-supervised objectives at test time. Results on five state-of-the-art benchmarks show that CVP increases model robustness by up to 5.87% and complements existing weight-adaptation methods. This result suggests that distribution shifts are actually structured; therefore CVP can capture these structures better than VP during adaptation. Figure 4: Visualization. From left to right we show three kinds of corruption, including contrast, fog, and frost, on ImageNet examples. By applying our convolutional prompt to the corrupted images, our method can partially remove the corruptions and make the image easier to recognize. In addition, the saliency map calculated with Grad-CAM shows that our approach instructs the model to look at a similar region as in the original image. This highlights why our convolutional prompt can adapt the input for robustness.
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{SWD (scale: \(10^{2}\)) \(\downarrow\)} & \multicolumn{2}{c}{SSIM \(\uparrow\)} \\ & before & after & before & after \\ \hline Avg. Mean & 7.19 & **6.49** & 0.7539 & **0.7884** \\ Avg. Std & 4.05 & **2.79** & 0.1294 & **0.7260** \\ \hline \hline \end{tabular} \end{table} Table 5: Results of the Sliced Wasserstein Distance and the Structural Similarity Index Measure on CIFAR-10-C (severity 1), before and after adaptation. Figure 3: (a) Analysis of the effect of batch size for different baselines on CIFAR-10-C. We show the average performance at severity 1 for every corruption type. When the batch size is small, CVP has better performance on all corruption types compared to fine-tuning and the two VP methods. (b) Different prompt designs. We show the error rate of the different prompt methods, including \(\ell_{\infty}\), \(\ell_{2}\), \(conv.\), \(conv.+\ell_{\infty}\), and \(conv.+\ell_{2}\). The \(conv.\) prompt achieves the lowest error rate. (c) Analysis of the self-supervised loss vs. performance for VP and CVP on the Gaussian corruption type in CIFAR-10-C. We set the norm bound for VP to 8/255 or unbounded. The loss curve demonstrates that, when increasing the number of iterations from 1 to 30, adapting with VP without a norm bound strongly over-fits the self-supervised loss: the loss of VP decreases markedly, yet the performance does not improve. In contrast, our CVP has a lower risk of over-fitting, as its loss curve decreases smoothly while accuracy gradually increases. Future work includes interpreting convolutional prompts and prompting with multiple modalities in foundation models.
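As an illustration of the distance measure used in Table 5, the following is a minimal numpy sketch of a Monte-Carlo sliced 1-Wasserstein distance; the paper does not specify its exact SWD estimator, so the number of projections, the equal sample sizes and the function name are assumptions made here for clarity.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=256, seed=0):
    """Monte-Carlo sliced 1-Wasserstein distance between two sample sets.

    x, y: (N, D) arrays of flattened samples (same N assumed for both sets).
    """
    rng = np.random.default_rng(seed)
    # random projection directions on the unit sphere
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # project, sort each 1-D projection, and average the coordinate-wise gaps
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    return np.mean(np.abs(px - py))

# toy usage: source samples vs. shifted target samples
src = np.random.randn(512, 64)
tgt = np.random.randn(512, 64) + 0.5
print(sliced_wasserstein(src, tgt))
```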
2305.04070
Probing beyond-$Λ$CDM cosmology with Gravitational Waves
The propagation of Gravitational Waves has been reliably recognised as a test-bed for beyond standard models of gravity and cosmology. We utilise this property to examine the effects of a class of parametrised beyond-$\Lambda$CDM cosmology on inference of GW parameters. We find that the combined beyond-$\Lambda$CDM likelihood function exhibits correlations between the parameters which are especially dependent upon binary eccentricity. Expanding on previous results, we demonstrate through Fisher forecasts that we would need nearly 1 year of 3G GW data to be able to infer the beyond-$\Lambda$CDM model to $2\sigma$ significance. We also find counter-intuitively that errors of source-modelling leave large biases upon the inference of the beyond-$\Lambda$CDM parameters which come into play only during GW propagation.
Kabir Chakravarti
2023-05-06T15:22:55Z
http://arxiv.org/abs/2305.04070v1
# Probing beyond-\(\Lambda\)CDM cosmology with Gravitational Waves ###### Abstract The propagation of Gravitational Waves has been reliably recognised as a test-bed for beyond standard models of gravity and cosmology. We utilise this property to examine the effects of a class of parametrised beyond-\(\Lambda\)CDM cosmology on inference of GW parameters. We find that the combined beyond-\(\Lambda\)CDM likelihood function exhibits correlations between the parameters which are especially dependent upon binary eccentricity. Expanding on previous results, we demonstrate through Fisher forecasts that we would need nearly 1 year of 3G GW data to be able to infer the beyond-\(\Lambda\)CDM model to 2\(\sigma\) significance. We also find counter-intuitively that errors of source-modelling leave large biases upon the inference of the beyond-\(\Lambda\)CDM parameters which come into play only during GW propagation. ## 1 Introduction Cosmology with Cold Dark Matter and Dark Energy, known commonly as the \(\Lambda\)CDM model, is quite robust as a model and is able to explain most observations to date with reasonable success. A few mentionable examples are the temperature and polarisation spectra of the Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations, as observed by the WMAP [1] and Planck [2] surveys. However, as the data have improved, various tensions have also started to appear between data sets and continue to get stronger with newer data. The \(H_{0}\) tension [3] and the \(S_{8}\) tension [4] are noteworthy examples. Such tensions are possibly related to the period of dark-energy domination at low redshifts. Other works have shown that the shortcomings of \(\Lambda\)CDM also involve violations of the cosmological assumptions of isotropy [5, 6, 7] as well as homogeneity [8, 9]. These inadequacies make \(\Lambda\)CDM incomplete at the theoretical level. Motivated by these factors, a host of so-called beyond-\(\Lambda\)CDM theories have been proposed in order to mitigate all or some of the shortcomings of \(\Lambda\)CDM. Of these, the Horndeski [10] class of theories deserves special mention, as it is the most general class of scalar-tensor theories whose equations of motion are second order in time derivatives, and which therefore do not suffer from Ostrogradski instabilities in their solutions. When probes of geometry alone are considered, such as supernova distances and baryon acoustic oscillations, there is not much space for a significant deviation from the \(\Lambda\)CDM background expansion history. Nonetheless, the cosmic tensions imply that a modification to \(\Lambda\)CDM structure formation is necessary. In a seminal work, Bellini and Sawicki [11] demonstrated that the space of scalar perturbations breaks down into four sub-spaces. Modifications to linear scalar perturbations are then described by four effective operators whose strength is determined by the set of parameters \(\{\alpha_{K},\alpha_{B},\alpha_{M},\alpha_{T}\}\). Of these four parameters, two, namely \(\alpha_{T}\) and \(\alpha_{M}\), separately affect the propagation of gravitational waves. It is in this context that sources of GWs become relevant. The observation of GWs from merging Black Holes (BHs) and Neutron Stars (NSs) [12, 13, 14] has ushered in a new era in astronomy and astrophysics. The sensitivity of the current detector network, namely LIGO-Virgo together with the newly joined KAGRA, now allows us to detect events from as far away as a few thousand Mpc.
This implies that as GWs travel the intervening distance, their amplitude and phase can get modulated by the beyond-\(\Lambda\)CDM nature of the spacetime, and this can be expected to show up as deviations during observations. This unique coincidence opens up a few interesting possibilities. To begin with, assuming a beyond-\(\Lambda\)CDM model, one would like to understand the forecasts for its beyond-\(\Lambda\)CDM parameters resulting from a single observation. With a merger-event detection rate of 10 per year on average with the existing LIGO-Virgo-KAGRA network, we can expect the total detection tally to stand at \(\sim 100-200\) in a few years. Additionally, with the newer 3G network of ground-based detectors, namely the Einstein Telescope and Cosmic Explorer, expected to come online around 2030, the event detection rate would go up to tens of thousands a year. In such a scenario, one can also consider the possibility of population-wide inference of the beyond-\(\Lambda\)CDM parameters. The observations of merging NSs in GW170817 have successfully demonstrated the utility of GWs in constraining the \(\alpha_{T}\) subspace. It was reported [15] that the difference in the speeds of the graviton and the photon was constrained to better than 1 part in \(10^{15}\). Constraints on \(\alpha_{T}\) of similar magnitude have also been obtained for very energetic gravitons by considering the so-called 'Gravito-Cherenkov' effect on highly energetic cosmic rays [16]. While being strong constraints, these observations nevertheless leave room for interesting possibilities, which form the basis of works in this field. Previously, there were a few noteworthy studies along these lines, starting with [17, 18], which were among the first to compute the impact on GWs of \(\alpha_{M}\), the so-called running of the Planck mass. Their results were then expanded upon by [19], who considered the effect of the nature of populations of sources upon the inference of \(\alpha_{M}\). These inferences relating to \(\alpha_{M}\) were furthered by [20, 21]. On the other hand, Baker et al., as part of the cosmological working group of LISA [22], have come up with an in-depth analysis for the inference of the \(\alpha_{T}\) subspace. As we have seen, the \(\alpha_{M}\) and \(\alpha_{T}\) subspaces are the only subspaces to be probed by propagating GWs. It is therefore natural to combine the results from the \(\alpha_{M}\) and \(\alpha_{T}\) subspaces into one combined inference, as opposed to standalone \(\alpha_{M}\) or \(\alpha_{T}\) inferences. This is the problem that we tackle in this paper. Specifically, we want to understand and provide an answer to the following questions: 1. What are the forecast errors for a combined \(\alpha_{M}\) and \(\alpha_{T}\) analysis from a single merger event, and what is the degree of covariance between the subspaces? Further, what factors do these covariances depend on? 2. Do effects at the source (source parameters, or source-modelling accuracy) affect the forecasts of the propagation parameters? 3. What size of a population-wide survey can lead to a meaningful inference of \(\alpha_{M}\) and \(\alpha_{T}\), and can it be done with the current detector network? Additionally, will signatures specific to the type of population show up in the inference results?
The remainder of the paper is organised as follows: in Section 2, we briefly review the preliminaries of cosmological propagation and understand the formulation of how \(\alpha_{T}\) and \(\alpha_{M}\) affect the GW amplitude and phase. Then in Section 3, we describe how \(\alpha_{T}\) and \(\alpha_{M}\) explicitly interact with the source via the Post-Newtonian framework. In Section 4, we discuss our results, which are subdivided into three parts: single-event Fisher estimates are carried out in Section 4.1, followed by single-event PN studies in Section 4.2; finally, population studies are presented in Section 4.3. ## 2 Cosmological preliminaries Throughout this work, we assume the \(\Lambda\)CDM background expansion history with dark energy content \(\Omega_{\Lambda}^{0}=0.689\), matter content \(\Omega_{m}^{0}=1-\Omega_{\Lambda}^{0}\) and \(\mathcal{H}_{0}=70\) km/sec/Mpc based on Planck data. Our model is exactly the same as \(\Lambda\)CDM at the background level. As we mentioned before, to constrain such classes of theories, identical to \(\Lambda\)CDM at the background, one turns to perturbations. Our setup is the following: we assume GWs arising from compact binary merger events occurring in the redshift range \(z\leq 0.5\). In the course of travelling the intervening distance, the GWs will pick up the signatures of the beyond-\(\Lambda\)CDM cosmology, namely \(\alpha_{M}\) and \(\alpha_{T}\). The redshift range is chosen so that the events do not fall out of the sensitive volume of current-generation detectors on one hand, while being sufficiently far away to accumulate enough of the non-\(\Lambda\)CDM signatures on the other. As we are interested in beyond-\(\Lambda\)CDM effects in the low-curvature regime, we assume that the mechanics of generation of GWs is completely governed by General Relativity (GR). In the following sections, we focus exclusively on the subspace of interest, \(\alpha_{M}\), \(\alpha_{T}\), and discuss the nature of the changes to GWs brought on by these effects. ### Effect of \(\alpha_{M}\) Physically, \(\alpha_{M}\) represents a variation in the value of the Planck mass with time. A variation in the Planck mass shows up as a friction term in the perturbation equations. Our starting point is the modified GW propagation equation in the presence of a variable Planck mass, which, according to [17], reads \[\ddot{h}+\left[2+\alpha_{M}(t)\right]\mathcal{H}(t)\dot{h}+\omega^{2}h=\Gamma(t), \tag{1}\] with \(\omega\) the frequency of the mode in question. \(\alpha_{M}\) is, in the most general case, a time-dependent function parametrising the modification of the friction. The source term \(\Gamma(t)\) on the right-hand side operates as soon as there is anisotropic stress, or if there exists a dark graviton coupled to the known massless graviton of GR. In this work, we will set \(\Gamma=0\) since we will be solely interested in the modification of the friction term in Eq. (1).
If we realise that Eq. (1) is of the general form \[y''+p(x)y'+q(x)y=0,\] then we may decompose the function \(h\) into its constituents \[h(T)=u(T)\cdot v(T),\ \ \text{with}\ \ u(T)=\exp\left[-\frac{1}{2}\int dT\left(2+\alpha_{m}\right)\mathcal{H}(T)\right],\] and the function \(v(T)\) is then easily found to obey the differential equation \[\ddot{v}(T)-f(T)v(T)=0\] with \[f(T)\equiv\omega^{2}\left[\left(1+\frac{\alpha_{m}}{2}\right)^{2}\left(\frac{\mathcal{H}}{\omega}\right)^{2}+\left(1+\frac{\alpha_{m}}{2}\right)\frac{\partial_{T}\mathcal{H}}{\omega^{2}}-1\right] \tag{2}\] For systems such as astrophysical binaries \(\mathcal{H}\ll\omega\) and \(\partial_{T}\mathcal{H}\ll\omega^{2}\), so that Eq. (2) just boils down to \(f(T)\approx-\omega^{2}=\) constant, and then we must have \[h(T)\propto\exp\left[-\frac{1}{2}\int dT\left(2+\alpha_{m}(T)\right)\mathcal{H}(T)\right]\times\exp[i\omega T]. \tag{3}\] This is a consequence of GWs being generated on deeply sub-horizon length scales. Therefore, within this deep sub-horizon approximation, the contribution of the running Planck mass \(\alpha_{m}\) is completely washed out from the phase of the GWs, and lies only in damping the amplitude. The Planck mass running \(\alpha_{m}\) is a model-dependent quantity, but at low redshifts we can assume \(\alpha_{M}\approx\) constant, \[h(T)\propto\exp\left[-\left(1+\frac{\alpha_{m}}{2}\right)\mathcal{H}T\right]\times e^{i\omega T}\] \[h(z)=\left[1+z\right]^{-\left(1+\frac{\alpha_{m}}{2}\right)}h(z=0) \tag{4}\] We can now see that if we set \(\alpha_{M}=0\), we recover the usual \(1/(1+z)\) redshift damping of \(\Lambda\)CDM cosmology. ### Effect of \(\alpha_{T}\) The presence of tensor modes of perturbation violates equivalence and thereby allows for sub-luminally propagating modes. In the literature, this is known as the tensor excess speed, and is often written as \(\alpha_{T}=c_{T}^{2}-1\). We model the effects based exactly on the outline provided in [22], but nevertheless provide a brief summary for completeness. We start with the parameter \(\Delta\) defined in Eqs. (2.12)-(2.13) of the reference, \[\Delta=1-\frac{(c_{T})_{\text{obs}}}{(c_{T})_{\text{src}}}=1-(c_{T})_{\text{obs}}, \tag{5}\] The source mechanics being Einstein gravity forces gravitons to be always emitted with \(c=1\), which sets \((c_{T})_{\rm src}=1\). Intervals of time at the source and at observation are thus related by \[dt_{\rm obs}=\left(\frac{1+z}{1-\Delta}\right)dt_{\rm src}, \tag{6}\] meaning that the instantaneous frequency evolution at the source and at the observer are then related by Eq. (2.18) of [22], which reads \[\left(\frac{d\omega}{dt}\right)_{\rm src}=\left(\frac{1+z}{1-\Delta}\right)^{2}\left(1-\frac{{\rm d}\log(1-\Delta)}{{\rm d}\log f_{\rm obs}}\right)\left(\frac{d\omega}{dt}\right)_{\rm obs} \tag{7}\] Here the source value represents the true value, while the observed value is modulated by propagation effects. At this point we also note that, following Eq. (5), quantities get either redshifted (decreased) or blueshifted (increased), so that certain combinations of quantities remain invariant under propagation. As we shall see in Section 3, these propagation invariants are crucial in simplifying computations of the GW amplitude and phase. To proceed further, we adopt for \((c_{T})_{\rm obs}\) Eq. (2.24) of [22], the so-called EFT-inspired ansatz. We reproduce the ansatz here for completeness.
\[c_{T}^{2}(f)=\left[1+\left(\frac{f_{*}}{f}\right)^{2}-\left(\frac{f_{*}}{f}\right)^{2}\sqrt{1+2(1-c_{0})\left(\frac{f}{f_{*}}\right)^{2}}\right], \tag{8}\]
where \(f\) is the GW frequency at observation. Fig 1 (left panel) shows the behaviour of \((c_{T})_{\rm obs}\) as a function of frequency, as the parameter \(c_{0}\) is varied while keeping \(f_{*}\) constant. It is clearly seen that \(f_{*}\) is the frequency of transition from sub-luminal to luminal motion of the gravitons at the observer. At face value, the near-coincident arrival of GW170817 and its electromagnetic counterpart places an extremely strong constraint on any such deviation of the tensor speed from unity. However, two counter-arguments rescue us. First, the emission of EM counterparts relative to the GW is somewhat model dependent, and hence any constraint derived is particular to a model of emission. The uncertainty of EM emission times across models partially weakens the otherwise extremely strong constraint. Secondly, and more importantly, it is also possible that the transition frequency \(f_{*}\) is lower than the lowest frequency we could observe in a GW event with ground-based detectors. In this case, we can still have sub-luminal gravitons and we would not even violate any bounds from GW170817-like events. Furthermore, as we mentioned before, the Gravito-Cherenkov constraints only affect very highly energetic gravitons (\(10^{10}\) GeV) and are therefore not applicable to our scenario.

If (7) is applied to the orbital frequency, it is clearly seen that propagation non-trivially changes the phasing. In order to get the phasing at observation, the quantity of interest \([\Phi(t)]_{\rm obs}\) is computed by integrating the observed orbital frequency \([\Omega(t)]_{\rm obs}\). In Fig 1 (right panel) we show the behaviour of the orbital frequency factor \(\left(1-\frac{{\rm d}\log(1-\Delta)}{{\rm d}\log f_{\rm obs}}\right)\), where we use \(c_{T}\) given by (8), as a function of \(t\) for a range of chosen parameters \(c_{0}\) and \(f_{*}=0.1\) Hz, well below the lowest observed frequency bin for GW170817. To conclude the section, we note that \(\alpha_{T}\) also affects the amplitude, in that it changes quantities like the perceived mass and luminosity distance at the point of observation, because of the sub-luminal motion. It only remains to calculate each of the effects, which we take up explicitly in the following section.

## 3 Source modelling and explicit \(\alpha_{M},\alpha_{T}\) effects

It turns out that mergers of compact binaries emerge as possibly the best understood sources of GWs. A large part of this understanding is because of the existence of analytic or semi-analytic solutions spanning nearly the entire lifespan of the binary and also over a wide range of binary parameters. The solutions have historically been made possible by the formulation of the PN framework, which aims to iteratively solve the field equations using the binary orbital velocity as an expansion parameter. The current state of the art sits at 4PN order in the orbital dynamics and the corresponding GW phasing. It is remarkable that for comparable mass binaries, the PN expansion remains nearly consistent up to about \(r\approx 6m\), or the Last Stable Circular Orbit (LSCO). Numerical simulations are needed only past the LSCO to capture the merger.

**Phasing and \(\alpha_{T}\).** The idea behind the binary phasing computation is simple. A binary with an average orbital separation \(a\) continuously loses both energy \(E\) and angular momentum \(L\) due to the emission of GWs. It turns out that both \(E\) and \(L\) can be expanded in terms of the expansion parameter
\[x=(m\Omega)^{2/3}. \tag{9}\]
Here \(\Omega\) is the angular velocity of the binary.
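Before moving on, a small numerical sketch (ours) may help fix the propagation quantities of Section 2.2: it evaluates the ansatz (8), the induced \(\Delta\) of (5) with \((c_{T})_{\rm src}=1\), and the phasing factor \(\left(1-{\rm d}\log(1-\Delta)/{\rm d}\log f_{\rm obs}\right)\) appearing in (7). The values of \(c_{0}\) and \(f_{*}\) are illustrative only.

```python
# Illustrative evaluation of the EFT-inspired ansatz (8), the parameter Delta
# of Eq. (5), and the phasing factor of Eq. (7). Parameter values are ours.
import numpy as np

def cT(f, c0, fstar):
    """Observed tensor speed from the ansatz (8)."""
    r = (fstar / f) ** 2
    return np.sqrt(1.0 + r - r * np.sqrt(1.0 + 2.0 * (1.0 - c0) * (f / fstar) ** 2))

c0, fstar = 0.2, 0.1                     # illustrative values (fstar in Hz)
f = np.geomspace(1e-3, 1e3, 2000)        # observed GW frequency grid
cT_obs = cT(f, c0, fstar)
Delta = 1.0 - cT_obs                     # Eq. (5), with (c_T)_src = 1

# phasing factor (1 - dlog(1-Delta)/dlog f) = (1 - dlog c_T / dlog f)
factor = 1.0 - np.gradient(np.log(1.0 - Delta), np.log(f))
print(cT_obs[0], cT_obs[-1])             # ~ sqrt(c0) at low f, -> 1 above fstar
print(factor.min(), factor.max())
```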
Using Kepler's law at leading order, \(\Omega^{2}a^{3}=m\), we see that \(x=v^{2}\) to leading order, where \(v\) is the orbital velocity of the binary. Consequently, the energy flux \(\mathcal{F}(x,e)\) and angular momentum flux \(\mathcal{G}(x,e)\) enter as
\[\frac{dE}{dt}=-\mathcal{F}(x,e),\qquad\frac{dL}{dt}=-\mathcal{G}(x,e),\qquad m\frac{d\Phi_{\text{orb}}(t)}{dt}=x^{3/2}, \tag{10}\]
where \(e\) is the eccentricity of the binary in question. For circular binaries of comparable mass, the relevant expressions for \(E(x)\) and \(\mathcal{F}(x)\) are computed to 3.5PN order in [23]. [24] computes the expressions for \(E(x)\), \(L(x,e)\) and their corresponding flux losses for the case of elliptic binaries of comparable mass. Finally, [25] forms our basis for the EMRI systems, where the equations are computed to 1PN order in dissipation. One then substitutes the explicit expressions of \(E(x)\), \(L(x,e)\), \(\mathcal{F}(x,e)\), \(\mathcal{G}(x,e)\) so obtained into (10) to obtain the differential equations for the simultaneous evolution of the orbital velocity \(\Omega(t)\) and eccentricity \(e(t)\),
\[\dot{\Omega}(t)=\mathcal{M}^{5/3}\,\Omega^{11/3}\sum_{i}\mathcal{O}_{i}(\eta,e)\,x^{i},\qquad\dot{e}(t)=-\mathcal{M}^{5/3}\,\Omega^{8/3}\,e\sum_{i}\mathcal{E}_{i}(\eta,e)\,x^{i},\qquad\Phi_{\text{orb}}(t)=\int dt\,\Omega(t), \tag{11}\]
where \(\eta\) is the symmetric mass ratio of the binary in question. The explicit expressions for the terms \(\mathcal{O}_{i}(\eta,e)\) and \(\mathcal{E}_{i}(\eta,e)\) have been taken from [24]. Integrating (11) gives us the desired orbital phasing \(\Phi_{\text{orb}}(t)\). The GW phasing is just twice the orbital phasing, so \(\Phi(t)=2\times\Phi_{\text{orb}}(t)\). Considering the energy and angular momentum, and their corresponding fluxes, to different powers of \(x\) (PN orders), we end up with the correspondingly ordered PN solutions of GR. Incorporating the \(\alpha_{T}\) effect now follows by substituting (7) and (6) into (11), so that we finally obtain
\[\dot{\Omega}_{\text{obs}}(t)=\mathcal{M}_{\text{obs}}^{5/3}\,\Omega_{\text{obs}}^{11/3}\,\frac{\sum_{i}\mathcal{O}_{i}(\eta,e)\,x^{i}}{\left(1-\frac{{\rm d}\log(1-\Delta)}{{\rm d}\log f_{\text{obs}}}\right)},\qquad\dot{e}(t)=-\mathcal{M}_{\text{obs}}^{5/3}\,\Omega_{\text{obs}}^{8/3}\,e\sum_{i}\mathcal{E}_{i}(\eta,e)\,x^{i},\qquad\Phi_{\text{orb,obs}}(t)=\int dt\,\Omega_{\text{obs}}(t), \tag{12}\]
where \(\Phi_{\text{obs}}(t)=2\times\Phi_{\text{orb,obs}}(t)\). Several remarks are in order. First, we note from (9) that the variable \(x\) is a propagation invariant. Physically, this just means that distance and time are stretched by the same amount by the expansion, so the velocity \(v\propto\sqrt{x}\) has to be invariant. This is immensely useful at a computational level, as it implies that propagation effects do not have to be incorporated order by order, but rather as a multiplicative factor to the instantaneous angular velocity. Second, we see that \(\alpha_{T}\) indeed explicitly modulates the phase by a multiplicative phasing factor, which is plotted in the right panel of Fig. 1. It is clearly seen that it does tend to have a small, \(\mathcal{O}(1)\), but non-negligible effect. Finally, we see that the modulating factor does not appear in the eccentricity evolution equation explicitly. However, as \(e(t)\) is coupled to \(\Omega(t)\), the observed eccentricity is also dependent on \(\alpha_{T}\).
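As an illustration of how (12) can be integrated in practice, the following sketch (ours) evolves the observer-frame system keeping only the leading-order, circular-limit (Peters-type) coefficients as stand-ins for the full 2PN series of [23, 24], together with an equal-mass assumption for the termination condition. It is meant to show the structure of the computation, not to reproduce our forecasts.

```python
# Sketch: leading-order stand-in for the observer-frame phasing system (12),
# with the alpha_T phasing factor evaluated from the ansatz (8).
import numpy as np
from scipy.integrate import solve_ivp

MSUN = 4.925491e-6              # G*Msun/c^3 in seconds (G = c = 1 units)
Mc   = 20.0 * MSUN              # observed chirp mass (illustrative)
c0, fstar = 0.2, 0.1            # alpha_T parameters (illustrative)

def cT(f):                      # ansatz (8), repeated here for completeness
    r = (fstar / f) ** 2
    return np.sqrt(1.0 + r - r * np.sqrt(1.0 + 2.0 * (1.0 - c0) * (f / fstar) ** 2))

def phasing_factor(Omega):
    """(1 - dlog(1-Delta)/dlog f_obs) evaluated at f_GW = Omega/pi."""
    f, dlnf = Omega / np.pi, 1e-4
    return 1.0 - (np.log(cT(f * (1 + dlnf))) - np.log(cT(f * (1 - dlnf)))) / (2 * dlnf)

def rhs(t, y):
    Omega, e, Phi = y
    dOmega = (96.0 / 5.0) * Mc**(5.0 / 3.0) * Omega**(11.0 / 3.0) / phasing_factor(Omega)
    de     = -(304.0 / 15.0) * Mc**(5.0 / 3.0) * Omega**(8.0 / 3.0) * e
    return [dOmega, de, Omega]          # dPhi_orb/dt = Omega

Mtot = Mc * 2.0 ** (6.0 / 5.0)          # equal-mass assumption, eta = 1/4
def reach_lsco(t, y):                   # terminate at M*Omega = 6^{-3/2}
    return y[0] - 6.0 ** (-1.5) / Mtot
reach_lsco.terminal = True

y0 = [np.pi * 20.0, 0.25, 0.0]          # f_GW = 20 Hz, e0 = 0.25, Phi_orb = 0
sol = solve_ivp(rhs, (0.0, 1e4), y0, events=reach_lsco, rtol=1e-9, atol=1e-12)
print("inspiral time [s]:", sol.t[-1], "; GW cycles:", sol.y[2][-1] / np.pi)
```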
**Amplitude and \(\alpha_{T},\alpha_{M}\).** To compute the amplitude, we note that it is only the quadrupolar contribution that survives asymptotically. It is computed by calculating the double time derivative of the source quadrupole moments \(M_{ij}\) (see Eq. 4.65 of [26] for a detailed expression). The corresponding GW polarisations for an optimally oriented binary in GR thus become
\[h_{+}(t)=-2\left(\frac{\mu}{d}\right)[m\Omega(t)]^{2/3}\left(\frac{2\cos\left(2\Phi(t)\right)+e(t)\cos\Phi(t)\left(1+2\cos^{2}\Phi(t)\right)+e(t)^{2}}{1-e(t)^{2}}\right),\]
\[h_{\times}(t)=-2\left(\frac{\mu}{d}\right)[m\Omega(t)]^{2/3}\,2\left(\frac{\sin\left(2\Phi(t)\right)+e(t)\sin\Phi(t)\left(1+2\cos^{2}\Phi(t)\right)}{1-e(t)^{2}}\right), \tag{13}\]
where \(m,\mu\) are the source total and reduced mass respectively and \(d\) is the physical distance to the binary. Under cosmological expansion in \(\Lambda\)CDM, we have the observed reduced mass \(\mu_{\text{obs}}=\mu/(1+z)\), where \(z\) is the redshift. Equivalently, for a given comoving distance \(d_{C}=d\), the proper distance scales as \(d_{B}=d_{C}/(1+z)=d/(1+z)\). Finally, we also need to take into account that (13) is expressed in source-frame time, and thus in the observer's frame we pick up an additional factor of \(1/(1+z)\). Putting all factors together, we see that the combination
\[\left(\frac{\mu}{d}\right)\rightarrow\left(\frac{\mu}{d}\right)\left(\frac{1}{1+z}\right)\]
in (13). Indeed, this is just the familiar amplitude damping factor. As we saw in (4), including \(\alpha_{M}\) is achieved by multiplying by a factor of \((1+z)^{-\alpha_{M}/2}\). The effects of \(\alpha_{T}\) can now be included. As evident from Eqs. 3.7 and 3.1 of [22], \(\mu\) picks up a factor of \(c_{T}\), while \(d\) contributes \(1/\sqrt{c_{T}}\). So at last, the expression for the GW at the detector, including both \(\alpha_{M}\) and \(\alpha_{T}\), becomes
\[h_{+}(t)=-2\left(\frac{\mu}{d}\right)\frac{c_{T}^{3/2}\!\left[2\Omega_{\text{obs}}(t)\right]}{(1+z)^{1+\alpha_{M}/2}}\,[m\Omega(t)]^{2/3}\left(\frac{2\cos\left(2\Phi(t)\right)+e(t)\cos\Phi(t)\left(1+2\cos^{2}\Phi(t)\right)+e(t)^{2}}{1-e(t)^{2}}\right),\]
\[h_{\times}(t)=-2\left(\frac{\mu}{d}\right)\frac{c_{T}^{3/2}\!\left[2\Omega_{\text{obs}}(t)\right]}{(1+z)^{1+\alpha_{M}/2}}\,[m\Omega(t)]^{2/3}\,2\left(\frac{\sin\left(2\Phi(t)\right)+e(t)\sin\Phi(t)\left(1+2\cos^{2}\Phi(t)\right)}{1-e(t)^{2}}\right), \tag{14}\]
where \(c_{T}\) is evaluated at twice the observed orbital frequency.
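A short sketch (ours) of how the modified polarisations can be assembled from (13) and the propagation rescalings just described: the function below simply applies the \(c_{T}^{3/2}\) and \((1+z)^{-(1+\alpha_{M}/2)}\) factors to the GR expressions, with the input arrays \(\Omega\), \(e\), \(\Phi\) assumed to come from a phasing integration such as the one above.

```python
# Sketch: quadrupolar polarisations of Eq. (13) with the propagation rescalings
# discussed in the text applied; all inputs and parameter values are ours.
import numpy as np

def polarisations(Omega, e, Phi, m, mu, d, z, alpha_M, cT_at_2Omega):
    amp = -2.0 * (mu / d) * (m * Omega) ** (2.0 / 3.0)
    amp = amp * cT_at_2Omega ** 1.5 / (1.0 + z) ** (1.0 + 0.5 * alpha_M)
    hp = amp * (2.0 * np.cos(2 * Phi) + e * np.cos(Phi) * (1 + 2 * np.cos(Phi) ** 2)
                + e ** 2) / (1.0 - e ** 2)
    hx = amp * 2.0 * (np.sin(2 * Phi)
                      + e * np.sin(Phi) * (1 + 2 * np.cos(Phi) ** 2)) / (1.0 - e ** 2)
    return hp, hx
```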
## 4 Results

We have performed three separate but related tasks, with GWs from binary inspirals over a wide variation of source characteristics. First, we made use of (14) to calculate Fisher forecasts for single events. Here we have computed \(\alpha_{T},\alpha_{M}\) forecasts from both circular and elliptic configurations of comparable mass binaries as well as EMRIs, as opposed to [22], who make use of the inspiral of circular EMRI systems only. For our systems of comparable mass binaries, we have considered the phase evolution equations to 2PN order beyond quadrupolar, following [23, 24] for both the circular and elliptic cases. Second, we have made use of the PN formalism to reduce the accuracy order-by-order in (14), and study the corresponding effect upon the \(\alpha_{T},\alpha_{M}\) forecasts. Finally, we have also considered an exercise of population-wide inference of \(\alpha_{T},\alpha_{M}\) in order to get an idea of the volume of data necessary to adequately constrain the subspace. We primarily premise our work upon the inference obtained from ground-based detectors, namely the LVK network and the upcoming 3G detector network. As EMRIs are not relevant to ground-based detectors, they are not our main focus and we have only included 1PN beyond-quadrupolar effects for them, following [25]. We have also limited our population-wide studies to comparable mass inspirals only, because current characterisations of EMRI populations turn out to be heavily dependent on numerical N-body modelling of galactic nuclei environments [27]. In all our studies, the analysis considers non-spinning binaries only.

It is evident from its formulation that the gravitational waveform \(h(\vec{\theta})\) is a multivariate function, where \(\vec{\theta}=\{M,\eta,\alpha_{M},c_{0},f_{*}\}\). The functional dependencies separate out into dependencies at the source in \(M,\eta\) and dependencies during propagation in \(\alpha_{M}\), \(c_{0}\) and \(f_{*}\). We ultimately want to run a simultaneous Bayesian inference upon both the source and propagation parameters. However, as Bayesian MCMC is computationally expensive, one normally performs a computationally cheap Fisher error estimate. The Fisher estimates are obtained by sampling near the peak of the likelihood function. As is well known [28], this means that the Fisher results are a good approximation only in the Linear Signal Approximation (LSA),
\[h(\vec{\theta})=h(\vec{\theta}_{0})+\partial_{i}h\,\Delta\theta^{i}. \tag{15}\]
Now with (15), it is evident that
\[p(d|\vec{\theta})\propto\exp\left[-\frac{1}{2}\langle\partial_{i}h|\partial_{j}h\rangle\,\Delta\theta^{i}\Delta\theta^{j}\right]. \tag{16}\]
Evaluating the covariance from this distribution, one can see that it is proportional to the inverse of the Fisher matrix, \(\langle\partial_{i}h|\partial_{j}h\rangle^{-1}\). However, for real signals away from the LSA, the inverse of the Fisher matrix can only be regarded as a lower bound upon the error covariance matrix. This is also well known as the Cramer-Rao bound.

### 4.1 Single event estimates

Table 1 shows the results of Fisher forecasts for a single GW event. To be consistent across systems, we have considered the results at a signal-to-noise ratio (S/N) of 10. Having EMRI and comparable mass results side by side helps us to study the comparative strengths of either system. We immediately notice that, despite being truncated at one PN order lower, the errors of the forecasts for EMRI systems for all the parameters except \(\alpha_{M}\) are much smaller than those for the systems of comparable mass. This improvement is because of the improved accuracy arising from the relatively longer inspiral timescales in the EMRI system. The improvement is particularly pronounced for the estimates of \(c_{0},f_{*}\), where the errors associated with comparable mass systems are \(\sim 10\%\) but for EMRIs are \(\sim 0.001\%\), i.e. the errors decrease by roughly four orders of magnitude. In addition to having a longer inspiral, forecasts for an injected \(f_{*}=0.002\) Hz are further improved as the transition from sub-luminal to luminal motion for gravitons occurs at a frequency which is within the sensitivity band of eLISA-like detectors. In contrast, we see that the estimation of \(\alpha_{M}\) is hardly affected by the system in question. Furthermore, the error in \(\alpha_{M}\) over every kind of system is \(\sim 300-400\%\), which is quite high compared to the other parameters. The inaccuracy itself should not be surprising, because \(\alpha_{M}\) does not appear in the phase and also because it is degenerate with the redshift of the system.
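The Fisher machinery of (15)-(16) is simple enough to prototype; the toy example below (ours) uses a stand-in chirp model, finite-difference derivatives and a white-noise inner product, so the numbers it produces are purely illustrative and are not the forecasts of Table 1.

```python
# Sketch of Eqs. (15)-(16): Fisher matrix from numerical waveform derivatives
# and a white-noise inner product; the model is a toy chirp, not Eq. (14).
import numpy as np

t = np.linspace(0.0, 4.0, 4096)

def model(theta):
    A, f0, fdot = theta                 # toy parameters (illustrative)
    return A * np.cos(2 * np.pi * (f0 * t + 0.5 * fdot * t ** 2))

def inner(a, b, sigma=1.0):
    return np.sum(a * b) / sigma ** 2   # white-noise approximation

def fisher(theta0, rel_step=1e-6):
    n, dh = len(theta0), []
    for i in range(n):
        dp, dm = np.array(theta0, float), np.array(theta0, float)
        eps = rel_step * max(abs(theta0[i]), 1.0)
        dp[i] += eps; dm[i] -= eps
        dh.append((model(dp) - model(dm)) / (2 * eps))
    return np.array([[inner(dh[i], dh[j]) for j in range(n)] for i in range(n)])

theta0 = [1.0, 30.0, 2.0]
cov = np.linalg.inv(fisher(theta0))     # Cramer-Rao lower bound on the covariance
print(np.sqrt(np.diag(cov)))            # 1-sigma forecasts
```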
Additionally, both EMRI and comparable mass systems have the same functional dependences for their asymptotic amplitudes, which would explain why the estimation of \(\alpha_{M}\) does not change between EMRI and comparable mass systems.

Fig 2 plots the Fisher ellipses of the comparable mass binary systems, for zero and non-zero eccentricities. We note that the \(e=0\) case shows mild covariances in the \((\alpha_{M},f_{*})\) rows, as is evident from the horizontal nature of the ellipses. We see that for the \(e=0.25\) case, the corresponding ellipses get tilted. This means that the likelihood function has non-trivial eccentricity dependences, which show up in the Fisher plots as eccentricity-dependent covariances. It is thus clear that eccentric binaries will behave as nuisances by introducing unnecessary bias in case our inference model is specific to circular inspirals. Indeed, this can be thought of as a strong motivation for the accurate modelling of eccentric systems, for both EMRIs and comparable mass binaries.

[Table 1: single-event Fisher forecast results. The columns list the source parameters (chirp mass \(\mathcal{M}\), reduced mass \(\mu\)), the cosmological parameters (\(\alpha_{M}\), \(c_{0}\), \(f_{*}\)), the noise settings (PSD, lower and upper frequency cut-offs), and the resulting errors \(\delta(\ln\mathcal{M})\), \(\delta(\ln\mu)\), \(\delta\alpha_{M}/\alpha_{M}\), etc. The numerical entries of the table could not be recovered from the source.]

Figure 2: Single event forecast results for comparable mass binary systems with \(e=0\) (top) and \(e=0.25\) (bottom). Blue/red plots represent the simultaneous forecast of \(\mathcal{M},\mu,\alpha_{M}\), while green/pink include \(\alpha_{T}\) effects as well. In addition to affecting the variances marginally, the eccentricity does seem to affect the covariances.

### 4.2 Changing the PN accuracy

In Section 4.1, we demonstrated the forecasts of a single event for different binary configurations. So we might ask ourselves: how important is it really for us to model the source? In other words, if we were to decrease the PN accuracy order by order, would it have any effect upon the estimates? Before proceeding, it is worthwhile to pause and describe the nature of the PN terms order by order. As is evident from Section 3, (14) is obtained by considering the expressions of \(\mathcal{F}(x,e)\) and \(\mathcal{G}(x,e)\) up to two powers of \(x\) beyond the leading order. In order to do this, we have had to account for the contribution of the leading-order hereditary (or tail) term appearing at order \(x^{3/2}\) for both \(\mathcal{F}(x,e)\) and \(\mathcal{G}(x,e)\), which requires a careful calculation of the so-called eccentricity tail enhancement functions \(\phi(e)\) and \(\hat{\phi}(e)\) respectively. Analytically, these functions are infinite series of Bessel functions with the eccentricity \(e\) as their argument. We have calculated these functions by fitting to their numerical values as presented in Appendix B of [24]. Fig 3 shows the result of the fitting. The tiny residuals compared to the function values indicate a very good fit, with the maximum error being of order 0.1%.
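A minimal sketch (ours) of this fitting step is given below; the tabulated values are arbitrary placeholders standing in for the numerical values of Appendix B of [24].

```python
# Sketch: fit a low-order polynomial in e to tabulated enhancement-function
# values and inspect the relative residuals. The table below is a placeholder,
# NOT the data of [24].
import numpy as np

e_tab   = np.linspace(0.0, 0.6, 13)                  # placeholder grid
phi_tab = 1.0 + 7.0 * e_tab**2 + 15.0 * e_tab**4     # placeholder shape

coeffs = np.polyfit(e_tab, phi_tab, deg=6)
fit    = np.polyval(coeffs, e_tab)
rel_residual = np.abs(fit - phi_tab) / phi_tab
print("max relative residual:", rel_residual.max())
```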
Let us now consider the case of comparable mass binaries. The results in Table 1 have been obtained using the highest (2PN) order considered. However, what happens if we reduce the highest order? To do this, we note that in addition to accounting for the expressions of \(E,L,\mathcal{F},\mathcal{G}\), we also need to account for the order-by-order correction to Kepler's third law. In our work, we have used the PN-corrected Kepler's third law to calculate the corresponding separation \(a\) for a given orbital frequency \(\Omega\) obtained by integrating (12). We terminate our evolution when the separation equals the Last Stable Circular Orbit (LSCO), i.e. \(a\leq r_{\rm LSCO}\). As is well understood, crossing the LSCO initiates a radial infall or plunge, and can be considered as a timestamp when the inspiral process terminates and the merger begins. With this framework in place, we reduce the PN accuracy both in the analytic expressions and in Kepler's law. Fig 4 shows the results of performing such an operation.

Before discussing the results, we should note the factors that affect the estimation accuracy when the PN order is varied. It is easy to see that the binary phasing is directly affected as terms are added to or subtracted from the expressions of \(E\) or \(L\) and their corresponding fluxes. Since the PN order also enters Kepler's law, it also affects the upper cut-off or termination frequency of the inspiral, thereby changing the length of the inspiral phase order by order, provided all orders start from the same starting frequency. As it involves a combination of factors, the estimation accuracy should not be expected to exhibit monotonic behaviour with changing PN order.

We can now focus on the results of the operation, as depicted in Fig 4. The figure on the left computes the order-by-order estimate for the \(\mathcal{M}=10,\mu=5\) system, while the one on the right does the same for the \(\mathcal{M}=20,\mu=11.3\) system. The upper panels represent circular configurations, while the lower ones represent the cases with initial eccentricity \(e_{0}=0.25\). The PN-ordered accuracy computations reveal several interesting facts. To begin with, we note that the Fisher forecasts (or errors) for \(\alpha_{T}=(c_{0},f_{*})\) are more sensitive to PN corrections than the errors on \(\alpha_{M}\). This is hardly surprising: PN terms affect the phase, not the amplitude. The amplitude is unaffected by higher powers of \(x\) because higher-order moments do not survive asymptotically. This also explains why the \(\alpha_{M}\) errors are unaffected even by changes in the PN order. Next, we note that for both circular and eccentric binaries the PN-ordered \(\alpha_{T}\) errors are dependent on the configuration. Since the configuration directly affects the binary phasing, this is to be expected. Most importantly, we find that the estimates of \(\alpha_{T}\) do vary non-trivially across the PN order, and that the estimates are affected for both circular and eccentric binaries. While the \(\alpha_{T}=(c_{0},f_{*})\) estimates are affected in general, the \(c_{0}\) estimates are seen to be particularly strongly affected when the PN order is varied. This would possibly suggest that the change in PN order is capable of mimicking the kind of dephasing introduced by \(\alpha_{T}\) through \(c_{0}\), as shown in Fig 1. Furthermore, in all the cases, it can be observed that the estimates for \(\alpha_{T}\) fluctuate at 1.5PN order.
As the tail terms are known to enter at 1.5PN order, this means that such terms play a large role in the overall error budget. We chose comparable mass systems because the relevant terms are known to higher PN order. Although performed for comparable mass binary systems, there are important lessons for the case of the EMRI systems as well. EMRI systems have much longer inspirals, so the relative effect of adding or subtracting terms order by order is expected to be greater. Second, this calculation indirectly highlights the absolute importance of accurately modelling the tail and tail-of-tail eccentricity terms for PN systems. This is especially true because tiny discrepancies at the beginning will add up over the much longer EMRI inspiral timescales, and have the potential to give rise to huge errors in the measurement of \(\alpha_{T}\).

Figure 3: Fitting for the eccentricity enhancement functions \(\phi(e)\) and \(\hat{\phi}(e)\) for the tail contributions. The residuals are \(\leq 0.3\%\) for \(\phi(e)\) and \(\leq 0.2\%\) for \(\hat{\phi}(e)\).

We are thus led to infer that the modelling of the source (considered here through PN-ordered effects) does contribute non-negligibly towards the errors on the cosmological propagation parameters if these appear in the phasing as seen by the observer. This exercise also demonstrates the importance of the leading tail contribution for the case of comparable mass systems. Specifically, large and non-trivial errors can be expected in the case of binaries where tail effects were not taken into consideration.

### 4.3 Population studies

The results obtained in Section 4.1 demonstrate that although inferences from single merger events are quite a powerful tool to infer the dynamical properties of the binary, they are not nearly enough for the inference of the cosmological propagation parameters, namely \(\alpha_{T}=(c_{0},f_{*})\) and \(\alpha_{M}\), whose errors are \(\mathcal{O}(100\%)\) and above. This indicates that meaningful inferences can only be performed when we coherently combine information from a population of merger events, in a process known as coherent power-stacking. This is similar to Poissonian statistics and an error reduction by a factor of \(1/\sqrt{N}\), where \(N\) is the number of observations. Accordingly, we have performed a Monte-Carlo simulation of comparable mass BBH merger events in an effort to make error estimates based on population-wide inferences. However, in our case, it should be remembered that the error-reduction rate is not nearly as strong as \(1/\sqrt{N}\), because all our events will not originate from the same distance. We skip the analogous exercise for EMRI systems.

#### 4.3.1 Choice of populations

We have assumed events to be randomly and uniformly distributed per unit comoving volume element. For low redshifts \(z\lesssim 0.5\), the rate of generation of BBH systems does not depend on the redshift \(z\), and hence the above is a valid approximation. We have also assumed events up to a maximum redshift of \(z=0.5\), beyond which it is expected that the sensitivities of current ground-based detectors like aLIGO would decrease significantly. We consider two separate populations of 50000 and 100000 events respectively. Additionally, for each of the mentioned cases, we assumed two kinds of mass distributions in the populations. In the first case, we have assumed that the components of the binary are derived from a seed uniform distribution between \(10M_{\odot}\leq m\leq 50M_{\odot}\).
The limits of the range are inspired by early models of supernova remnants. In the second case, we consider the other limit and assume that the component BHs are seeded from a relatively narrow Gaussian distribution centred around \(50M_{\odot}\) with a standard deviation of \(\sigma=5M_{\odot}\). Additionally, we have assumed for every case that the binaries forming the population are uniformly distributed in their inclination \((0,\pi)\) and angle of polarisation \((0,2\pi)\). To set meaningful bounds on the initial eccentricity \(e_{0}\), we first observe that, as progenitors of BBH systems, binary main-sequence stars at a given separation must have an upper limit on their eccentricity to avoid collision, since they are extended bodies. Consequently, the BBH system is expected to inherit an upper limit from its progenitor binary main-sequence star configuration. The details of this upper limit turn out to be heavily model-dependent and are not central to our results. We have thus assumed an initial eccentricity \(0\leq e_{0}\leq 0.5\). The choice of the upper limit on the eccentricity is ad hoc. It should also be remarked here that this eccentricity is assigned when the orbital frequency is \(M\Omega\sim 10^{-4}\), and it is indeed the case that all these binaries become almost circular by the time they are visible to ground-based detectors. Fig 5 shows the members of the population that clear an SNR threshold of 20 for all cases. The black horizontal and vertical lines indicate a specific \(\mathcal{M}=20.0,\eta=0.245\) bin, which is our bin of interest. Consequently, we combine information from all events that fall within this bin.

Figure 4: Errors in the Fisher forecasts of \(\alpha_{M},c_{0},f_{*}\) as a function of the PN order at source, for two instances of comparable mass binary systems. Large variations are seen in the error estimates, notably for \(c_{0}\), across the systems.

Figure 5: Configuration of masses of the populations along with redshift (shown as colour code) that have an SNR \(\geq 20\) among 50000 events, for the uniform mass distribution (upper panels) and the Gaussian mass distribution (lower panels). The figure on the right shows exactly the same, but for 100000 events.

#### 4.3.2 Population-wide inference

We are now in a position to analyse the results of our population-wide inferences. Fig 6 shows the results of the inference studies for \(\alpha_{T},\alpha_{M}\) for the binary populations seeded from uniform (upper sub-panels) and Gaussian (lower sub-panels) distributions of component mass. As stated before, the mass bin of our choice is \(\mathcal{M}=20.0,\eta=0.245\). The upper panel is the combined inference from 50000 events, while the lower one has 100000 events. We are drawn to make some important conclusions. First, we note that the errors in \(\alpha_{T}\), which were \(\sim 80-90\%\) for \(c_{0}\) and \(\sim 10-15\%\) for \(f_{*}\) for a single event, reduce to \(\sim 10-20\%\) and \(\sim 5\%\) respectively for the 50000-strong population survey. For the 100000-strong population, we find that the same errors on \(\alpha_{T}\) get constrained to less than 5% for \(c_{0}\) and less than 1.3% for \(f_{*}\). It is thus clear that EMRIs are the cleanest probes of the \(\alpha_{T}\) subspace. However, until LISA comes online EMRIs are not possible to observe, and population-wide inferences of \(\alpha_{T}\) turn out to be useful tools in constraining them.
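A schematic of how such a population-level constraint is assembled is sketched below (ours): masses and redshifts are drawn roughly as described in Section 4.3.1, an SNR cut is applied, and per-event errors are combined by inverse-variance weighting. The per-event error model \(\sigma_{i}\propto 1/{\rm SNR}_{i}\) and the SNR scaling are crude stand-ins for the full Fisher analysis and are used only to illustrate the stacking.

```python
# Sketch: toy population (uniform in comoving volume approximated by p(z) ~ z^2
# for z <= 0.5, components uniform in [10, 50] Msun), SNR cut, and
# inverse-variance combination of per-event errors. Not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
z  = 0.5 * rng.random(N) ** (1.0 / 3.0)        # p(z) proportional to z^2
m1 = rng.uniform(10.0, 50.0, N)
m2 = rng.uniform(10.0, 50.0, N)
Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2       # chirp mass [Msun]

# crude SNR scaling: SNR ~ Mc^{5/6} / distance, with distance ~ z at low z
snr = 60.0 * Mc ** (5.0 / 6.0) / Mc.mean() ** (5.0 / 6.0) * (0.1 / z)
keep = snr >= 20.0

sigma_ref = 10.0                               # toy per-event error at SNR = 20
sigma_i = sigma_ref * 20.0 / snr[keep]
sigma_combined = 1.0 / np.sqrt(np.sum(1.0 / sigma_i ** 2))
print(keep.sum(), "events retained; combined error:", sigma_combined)
```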
We further note that in order to achieve constraints on \(\alpha_{T}\) at the level of a few percent, we realistically need a population of 100000 events, which translates to a total of \(\sim 4000\) detections. With a detection rate of \(\sim 10\) per year, we can see that the current generation of ground-based GW detectors will most likely not be able to resolve these effects. However, the 3G network of ground-based GW detectors scheduled for the 2030s is expected to detect tens of thousands of events per year, and would thus be able to resolve \(\alpha_{T}\) to a few percent with \(\sim 1\) year of data. We thus demonstrate that populations of comparable mass inspirals can produce meaningful constraints with just 1 year of 3G data and can potentially narrow the \(\alpha_{T}\) parameter space for EMRI inference. Single-event EMRIs, like their comparable mass inspiral counterparts, will nevertheless fail to constrain the \(\alpha_{M}\) subspace. As the formation channels of supermassive BHs are not clearly understood, we are handicapped in modelling realistic populations of EMRIs. In this case, therefore, the population-wide inference of comparable mass binaries happens to be our only option. As the single-event inferences produce error margins which are \(\sim 300-400\%\), it is much harder to constrain \(\alpha_{M}\). We show the results of the population-wide \(\alpha_{M}\) inference in the third column of Fig 6. We compute that with the 100000-strong population (or \(\sim 4000\) detections) \(\alpha_{M}\) is constrained to an accuracy of nearly 25%.

Estimates from population-wide inferences are expected to be dependent upon the nature of the populations themselves. But how big is the dependence, and at what threshold does it begin to appear? The first question was exactly the premise of the investigations done in [21]. We want to see if these questions show up in our results as well. We find from Fig 6 that, interestingly, the \(\alpha_{M}\) inferences are comparatively less sensitive to differences in population than the \(\alpha_{T}\) subspace. This observation can once again be explained by considering that variations in component mass over the populations have much less effect on the amplitude of the GWs than on their phase. Focusing on the \(\alpha_{T}=(c_{0},f_{*})\) subspace, our results also show that differences across seed populations show up more starkly (particularly in \(c_{0}\)) with 100000 events (4000 detections). Hence we expect this number of detections to be the threshold at which such effects of differences in population show up. The narrow posteriors of \(c_{0}\) for both sets of populations lead us to conclude that, as a parameter, it will be statistically distinguishable from the baseline model, which in our case is just \(\Lambda\)CDM. We also note that for \(\Lambda\)CDM \(f_{*}\) is undefined, as will be the case if \(c_{0}=1\) in (8). However, the same cannot be directly said for the parameter \(\alpha_{M}\), because of the presence of tens of percent of error even with the 100000-strong population. To quantify this uncertainty, we have run an equivalent inference on \(c_{0}\) and \(\alpha_{M}\) for the baseline model \(\Lambda\)CDM; the corresponding figure shows the results. As expected, for both sets of populations \(c_{0}\) turns out to be statistically distinguishable, given our choice of \(c_{0}=0.2\).
With \(\alpha_{M}\), we see that for the 100000-strong population, for both the Gaussian and uniform distributions, the posterior for \(\alpha_{M}=0.7\) intersects the corresponding one for \(\alpha_{M}=0\) just over \(2\sigma\). We are thus led to believe that a \(2\sigma\)-significant detection over \(\Lambda\)CDM is probable for \(\alpha_{M}\) with \(\sim 4000\) detections.

Figure 6: Population-wide inference results for \(c_{0},f_{*}\) and \(\alpha_{M}\), with the errors \(\sigma_{c_{0}},\sigma_{f_{*}}\) and \(\sigma_{\alpha_{M}}\) respectively, for the choice of \(\mathcal{M}=20\) and \(\eta=0.24\), with 50000 events (top panels) and 100000 events (bottom panels). The thin lines represent the errors of each individual event that crosses an SNR threshold of 20, while the thick lines represent the combined errors. The results are shown assuming the seed population of the BBH components comes from a uniform (top sub-panels) or a Gaussian (bottom sub-panels) distribution.

## 5 Conclusions

GWs offer a reliable window into understanding whether beyond-\(\Lambda\)CDM models of gravity can be meaningfully inferred. From this work, we are able to arrive at several independent and important conclusions. We have performed inferences of the beyond-\(\Lambda\)CDM parameters \(\alpha_{T},\alpha_{M}\) with single events as well as with population-wide surveys. Our Fisher estimates for single events show eccentricity-dependent correlations between the parameters, which demonstrates how unmodelled eccentricity can silently bias inference studies. We also find that EMRIs, by virtue of their long inspiral times, have the best chance of inferring the \(\alpha_{T}\) subspace from just a single event. However, owing to the presence of the seismic cut-off, this task cannot be performed by ground-based detectors. Space-based missions like eLISA can be our answer here. Furthermore, we show that even EMRI systems cannot resolve \(\alpha_{M}\) with a single event. In order to get around this problem, we successfully demonstrate the power of population-wide inferences of comparable mass binary merger systems as a tool to constrain the otherwise poorly constrained \(\alpha_{M}\) to \(\sim 25\%\). With such a constraint, we can infer \(\alpha_{M}\) over its baseline value of 0 at the \(2\sigma\approx 95\%\) confidence level. It turns out that this is the best we can do with \(\alpha_{M}\). However, given the current state of the art of GW detectors, we cannot achieve this target in a reasonable amount of time, as the most accurate inference will take \(\sim 4000\) detections on average. Fortunately, the number of detections necessary is right in the ballpark of the upcoming 3G detector network, thanks to its enhanced detection rates. With 3G detectors, \(\sim 4000\) detections would take around 1 year, which would be the time-frame necessary to achieve the accuracy we calculate. For a single event, we have also considered the effects of modelling inaccuracies at the source, by progressively decreasing the PN accuracy order by order. We demonstrate that, in a counter-intuitive twist, lowering the order of PN accuracy does affect the outcomes of the inference study of the propagation parameters, particularly when dealing with eccentric binaries. We anticipate that such modelling choices run the risk of being a nuisance by giving rise to a large source of bias in the \(\alpha_{T},\alpha_{M}\) estimates.
In order to mitigate this problem, one must therefore perform a more complete and accurate source modelling, namely including higher-order PN eccentric tail and tail-of-tail dependent terms. In addition, the inclusion of spin-dependent terms like individual spins, spin-orbit and spin-spin couplings is expected to introduce precession of the binary orbits, which will also modify the beyond-\(\Lambda\)CDM inference estimates. Refinements would also need to include the current state of the art at 4PN order. In our present work, we have attempted a simplistic Fisher forecast study for beyond-\(\Lambda\)CDM parameters in three situations, namely inference from single events (EMRIs or comparable mass inspirals), studying the effects of PN orders on inferences, and considering population-wide inferences. Our choice of binary configurations was also simplified by ignoring spin and higher-order source effects. A subsequent study would thus have to include these effects at the source and perform a full Bayesian sampling of the multidimensional likelihood function for the beyond-\(\Lambda\)CDM subspace. We leave such an exercise for a future attempt.

## 6 Acknowledgements

This work is a result of early discussions with Ippocratis Saltas and Roberto Oliveri. It has been partly supported by a grant from the Czech Academy of Sciences under Project No. LQ10010210. K.C. acknowledges additional helpful inputs from Ignacy Sawicki, Luc Blanchet, David Trestini, and Georgios Loukes-Gerakopoulos. K.C. also acknowledges the use of the Phoebe cluster at CEICO, FZU, and I.T. support by Josef Dvoracek.
2306.12342
Estimates for more Brascamp-Lieb forms in $L^p$-spaces with power weights
We consider a class of Brascamp-Lieb forms and give conditions which guarantee the boundedness of these forms on $L^p$-spaces with weights that are a power of the distance to the origin. These conditions are close to necessary and sufficient.
Russell M. Brown, Katharine A. Ott
2023-06-21T15:39:31Z
http://arxiv.org/abs/2306.12342v1
# Estimates for more Brascamp-Lieb forms in \(L^{p}\)-spaces with power weights ###### Abstract We consider a class of Brascamp-Lieb forms and give conditions which guarantee the boundedness of these form on \(L^{p}\)-spaces with weights that are a power of the distance to the origin. These conditions are close to necessary and sufficient. ## 1 Introduction The goal of this paper is to give conditions which guarantee the boundedness of certain Brascamp-Lieb forms on weighted \(L^{p}\)-spaces. To describe the forms we study, we let \[E=\{v_{1},\ldots,v_{N}\}\subset{\bf R}^{m}\setminus\{0\}\] be a finite collection of non-zero vectors which does not contain a pair of collinear vectors. We fix \(k\geq 1\) and for \(v=(v^{1},\ldots,v^{m})\in{\bf R}^{m}\) and \(x=(x^{1},\ldots,x^{m})\in{\bf R}^{mk},\) we define \(v\cdot x\in{\bf R}^{k}\) by \(v\cdot x=\sum_{i=1}^{m}v^{i}x^{i}.\) If \(f_{1}\ldots,f_{N}\) are non-negative, measurable functions on \({\bf R}^{k},\) the form we will study is \[\Lambda(f_{1},\ldots,f_{N})=\int_{{\bf R}^{km}}\prod_{i=1}^{N}f_{i}(v_{i}\cdot x )\,dx. \tag{1.1}\] We will use \(L^{p}_{\alpha}({\bf R}^{k})\) to denote the weighted \(L^{p}\)-space with the norm \[\|f\|_{L^{p}_{\alpha}({\bf R}^{k})}=\|\|\cdot|^{\alpha}f\|_{L^{p}({\bf R}^{k})}.\] Going forward, all norms will be over \({\bf R}^{k}\). Our goal, which is only partially achieved, is to characterize the set of indices \((1/p_{j},\lambda_{j})_{j=1}^{N}\) for which we have the estimate \[\Lambda(f_{1},\ldots,f_{N})\leq C\prod_{j=1}^{N}\|f_{j}\|_{L^{p_{j}}_{\lambda_ {j}}}. \tag{1.2}\] The constant \(C\) may depend on the set of vectors \(E\), the dimension \(k\), and the indices \((1/p_{j},\lambda_{j})_{j=1}^{N}\). We will focus most of our attention on the case where \(E=\{v_{1},\ldots v_{N}\}\subset{\bf R}^{m}\) has the property that any subset \(K\subset E\) with cardinality \(\#K=m\) is a basis for \({\bf R}^{m}\). We call such a set \(E\)_generic_. We will establish the following theorem, which gives conditions on the indices \((1/p_{j},\lambda_{j})_{j=1}^{N}\) that guarantee that the bound (1.2) holds. **Theorem 1.3**.: _Let \(E=\{v_{1},\ldots,v_{N}\}\) be a generic set in \({\bf R}^{m}\). Suppose that \((1/p_{j},\lambda_{j})_{j=1}^{N}\in((0,1)\times{\bf R})^{N}\) and that the following list of conditions are true:_ \[\sum_{j=1}^{N}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})=m, \tag{1.4}\] \[\sum_{v_{j}\notin V}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})>m- \dim(V)\quad\mbox{for all non-zero, proper subspaces $V$ of ${\bf R}^{m}$},\] (1.5) \[\sum_{v_{j}\notin V}\lambda_{j}\geq 0\quad\mbox{for $V$ a subspace of ${\bf R}^{m}$},\] (1.6) \[\sum_{j=1}^{N}\frac{1}{p_{j}}\geq 1. \tag{1.7}\] _Then estimate (1.2) holds._ The next result gives necessary conditions which are close to the sufficient conditions in the previous theorem. The set of indices \((1/p_{j},\lambda_{j})_{j=1}^{N}\) which satisfy our necessary conditions in Theorem 1.8 below form a polytope in the hyperplane defined by equation (1.4). The above result, Theorem 1.3, tells us that the estimate (1.2) holds on the interior of this polytope. The estimate may hold on part of the boundary, but we do not investigate results on the boundary here. **Theorem 1.8**.: _Let \(E=\{v_{1},\ldots,v_{N}\}\subset{\bf R}^{m}\) be a set of vectors and assume that no pair of vectors from \(E\) are collinear. 
If (1.2) holds, then the indices \((1/p_{j},\lambda_{j})_{j=1}^{N}\in([0,1]\times{\bf R})^{N}\) satisfy (1.4), (1.7), (1.6), and_ \[\sum_{v_{j}\notin V}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})\geq m-\dim(V) \quad\mbox{for all subspaces $V\subset{\bf R}^{m}$}. \tag{1.9}\] There is a great deal of work on Brascamp-Lieb forms dating back to the original work of Brascamp, Lieb, and Luttinger [5]. Two papers that give recent developments and include more of the history are by Bennett, Carbery, Christ and Tao [3] and Carlen, Lieb and Loss [9]. A good place to start our story is a paper of Barthe [1, Proposition 3] who gives a necessary and sufficient condition on the indices \((1/p_{1},\ldots,1/p_{N})\) for which we have estimate (1.2) in the case when exponents for the weights, \(\lambda_{j}\), are all zero. In fact, he shows that the set of indices \((1/p_{1},\ldots,1/p_{N})\) is the matroid polytope for the set of vectors \(E=\{v_{1},\ldots,v_{N}\}\) (though he does not use this terminology). His condition was generalized by Bennett _et al._ and we make fundamental use of these results in our work. Barthe's work is not restricted to forms constructed using generic sets of vectors and it is an interesting problem to characterize the set of weighted spaces for which we have (1.2) without the assumption that the set of vectors \(E\) is a generic set. A key point in our proof of Theorem 1.3 is an extension of results for Brascamp-Lieb forms from Lebesgue spaces to Lorentz spaces by a real interpolation argument. This idea dates back at least to O'Neil [21] who studies Young's inequality in Lorentz spaces and gives an application to fractional integration. It reappears in several places including Christ [10]. Our work will rely on results of Bez, Lee, Nakamura and Sawano [4] who study Brascamp-Lieb forms on Lorentz spaces. Our interest in Brascamp-Lieb forms arose from the study of a scattering map for a first order system in the plane. This map originated in work of Beals and Coifman [2] and Fokas and Ablowitz [11] who observed that the scattering map transforms solutions of the Davey-Stewartson equations to solutions of a linear evolution equation. In the work of the first author [6], we expand the scattering map in a series of multi-linear expressions and use estimates for certain Brascamp-Lieb forms to establish that the series is convergent in a neighborhood of zero in \(L^{2}\). In his dissertation [19, 20], Z. Nie gave a proof of this result that relied on multi-linear interpolation. The works of Perry (with an appendix by M. Christ) [22] and Brown, Ott and Perry (with an appendix by N. Serpico) [8] give estimates for the map on weighted Sobolev spaces. The recent work of Nachman, Regev, and Tataru [18] establishes that the map is continuous on all of \(L^{2}\), but does not rely on our approach using Brascamp-Lieb forms. Motivated by this body of work and especially the results of Brown, Ott, and Perry, we became interested in the question of characterizing the complete set of \(L^{p}\)-spaces with power weights for which we have the bound (1.2). A first step in this direction appears in our work with Lee [7] where we consider forms constructed using generic sets of vectors in \({\bf R}^{2}\). Our results in [7] are close to optimal in the sense that we find a closed polytope containing the set of indices \((1/p_{j},\lambda_{j})_{j=1}^{N}\) for which we have (1.2), and we show that in the interior of this polytope we have estimate (1.2). 
In the current paper, we extend these methods from \({\bf R}^{2}\) to handle forms based on generic sets of vectors in \({\bf R}^{m}\) with \(m>2\). The polytope we find lies in a hyperplane in \({\bf R}^{2N}\) and by interior we mean the interior in the relative topology for this hyperplane. The methods developed over the arc of our works can be used to establish the estimate (1.2) for certain indices when the set of vectors do not satisfy the generic condition. In fact, going back to the results of Brown, Ott, and Perry [8], the form studied there is defined using non-generic sets of vectors and we were successful in finding estimates for the form. However, when considering arbitrary sets of vectors \(E\subset{\bf R}^{m}\), our methods are only successful if we have the additional condition that \(E\) is a generic set. To be more precise about this limitation, in the last section of this paper we give an example of a form where there are points in the interior of the polytope defined by Theorem 1.3 for which a straightforward extension of our argument used in the generic case fails to establish (1.2). Using duality, the estimates for Brascamp-Lieb forms (at least for the range of indices \(p\) that we consider) are equivalent to estimates for multi-linear operators. A good place to start this part of the story is a theorem of Stein and Weiss [23], which gives optimal results for the action of the Riesz potential of order \(\beta\) on spaces \(L^{p}_{\alpha}\). Using duality, we can see that estimates for this operator are equivalent to weighted estimates for the form \[\int_{{\bf R}^{2k}}\frac{f(x)g(y)}{|x-y|^{k-\beta}}\,dxdy.\] In our earlier work [7] we showed how to obtain the Stein-Weiss result from the case \(m=2\) of our Theorem on multilinear forms defined using generic sets of vectors in \({\bf R}^{2}\). Work of Grafakos [12] considers an \(n\)-linear fractional integration operator in the unweighted case. Grafakos's operator may be related to a form constructed using a set of generic vectors in \({\bf R}^{2}\). Grafakos and Kalton [13, Section 5] consider a multi-linear form involving a set of vectors that is not generic. Kenig and Stein [15] consider a multilinear fractional integral and give results when their operator maps into an \(L^{p}\)-space with \(p<1\). This is a result that cannot be obtained by using duality and appealing to estimates for forms. There are a number of authors who have considered weighted estimates for bilinear fractional integrals. The author Komori-Furuya [16] has studied these operators on the spaces \(L^{p}_{\alpha}\) we consider here. Moen [17] gives conditions on general weights which guarantee boundedness of a bilinear fractional integral. In comparison, our work considers less general weights, but gives results that are close to optimal. An interesting open problem is to find conditions on weights that allow us to establish the finiteness of Brascamp-Lieb forms for more general classes of weighted spaces. The outline of this paper is as follows. In section 2 we prove the conditions which guarantee the finiteness of the form and (1.2). Section 3 is devoted to the proofs of the necessary conditions and section 4 gives several examples illustrating our results and the limitations of our methods. We thank our colleague, C.W. Lee for suggesting that we study Brascamp-Lieb forms constructed using generic sets of vectors. 
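To fix ideas before turning to the proofs, we record a quick check of the hypotheses of Theorem 1.3 in the simplest non-trivial case; the particular choice of exponents below is ours and is included only as an illustration. Take \(m=2\), \(N=3\), \(E=\{e_{1},e_{2},e_{1}+e_{2}\}\) (a generic set), \(k\geq 1\), and \(p_{1}=p_{2}=p_{3}=p\), \(\lambda_{1}=\lambda_{2}=\lambda_{3}=\lambda\geq 0\). Condition (1.4) reads \(3(1/p+\lambda/k)=2\); since every non-zero proper subspace \(V\) of \({\bf R}^{2}\) is a line containing at most one \(v_{j}\), condition (1.5) becomes \(2(1/p+\lambda/k)=4/3>1=m-\dim(V)\), which holds automatically; (1.6) is immediate since \(\lambda\geq 0\); and (1.7) asks \(3/p\geq 1\). Thus, for example, \(p=2\) and \(\lambda=k/6\) give
\[\int_{{\bf R}^{2k}}f_{1}(x^{1})f_{2}(x^{2})f_{3}(x^{1}+x^{2})\,dx\leq C\,\|f_{1}\|_{L^{2}_{k/6}}\|f_{2}\|_{L^{2}_{k/6}}\|f_{3}\|_{L^{2}_{k/6}},\]
while \(\lambda=0\) forces \(p=3/2\) and recovers a familiar unweighted Young-type bound.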
## 2 Sufficient conditions

In this section, we give the proof that the conditions in Theorem 1.3 are sufficient to imply the estimate (1.2). The proof will proceed in four steps of increasing complexity. First, we prove the theorem in the case when \(\lambda_{j}=0\) for all \(j\) and \(k=1\). Next, we remove the restriction on \(k\). The third step is to prove the result under the assumption that all \(\lambda_{j}\geq 0\). The final step is to show that when \(\lambda_{j}<0\) for at least one index \(j\) we may reduce to the case where all \(\lambda_{j}\geq 0\). Before beginning the proof, we recall a few facts about the Lorentz spaces, \(L^{p,r}\), where \(1\leq p<\infty\) and \(1\leq r\leq\infty\). A definition of these spaces may be found in Stein and Weiss [24, p. 191]. This family of spaces includes the familiar \(L^{p}\)-spaces as \(L^{p}=L^{p,p}\), and these spaces arise as real interpolation spaces of the \(L^{p}\)-spaces. An observation of Calderon is that the spaces \(L^{p,r}\) may be normed if \(1<p<\infty\) and \(1\leq r\leq\infty\) (and of course when \(p=r=\infty\), \(L^{\infty,\infty}=L^{\infty}\)). Imitating the definition of \(L^{p}_{\alpha}\), we define weighted Lorentz spaces \(L^{p,r}_{\alpha}\) as the collection of functions \(f\) for which \(|\cdot|^{\alpha}f\) lies in \(L^{p,r}\). We recall the extension of Holder's inequality to Lorentz spaces due to O'Neil [21, Theorem 3.4]. Suppose \(p\in(1,\infty)\), \(p_{1},p_{2}\in(1,\infty)\), \(1/p=1/p_{1}+1/p_{2}\), \(r,r_{1},r_{2}\in[1,\infty]\), and \(1/r\leq 1/r_{1}+1/r_{2}\). Then there is a finite constant \(C\) such that \[\|f_{1}f_{2}\|_{L^{p,r}}\leq C\|f_{1}\|_{L^{p_{1},r_{1}}}\|f_{2}\|_{L^{p_{2},r _{2}}}. \tag{2.1}\] One useful observation is that for \(0<\lambda<k\), the function \(|x|^{-\lambda}\) lies in the space \(L^{k/\lambda,\infty}({\bf R}^{k})\), and in fact if \(0<\lambda-\alpha<k\) we have \[|x|^{-\lambda}\in L^{k/(\lambda-\alpha),\infty}_{\alpha}({\bf R}^{k}). \tag{2.2}\] We begin with the case when \(k=1\) and \(\lambda_{j}=0\) for all \(j\). Under this restriction, provided that (1.4) and (1.5) hold, the theorem of Bennett, Carbery, Christ, and Tao [3, Theorem 2.1] implies that (1.2) holds. Next, Proposition 2.1 from our previous work [7] shows that the boundedness of the form for \(k>1\) follows from the case where \(k=1\). From here, a result of Bez _et al._[4, Theorem 1] allows us to conclude that we have an estimate for the form in Lorentz spaces. In particular, if the indices \((1/p_{j},0)_{j=1}^{N}\in(0,1)^{N}\times\{0\}^{N}\) satisfy (1.4), (1.5) and the indices \((1/r_{1},\ldots,1/r_{N})\) satisfy \(\sum_{j=1}^{N}1/r_{j}\geq 1\), then we have \[\Lambda(f_{1},\ldots,f_{N})\leq C\prod_{j=1}^{N}\|f_{j}\|_{L^{p_{j},r_{j}}}. \tag{2.3}\] The result of Bez _et al._ depends on a multi-linear interpolation argument which may be found in work of M. Christ [10] or S. Janson [14]. Next, suppose that \(\lambda_{j}\geq 0\) for all \(j\). Set \[\frac{1}{r_{j}}=\frac{1}{p_{j}}+\frac{\lambda_{j}}{k},\] and observe that for the one-dimensional space \(\mathbf{R}v_{j}\) we have \[\big{\{}v_{i}:v_{i}\notin\mathbf{R}v_{j}\big{\}}=\big{\{}v_{1},\ldots,v_{N}\big{\}} \setminus\{v_{j}\}.\] In this case, the conditions (1.4) and (1.5) imply \[\frac{1}{p_{j}}+\frac{\lambda_{j}}{k}=m-\sum_{i\neq j}(\frac{1}{p_{i}}+\frac{ \lambda_{i}}{k})<m-(m-1)=1. \tag{2.4}\] Also from the assumptions that \(1/p_{j}\in(0,1)\) and \(\lambda_{j}\geq 0\) for all \(j\), it immediately follows that \(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k}>0\).
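For completeness, we record the elementary computation behind (2.2); it is a routine verification included only for the reader's convenience. For \(0<\lambda-\alpha<k\) and \(t>0\),
\[|\{x\in{\bf R}^{k}:|x|^{\alpha}|x|^{-\lambda}>t\}|=|\{x:|x|<t^{-1/(\lambda-\alpha)}\}|=\omega_{k}\,t^{-k/(\lambda-\alpha)},\]
where \(\omega_{k}\) is the volume of the unit ball, so that
\[\sup_{t>0}\,t\,|\{|x|^{\alpha-\lambda}>t\}|^{(\lambda-\alpha)/k}=\omega_{k}^{(\lambda-\alpha)/k}<\infty,\]
which is exactly the statement that \(|x|^{-\lambda}\in L^{k/(\lambda-\alpha),\infty}_{\alpha}({\bf R}^{k})\). Taking \(\alpha=0\) gives the membership \(|x|^{-\lambda}\in L^{k/\lambda,\infty}({\bf R}^{k})\) used below.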
Finally, employing (2.3), the generalized Holder inequality (2.1) and the observation (2.2) (with \(\alpha=0\)) we have \[\Lambda(f_{1},\ldots,f_{N}) \leq C\prod_{j=1}^{N}\|f_{j}\|_{L^{r_{j},p_{j}}}\] \[\leq C\prod_{j=1}^{N}\|f_{j}|\cdot|^{\lambda_{j}}\|_{L^{p_{j},p_{ j}}}\||\cdot|^{-\lambda_{j}}\|_{L^{k/\lambda_{j},\infty}}=C\prod_{j=1}^{N}\|f_{j} \|_{L^{p_{j}}_{\lambda_{j}}}\] which gives (1.2) in the case that all \(\lambda_{j}\geq 0\). The next step is to prove estimate (1.2) when \(\lambda_{j}<0\) for some \(j\). We will proceed by induction on the number of indices \(j\) for which \(\lambda_{j}<0\). The following technical Lemma gives most of the details of the induction step. **Lemma 2.5**.: _Given a family \((1/p_{j},\lambda_{j})_{j=1}^{N}\) satisfying (1.4-1.7), and \(1/p_{j}\in(0,1)\), with at least one \(j\) with \(\lambda_{j}<0\), let \(j_{0}\) be an index so that \(\lambda_{j_{0}}=\min\{\lambda_{j}:j=1,\ldots,N\}\). We may find a finite family \(\{\beta^{(\alpha)}\}_{\alpha\in\zeta}\) so that \((1/p_{j},\beta^{(\alpha)}_{j})_{j=1}^{N}\) satisfy (1.4-1.7). The vectors of exponents \(\beta^{(\alpha)}\) are indexed by a family of multi-indices \(\zeta\subset\{0,1,\ldots,N\}^{\ell-m+1}\) where \(\ell\) is the number of positive entries in \(\lambda\). Moreover, \((1/p_{j},\beta^{(\alpha)}_{j})\) satisfy_ \[\beta^{(\alpha)}_{j_{0}}=0\quad\text{for all}\;\;\alpha\in\zeta, \tag{2.6}\] \[\beta^{(\alpha)}_{j}=\lambda_{j}\quad\text{if}\;\;\lambda_{j}<0 \text{ and }j\neq j_{0},\] (2.7) \[0\leq\beta^{(\alpha)}_{j}\leq\lambda_{j}\quad\text{if}\;\;\lambda _{j}\geq 0. \tag{2.8}\] _Also, we have_ \[\Lambda(f_{1},\ldots,f_{N})\leq C\sum_{\alpha\in\zeta}\Lambda(|\cdot|^{\lambda _{1}-\beta^{(\alpha)}_{1}}f_{1},\ldots,|\cdot|^{\lambda_{N}-\beta^{(\alpha)}_ {N}}f_{N}). \tag{2.9}\] If we grant the Lemma, then the proof by induction is quite easy. We have already established the base case when all the exponents \(\lambda_{j}\) are non-negative. To proceed by induction, we assume that we have Theorem 1.3 when \(J\) of the exponents \(\lambda_{j}\) are negative. If \(\lambda\) is a vector of exponents with \(J+1\) negative entries and so that \((1/p_{j},\lambda_{j})_{j=1}^{N}\) satisfy the conditions of Theorem 1.3, then using the family \(\{\beta^{(\alpha)}:\alpha\in\zeta\}\) from Lemma 2.5, we have \[\Lambda(f_{1},\ldots,f_{N}) \leq\sum_{\alpha\in\zeta}\Lambda(|\cdot|^{\lambda_{1}-\beta_{1}^{ (\alpha)}}f_{1},\ldots,|\cdot|^{\lambda_{N}-\beta_{N}^{(\alpha)}}f_{N})\] \[\leq C\sum_{\alpha\in\zeta}\prod_{j=1}^{N}\||\cdot|^{\lambda_{j}- \beta_{j}^{(\alpha)}}f_{j}|\|_{L_{\beta_{j}^{(\alpha)}}^{p_{j}}}=C\prod_{j=1}^ {N}\|f_{j}\|_{L_{\lambda_{j}}^{p_{j}}}.\] Thus, establishing Lemma 2.5 will complete the proof of Theorem 1.3. To prove Lemma 2.5, we will make use of the following lemmata which give alternative characterizations of (1.5) and (1.6) in the case of generic sets of vectors. **Lemma 2.10**.: _Let \(E=\{v_{1},\ldots,v_{N}\}\) be a generic set in \(\mathbf{R}^{m}\) and let \((1/p_{j},\lambda_{j})_{j=1}^{N}\in(0,1)^{N}\times\mathbf{R}^{N}\). The following conditions on these indices are equivalent:_ \[\sum_{v_{j}\notin W}\lambda_{j} \geq 0 \text{for all proper subspaces }W\subset\mathbf{R}^{m}, \tag{2.11}\] \[\sum_{v_{j}\notin K}\lambda_{j} \geq 0 \text{for all }K\subset E\,\text{such that }\,\#K\leq m-1. \tag{2.12}\] Proof.: First we show that (2.12) implies (2.11). Given \(W\) a proper subspace of \(\mathbf{R}^{m}\), set \(K=\{v_{\ell}:v_{\ell}\in W\}\). 
Then \(\#K\leq\dim W\leq m-1\) and from (2.12) we obtain \[\sum_{v_{i}\notin W}\lambda_{i}=\sum_{v_{i}\notin K}\lambda_{i}\geq 0. \tag{2.11}\] For the other implication, assume that (2.11) holds. Given \(K\), set \(W=\text{span}(K)\) which will be a proper subspace of \(\mathbf{R}^{m}\) since \(\#K\leq m-1\). Thanks to our assumption that the set \(E=\{v_{1},\ldots,v_{N}\}\) is generic and that \(\#K\leq m-1\), it follows that \(W\cap E=K\). Then it follows that \[\sum_{v_{i}\notin K}\lambda_{i}=\sum_{v_{i}\notin W}\lambda_{i}\geq 0.\] In our next Lemma, we need to avoid the cases when \(K\) is empty or all of \(E\) and we will have equality in (2.15). **Lemma 2.13**.: _Let \(E=\{v_{1},\ldots,v_{N}\}\) be a generic set in \(\mathbf{R}^{m}\) and suppose \((1/p_{j},\lambda_{j})_{j=1}^{N}\in(0,1)^{N}\times\mathbf{R}^{N}\) Then the following two statements are equivalent:_ \[\sum_{v_{j}\notin V}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})>m-\dim(V)\quad \text{for all subspaces }V\text{ with }1\leq\dim(V)\leq m-1 \tag{2.14}\] \[\sum_{v_{j}\notin K}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})>m-\#K\quad\text{ for all }\,K\subset E\,\text{with }1\leq\#K\leq m-1. \tag{2.15}\] Proof.: First, to prove that (2.14) implies (2.15), let \(K\subset E\) with \(1\leq\#K\leq m-1\). Let \(V=\operatorname{span}(K)\) and use the assumption that \(E\) is a generic set to conclude \(\dim(V)=\#K\) and \(\{v_{j}:v_{j}\notin V\}=E\setminus K\) Thus \[\sum_{v_{j}\notin K}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})=\sum_{v_{j}\notin V }(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})>m-\dim(V)=m-\#K.\] Now suppose that we have (2.15) and let \(V\subset\mathbf{R}^{m}\) be a proper subspace. Set \(K=\{v_{j}:v_{j}\in V\}\). By the generic condition, \(\#K\leq\dim(V)\) which implies that \(m-\dim(V)\leq m-\#K\). Thus in the case that \(\#K\geq 1\), (2.14) follows from (2.15). If \(\#K=0\) and \(\dim V\geq 1\), we may use (2.15) for the set \(K=\{v_{1}\}\) and find that there is at least one \(j\) for which \(1/p_{j_{0}}+\lambda_{j_{0}}/k\geq 0\). Then \[\sum_{j=1}^{N}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})\geq\sum_{j\neq j_{0}}( \frac{1}{p_{j}}+\frac{\lambda_{j}}{k})>m-1\geq m-\dim V.\] Thus (2.14) follows from (2.15). Proof of Lemma 2.5.: For convenience, we assume that the vectors \(v_{j}\) are labeled so that the exponents \(\lambda\) are in decreasing order and \(\ell\) is the last index for which \(\lambda_{\ell}>0\): \[\lambda_{1}\geq\cdots\geq\lambda_{\ell}>0\geq\lambda_{\ell+1}\geq\cdots\geq \lambda_{N}.\] With this re-indexing our goal is find a family \(\{\beta^{(\alpha)}\}\) with \(\beta^{(\alpha)}_{N}=0\). To begin, we use (2.12) with \(K=\{v_{1},\ldots,v_{m-1}\}\) to obtain that \[\lambda_{N}+\sum_{i=m}^{\ell}\lambda_{i}\geq\sum_{i=m}^{N}\lambda_{i}\geq 0. \tag{2.16}\] Thus it follows that \[|\lambda_{N}|\leq\sum_{i=m}^{\ell}\lambda_{i}. \tag{2.17}\] Now we will proceed by giving a construction involving at most \(\ell-m+1\) steps to produce the family \(\{\beta^{(\alpha)}\}\subset\mathbf{R}^{N}\). The vectors \(\beta^{(\alpha)}\) will be indexed by multi-indices \(\alpha\in\{0,1,\ldots,N\}^{\ell-m+1}\). 
To begin, we set \(\beta^{(0)}=\lambda\) and then set \[\gamma_{0}=\min(\lambda_{1},\ldots,\lambda_{m},|\lambda_{N}|)=\min(\lambda_{m},|\lambda_{N}|).\] Now define \[\beta^{(j_{1},0)} =(\lambda_{1},\ldots,\lambda_{j_{1}}-\gamma_{0},\ldots,\lambda_{ m+1},\ldots,\lambda_{N}+\gamma_{0})\] \[=\lambda-\gamma_{0}e_{j_{1}}+\gamma_{0}e_{N},\] and set \(\zeta_{1}=\{\beta^{(j_{1},0)}:j_{1}=1,\ldots,m\}.\) It is clear that for each \(\alpha\in\zeta_{1}\), \((1/p_{j},\beta^{(\alpha)}_{j})_{j=1}^{N}\) satisfies (1.4) and (1.7). It remains to show that (1.5) and (1.6) hold. We will use Lemma 2.10 to show that \(\beta^{(j_{1},0)}\) satisfies (1.6). Thus let \(K\subset E\) be a set with \(\#K\leq m-1\) and consider the four cases according to whether or not \(v_{j_{1}}\) and \(v_{N}\) lie in \(K\). If both \(v_{j_{1}},v_{N}\) lie in \(K\), or both lie in \(E\setminus K\), then \[\sum_{v_{j}\notin K}\beta^{(j_{1},0)}_{j}=\sum_{v_{j}\notin K}\lambda_{j}\geq 0.\] If \(v_{N}\notin K\) and \(v_{j_{1}}\in K\), then \[\sum_{v_{j}\notin K}\beta^{(j_{1},0)}_{j}=\gamma_{0}+\sum_{v_{j}\notin K} \lambda_{j}\geq\gamma_{0}+0\geq 0.\] Finally, the most interesting case is when \(v_{N}\in K\) and \(v_{j_{1}}\notin K\). In this scenario, \[\sum_{v_{j}\notin K}\beta^{(j_{1},0)}_{j}=\lambda_{j_{1}}-\gamma_{0}-\lambda_ {N}+\sum_{v_{j}\notin(K\setminus\{v_{N}\})\cup\{v_{j_{1}}\})}\lambda_{j}\geq 0.\] The inequality above holds since \(\lambda_{j_{1}}-\gamma_{0}\geq 0\) and \(-\lambda_{N}\geq 0\). Thus we have verified that (1.6) holds for \(\beta^{(j_{1},0)}\), and now we proceed with a similar argument using Lemma 2.13 to show that (1.5) is also true. For this, again we consider four cases. The cases \(\{v_{j_{1}},v_{N}\}\subset K\) and \(\{v_{j_{1}},v_{N}\}\subset K^{c}\) are straightforward since in these cases we have: \[\sum_{v_{j}\notin K}(\frac{1}{p_{j}}+\frac{\beta^{(j_{1},0)}_{j}}{k})=\sum_{v _{j}\notin K}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})\geq m-\#K.\] In the third case, \(v_{j_{1}}\in K,v_{N}\notin K\), and \[\sum_{v_{j}\notin K}(\frac{1}{p_{j}}+\frac{\beta^{(j_{1},0)}_{j}}{k})=\frac{ \gamma_{0}}{k}+\sum_{v_{j}\notin K}(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k})> \gamma_{0}+m-\#K.\] Finally, if \(v_{j_{1}}\notin K\) and \(v_{N}\in K\), \[\sum_{v_{j}\notin K}(\frac{1}{p_{j}}+\frac{\beta^{(j_{1},0)}_{j}}{k})=m-\sum_{ v_{j}\in K}(\frac{1}{p_{j}}+\frac{\beta^{(j_{1},0)}_{j}}{k})>m-\#K.\] The last inequality above follows if we can prove the claim that \[\frac{1}{p_{j}}+\frac{\beta^{(j_{1},0)}_{j}}{k}<1\quad\text{for}\;\;v_{j}\in K. \tag{2.18}\] To see that (2.18) holds, consider two cases: \(v_{j}\in K\setminus\{v_{N}\}\), and \(v_{j}=v_{N}\). In the first case, \[\frac{1}{p_{j}}+\frac{\beta_{j}^{(j_{1},0)}}{k} \leq\frac{1}{p_{j}}+\frac{\lambda_{j}}{k}\] \[=\sum_{i=1}^{N}(\frac{1}{p_{i}}+\frac{\lambda_{i}}{k})-\sum_{i\neq j }(\frac{1}{p_{i}}+\frac{\lambda_{i}}{k})\] \[<m-(m-1)=1.\] The last line above follows from (1.4) and (1.5). In the second case, when \(j=N\), \[\frac{1}{p_{N}}+\frac{\beta_{N}^{(j_{1},0)}}{k}\leq\frac{1}{p_{N}}<1,\] since \(\beta_{N}^{(j_{1},0)}\leq 0\) and we assume that \(1/p_{N}<1\). Having verified (1.5) for each \(\beta^{(j_{1},0)}\in\zeta_{1}\), we move on to the task of establishing the inequality (2.9). Using our assumption that the set \(E=\{v_{1},\ldots,v_{N}\}\) is generic, we have that \(\{v_{1},\ldots,v_{m}\}\) is a basis for \(\mathbf{R}^{m}\). Therefore we can express \(v_{N}=\sum_{j=1}^{m}\alpha_{j}v_{j}\). 
Since \(\gamma_{0}>0\), we have \[|x\cdot v_{N}|^{\gamma_{0}}\leq C\sum_{j=1}^{m}|x\cdot v_{j}|^{\gamma_{0}}.\] Inserting this inequality into the form gives \[\begin{split}\Lambda(f_{1},\ldots,f_{N})&=\Lambda(f _{1},\ldots,\frac{|x\cdot v_{N}|^{\gamma_{0}}}{|x\cdot v_{N}|^{\gamma_{0}}}f_{N} )\\ &\leq C\sum_{j=1}^{m}\Lambda(f_{1},\ldots,|x\cdot v_{j}|^{\gamma_{ 0}}f_{j},\ldots,|x\cdot v_{N}|^{-\gamma_{0}}f_{N})\\ &=C\sum_{j=1}^{m}\Lambda(|x\cdot v_{1}|^{\lambda_{1}-\beta_{1}^{( j,0)}}f_{1},\ldots,|x\cdot v_{N}|^{\lambda_{N}-\beta_{N}^{(j,0)}}f_{N}).\end{split} \tag{2.19}\] This is the estimate (2.9). We should also note that for each \(\beta^{(j_{1},0)}\in\zeta_{1}\), we have \(\beta_{N}^{(j_{1},0)}=\min(0,\lambda_{N}+\lambda_{m})\). If \(\beta_{N}^{(j_{1},0)}=0\), we are done. Otherwise, we repeat the construction as follows: For each \(\beta^{(j_{1},0)}\in\zeta_{1}\), define \(\beta^{(j_{1},j_{2},0)}\), with \(j_{2}\in\{1,\ldots,m\}\setminus\{j_{1}\}\) by setting \[\beta^{(j_{1},j_{2},0)}=\beta^{(j_{1},0)}-\gamma_{1}e_{j_{2}}+\gamma_{1}e_{N},\] where \(\gamma_{1}=\min(\lambda_{m+1},|\lambda_{N}+\gamma_{0}|)\). Arguing as above, we have \((\frac{1}{p_{j}},\beta^{(j_{1},j_{2},0)})\) satisfies (1.4), (1.5), (1.6), and (1.7). Moreover, \[\beta_{N}^{(j_{1},j_{2},0)}=\min(0,\lambda_{m}+\lambda_{m+1}+\lambda_{N}).\] Set \(\zeta_{2}=\left\{\beta^{(j_{1},j_{2},0)}:j_{1}\in\{1,\ldots,m\},j_{2}\in\{1,\ldots,m+1\}\setminus\{j_{1}\}\right\}\). Continuing in this manner, (2.17) guarantees that we have \(\beta^{(\alpha)}_{N}=0\) for all \(\beta^{(\alpha)}\in\zeta_{i}\) for some \(i\leq\ell-m+1\). This completes the proof of the Lemma 2.5. ## 3 Necessary Conditions In this section we prove Theorem 1.8. It is worth noting that the results in this section do not require the condition that the set of vectors \(E\) be generic. We begin with a simple, technical Lemma. **Lemma 3.1**.: _If \(V\) and \(W\) are vector subspaces of \(\mathbf{R}^{m}\) with \(V\subset W\) and \(\{v_{1},\ldots,v_{\ell}\}\subset W\setminus V\), then there exists \(w\in W\cap V^{\perp}\) so that \(w\cdot v_{j}\neq 0,j=1,\ldots,\ell\)._ Before giving the proof, we introduce some additional notation. In the argument below, it will be useful to consider \(\mathbf{R}^{mk}\) as a tensor product, \(\mathbf{R}^{m}\otimes\mathbf{R}^{k}\). Thus, if \(x=(x^{1},\ldots,x^{m})\in\mathbf{R}^{mk}\) with each \(x^{j}\in\mathbf{R}^{k}\), we can write \(x=\sum_{j=1}^{m}e_{j}\otimes x^{j}\) with \(e_{j}\) denoting the unit vector in the direction of the \(j\)th coordinate axis. From this it is clear that \(\mathbf{R}^{m}\otimes\mathbf{R}^{k}\) is spanned by elements of the form \(y\otimes z\) with \(y\in\mathbf{R}^{m}\) and \(z\in\mathbf{R}^{k}\). With this notation, our map \(v\cdot x\) can be defined on products by \(v\cdot(y\otimes z)=(v\cdot y)z\) and then we use linearity to extend the definition to all of \(\mathbf{R}^{m}\otimes\mathbf{R}^{k}\). Proof.: We inductively define \(w_{\alpha}\) for \(\alpha=1,\ldots,\ell\) so that \[w_{\alpha}\cdot v_{i}\neq 0,\quad i=i,\ldots,\alpha\;\;\text{and}\;\;w_{\alpha} \in W\cap V^{\perp}.\] At the conclusion, we will set \(w=w_{\ell}\). To begin, let \(w_{1}=v_{1}^{\perp}\), the projection of \(v_{1}\) onto \(V^{\perp}\). Next, given \(w_{\alpha}\), if \(w_{\alpha}\cdot v_{\alpha+1}\neq 0\) then set \(w_{\alpha+1}=w_{\alpha}\). If, on the other hand, \(w_{\alpha}\cdot v_{\alpha+1}=0\), set \(w_{\alpha+1}=cw_{\alpha}+v_{\alpha+1}^{\perp}\). 
Here, we choose \(c\) so that \(|v_{j}\cdot w_{\alpha+1}|\geq 1\), \(j=1,\ldots,\alpha\). For example, one can choose \[c=\frac{1+\max\{|v_{\alpha+1}^{\perp}\cdot v_{j}|:j=1,\ldots,\alpha\}}{\min\{| w_{\alpha}\cdot v_{j}|:j=1,\ldots,\alpha\}}.\] This completes the construction. Proof of Theorem 1.8.: We begin by applying Lemma 3.1 to the pair of subspaces \(\{0\}\subset\mathbf{R}^{m}\) to find \(w_{1}\in\{0\}^{\perp}=\mathbf{R}^{m}\) and satisfying \[w_{1}\cdot v_{j}\neq 0,\qquad j=1,\ldots,N.\] Next, for \(w\in\mathbf{R}^{m}\), \(u\in\mathbf{R}^{k}\), we define \(w\otimes u=(w^{1}u,w^{2}u,\ldots,w^{m}u)\in\mathbf{R}^{mk}\). Assume \(|u|=1\) and then set \[S_{R}=B^{mk}(Rw_{1}\otimes u,\epsilon R),\] where the superscript on \(B\) is included to make clear that we have a ball in \({\bf R}^{mk}\). We can choose \(\epsilon>0\) small so that \[c_{1}R\leq|v_{j}\cdot x|\leq c_{2}R,\qquad x\in S_{R},\ j=1,\ldots,N.\] This follows since \[|v_{j}\cdot Rw_{1}\otimes u-v_{j}\cdot x|\leq|v_{j}|\epsilon R\] and \[|v_{j}\cdot Rw_{1}\otimes u|=R|v_{j}\cdot w_{1}||u|=R|v_{j}\cdot w_{1}|.\] If we let \[f_{j}(y)=\chi_{[c_{1},c_{2}]}\left(\frac{|y|}{R}\right),\quad y\in{\bf R}^{k},\] then \(f_{j}(v_{j}\cdot x)=1\) if \(x\in S_{R}\) and \(\|f_{j}\|_{L^{p_{j}}_{\lambda_{j}}}\approx R^{k/p_{j}+\lambda_{j}}\). Thus if (1.2) holds with the indices \((1/p_{j},\lambda_{j})_{j=1}^{N}\), we have \[cR^{mk}\leq\int_{{\bf R}^{mk}}\prod_{j=1}^{N}f_{j}(v_{j}\cdot x)\,dx\leq C\prod _{j=1}^{N}R^{k/p_{j}+\lambda_{j}}.\] Since this holds for all \(R\in(0,\infty)\), we can conclude \[m=\sum_{j=1}^{N}\left(\frac{1}{p_{j}}+\frac{\lambda_{j}}{k}\right),\] and so (1.4) is proved. We turn to the proof of (1.9). To this end, let \(V\subset{\bf R}^{m}\) be a subspace. To begin we apply Lemma 3.1 to the pair of subspaces \(\{0\}\subset V\) to find \(w_{1}\in V\) for which \[w_{1}\cdot v_{j}\neq 0,\quad v_{j}\in V.\] If \(\{v_{j}:v_{j}\notin V\}\) is a nonempty set, then we find \(w_{2}\in V^{\perp}\) so that \[w_{2}\cdot v_{j}\neq 0,\quad v_{j}\notin V.\] This time around, we set \[S_{R}=B^{mk}(w_{1}\otimes u,\epsilon)\cap(V\otimes{\bf R}^{k})+B^{mk}(Rw_{2} \otimes u,\epsilon R)\cap(V^{\perp}\otimes{\bf R}^{k}).\] Here, \(V\otimes{\bf R}^{k}=\{x:x=v\otimes y,v\in V,y\in{\bf R}^{k}\}\) and \(u\in{\bf R}^{k}\) is a unit vector. Then we have \[|S_{R}|\approx R^{k\dim(V^{\perp})}.\] If we choose \(\epsilon>0\) small and \(R_{0}\) large, we can find \(c_{1},c_{2}\) so that \[c_{1}\leq|v_{j}\cdot x|\leq c_{2},\quad v_{j}\in V,x\in S_{R},\] and \[Rc_{1}\leq|v_{j}\cdot x|\leq c_{2}R,\quad v_{j}\notin V,\,R>R_{0},\,x\in S_{R}.\] Thus, with \(f_{j}\) defined as \[f_{j}(y)=\begin{cases}\chi_{[c_{1},c_{2}]}(|y|),&v_{j}\in V,\\ \chi_{[c_{1},c_{2}]}(|y|/R),&v_{j}\notin V,\end{cases}\] then it follows that \(f_{j}(v_{j}\cdot x)=1\) if \(x\in S_{R}\) and \(R>R_{0}\). In total, the boundedness of the form implies \[R^{k\dim(V^{\perp})} \leq C\int_{S_{R}}\prod_{j=1}^{N}f_{j}(v_{j}\cdot x)\,dx\] \[\leq C\prod_{j=1}^{N}\|f_{j}\|_{L^{p_{j}}_{\lambda_{j}}}\] \[\leq C\prod_{v_{j}\notin V}R^{k/p_{j}+\lambda_{j}}.\] Since this inequality holds for all \(R>R_{0}\), we can conclude that \[\sum_{v_{j}\notin V}\frac{k}{p_{j}}+\lambda_{j}\geq\dim(V^{\perp}).\] Finally, we establish (1.6), the inequality for the \(\lambda_{j}\)'s. Again, fix a subspace \(V\subset\mathbf{R}^{m}\). 
Choose \(w_{1},w_{2}\) as before: \(w_{1}\in V\) so that \(w_{1}\cdot v_{j}\neq 0\) for all \(v_{j}\in V\), and \(w_{2}\in V^{\perp}\) so that \(w_{2}\cdot v_{j}\neq 0\) for all \(v_{j}\notin V\). Set \[w_{N}=(w_{1}+Nw_{2})\otimes u,\] and then define \[S_{N}=B^{mk}(w_{N},\epsilon).\] Here, \(u\in\mathbf{R}^{k}\) is a unit vector as usual. For \(\epsilon>0\) small, we have \[c_{1}\leq|v_{j}\cdot x|\leq c_{2},\quad x\in S_{N},\,v_{j}\in V,\] and if \(N\) is large, say \(N>N_{0}\), \[|v_{j}\cdot x-v_{j}\cdot w_{N}|\leq c\epsilon,\quad x\in S_{N}.\] This time around, set \[f_{j}(y)=\begin{cases}\chi_{[c_{1},c_{2}]}(|y|),&v_{j}\in V,\\ \chi_{B(v_{j}\cdot w_{N},c\epsilon)}(y),&v_{j}\notin V.\end{cases}\] Then it follows that \[\prod_{j=1}^{N}f_{j}(v_{j}\cdot x)=1,\quad\text{if}\,\,\,x\in S_{N},\,N>N_{0},\] and \(\|f_{j}\|_{L^{p_{j}}_{\lambda_{j}}}\approx 1\) if \(v_{j}\in V\) and \(\|f_{j}\|_{L^{p_{j}}_{\lambda_{j}}}\approx N^{\lambda_{j}}\) if \(v_{j}\notin V\). Thus for \(N\geq N_{0}\), the boundedness of the form implies \[C\leq\int_{\mathbf{R}^{mk}}\prod_{j=1}^{N}f_{j}(v_{j}\cdot x)\,dx\leq\prod_{j=1}^{N}\|f_{j}\|_{L^{p_{j}}_{\lambda_{j}}}\approx\prod_{v_{j}\notin V}N^{\lambda_{j}}.\] Since this inequality holds for all \(N>N_{0}\), we have \(\sum_{v_{j}\notin V}\lambda_{j}\geq 0\) as desired. Our last step is to establish that the condition (1.7) must hold if we have (1.2). Towards this end, we define \[f_{j}(t)=\sum_{\ell=1}^{\infty}a_{j,\ell}2^{-\ell(k/p_{j}+\lambda_{j})}\chi_{[2^{\ell},2^{\ell+1})}(|t|),\quad t\in\mathbf{R}^{k},\] where we assume each \(a_{j,\ell}\geq 0\). With this choice, we have \[\|f_{j}\|_{L^{p_{j}}_{\lambda_{j}}}\leq C\left(\sum_{\ell=1}^{\infty}a_{j,\ell}^{p_{j}}\right)^{1/p_{j}}.\] We will set \(a_{j,\ell}=\ell^{-(1+\epsilon)/p_{j}}\) where \(\epsilon>0\). With this choice, we have \(f_{j}\in L^{p_{j}}_{\lambda_{j}}\). Next, we use Lemma 3.1 to find \(w\in\mathbf{R}^{m}\) so that \[v_{j}\cdot w\neq 0,\quad v_{j}\in E. \tag{3.2}\] As before we choose a unit vector \(u\in\mathbf{R}^{k}\) and set \(S_{\ell}=B^{mk}(2^{\ell}w\otimes u,2^{\ell}\epsilon)\). Thanks to (3.2) and our choice of \(a_{j,\ell}\), we may find \(\ell_{0}\) so that \[f_{j}(v_{j}\cdot x)\geq c\ell^{-(1+\epsilon)/p_{j}}2^{-\ell(k/p_{j}+\lambda_{j})},\qquad\ell\geq\ell_{0},\,x\in S_{\ell}. \tag{3.3}\] Using (3.3) and that \(|S_{\ell}|\approx 2^{k\ell m}\), we have \[\sum_{\ell\geq\ell_{0}}2^{k\ell m}\prod_{j=1}^{N}\ell^{-(1+\epsilon)/p_{j}}2^{-\ell(k/p_{j}+\lambda_{j})}\leq\sum_{\ell=\ell_{0}}^{\infty}\int_{S_{\ell}}\prod_{j=1}^{N}f_{j}(v_{j}\cdot x)\,dx\leq C\prod_{j=1}^{N}\left\|f_{j}\right\|_{L^{p_{j}}_{\lambda_{j}}},\] where the last inequality follows since we assume the estimate (1.2). Using our observation about \(\left\|f_{j}\right\|_{L^{p_{j}}_{\lambda_{j}}}\), we may conclude \[\sum_{\ell=\ell_{0}}^{\infty}\ell^{-(1+\epsilon)\sum_{j=1}^{N}1/p_{j}}\leq C(\sum_{\ell=1}^{\infty}\ell^{-(1+\epsilon)})^{\sum_{j=1}^{N}1/p_{j}}.\] In particular, the sum \(\sum_{\ell=\ell_{0}}^{\infty}\ell^{-(1+\epsilon)\sum_{j=1}^{N}1/p_{j}}\) is finite for each \(\epsilon>0\). This implies that we must have (1.7). ## 4 Examples In this section, we give several examples of forms where we can use our theorem to study boundedness. We also give an example of a non-generic set of vectors where a naive extension of our methods fails to establish boundedness in the interior of the polytope defined by the necessary conditions in Theorem 1.8.
_Example._ Let \(E=\{e_{1}+\cdots+e_{m},e_{1},\ldots,e_{m}\}\subset\mathbf{R}^{m}\). Then it is easy to see that the set \(E\) is generic. Thus Theorem 1.3 will apply to the form \[\int_{\mathbf{R}^{m}}f_{0}(x_{1}+\cdots+x_{m})\prod_{i=1}^{m}f_{i}(x_{i})\,dx.\] Our next example shows how to generate arbitrarily large sets of generic vectors \(E\) and helps to justify our use of the term generic. _Example._ If \(E_{N}=\{v_{1},\ldots,v_{N}\}\) is a generic family in \(\mathbf{R}^{m}\), we may add an additional vector \(v_{N+1}\) so that \(E_{N+1}=\{v_{1},\ldots,v_{N+1}\}\) is a generic set. By the generic condition on \(E_{N}\), for each subset \(K\subset E_{N}\) with \(\#K=m-1\), \(\operatorname{span}(K)\) is a subspace of dimension \(m-1\). As \(\mathbf{R}^{m}\) cannot be the union of a finite number of proper subspaces, we may find a vector \(v_{N+1}\) which does not belong to any of the subspaces spanned the subsets of \(E_{N}\) of cardinality \(m-1\). Thus, the set \(E_{N+1}\) is generic. For completeness, we note that we can begin with the standard basis and then use the inductive step above to generate large generic sets of vectors. Our final observation shows some of the aforementioned limitations of the current methods. As we note in Theorem 1.8, our necessary conditions do not require the set of vectors \(E\) to be generic. When we began this work, we had hoped to establish a result that was close to necessary and sufficient for forms that are based on general sets of vectors \(E\). Unfortunately, we are not able to do this. We will give a non-generic set of five vectors in \({\bf R}^{3}\) and show that a naive attempt to extend the proof of Theorem 1.3 fails to establish that (1.2) holds for indices in the interior of the polytope given by Theorem 1.8. _Example._ Consider the set of 5 vectors in \({\bf R}^{3}\) given by \[E=\{e_{1},e_{1}+e_{2},e_{1}+e_{3},e_{1}-e_{2},e_{1}-e_{3}\}.\] We will label these vectors by \(v_{1}=e_{1},v_{2}=e_{1}+e_{2},v_{3}=e_{1}+e_{3},v_{4}=e_{1}-e_{2},v_{5}=e_{1}-e _{3}\). Note that there are two dependent sets of three vectors in \(E\). As before, the result of Bennett _et al._ characterizes the indices for which we have (1.2) when the exponents for the weights \(\lambda_{j}\) are zero. Continuing, we may extend the result to Lorentz spaces and use the generalized Holder's inequality to obtain (1.2) in the case that the weights are non-negative. We only need to note that as long as no two vectors are collinear, we may use (1.5) as in the proof of (2.4) to conclude that \(0<1/p_{j}+\lambda_{j}<1\) (here \(k=1\)). We consider the vector of indices \[(1/p_{1},\ldots,1/p_{5},\lambda_{1},\ldots,\lambda_{5})=(11/15,6/15,2/3,6/15, 2/3,-2/15,2/15,0,2/15,0).\] We leave it as an exercise to verify that this vector of indices satisfies the conditions of Theorem 1.3. The most interesting condition to check is (1.5), and here one must note that because of the linear dependencies \(2v_{1}=v_{2}+v_{4}\) and \(2v_{1}=v_{3}+v_{5}\), there are two sums in (1.5) to check where \(\dim(V)=2\) and the condition \(v_{j}\notin V\) is true for two indices \(j\) rather than three due to the linear dependence. 
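For instance, condition (1.4) (with \(k=1\)) can be checked directly for this choice of indices: \[\sum_{j=1}^{5}\Big{(}\frac{1}{p_{j}}+\lambda_{j}\Big{)}=\Big{(}\frac{11}{15}+\frac{6}{15}+\frac{10}{15}+\frac{6}{15}+\frac{10}{15}\Big{)}+\Big{(}-\frac{2}{15}+\frac{2}{15}+0+\frac{2}{15}+0\Big{)}=\frac{43}{15}+\frac{2}{15}=3=m.\]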
We observe that we have a linear dependency \(2v_{1}=v_{2}+v_{4}\) and arguing as in (2.19) we can show that \[\Lambda(f_{1},\ldots f_{5})\leq C(\Lambda(|\cdot|^{-2/15}f_{1},|\cdot|^{2/15}f _{2},f_{3},f_{4},f_{5})+\Lambda(|\cdot|^{-2/15}f_{1},f_{2},f_{3},|\cdot|^{2/15 }f_{4},f_{5})).\] We would like to have the estimate (1.2) for the vectors of indices: \[(1/p_{1},\ldots,1/p_{5},\beta_{1}^{(2,0)},\ldots\beta_{5}^{(2,0)}) =(11/15,6/15,2/3,6/15,2/3,0,0,0,2/15,0),\] \[(1/p_{1},\ldots,1/p_{5},\beta_{1}^{(4,0)},\ldots,\beta_{5}^{(4,0) }) =(11/15,6/15,2/3,6/15,2/3,0,2/15,0,0,0).\] If we consider the inequality (1.5) for the subspace \(V=\mbox{span}\{v_{1},v_{3},v_{5}\}=\mbox{span}\{e_{1},e_{3}\}\), we see that for both \(\beta^{(2,0)}\) and \(\beta^{(4,0)}\) \[\sum_{v_{j}\notin V}\frac{1}{p_{j}}+\beta_{j}^{(k,0)}=\frac{1}{p_{2}}+\beta_{2 }^{(k,0)}+\frac{1}{p_{4}}+\beta_{4}^{(k,0)}=\frac{14}{15}<1=3-\dim V.\] Thus, this avenue of proof would require us to use an estimate which Theorem 1.8 tells us cannot hold. We close by listing several problems related to our work worthy of further study. * Find conditions which are close to necessary and sufficient for forms that are based on general sets of vectors \(E\), rather than being restricted to generic sets of vectors. * Consider weighted estimates for the more general forms such as those studied in Bennett _et al._[3]. * Find conditions on families of more general weights which allow us to establish that a Brascamp-Lieb form is bounded on weighted \(L^{p}\)-spaces.
2307.05517
Adaptive Graph Convolution Networks for Traffic Flow Forecasting
Traffic flow forecasting is a highly challenging task due to the dynamic spatial-temporal road conditions. Graph neural networks (GNN) has been widely applied in this task. However, most of these GNNs ignore the effects of time-varying road conditions due to the fixed range of the convolution receptive field. In this paper, we propose a novel Adaptive Graph Convolution Networks (AGC-net) to address this issue in GNN. The AGC-net is constructed by the Adaptive Graph Convolution (AGC) based on a novel context attention mechanism, which consists of a set of graph wavelets with various learnable scales. The AGC transforms the spatial graph representations into time-sensitive features considering the temporal context. Moreover, a shifted graph convolution kernel is designed to enhance the AGC, which attempts to correct the deviations caused by inaccurate topology. Experimental results on two public traffic datasets demonstrate the effectiveness of the AGC-net\footnote{Code is available at: https://github.com/zhengdaoli/AGC-net} which outperforms other baseline models significantly.
Zhengdao Li, Wei Li, Kai Hwang
2023-07-07T09:55:41Z
http://arxiv.org/abs/2307.05517v1
# Adaptive Graph Convolution Networks ###### Abstract Traffic flow forecasting is a highly challenging task due to the dynamic spatial-temporal road conditions. Graph neural networks (GNN) has been widely applied in this task. However, most of these GNNs ignore the effects of time-varying road conditions due to the fixed range of the convolution receptive field. In this paper, we propose a novel Adaptive Graph Convolution Networks (AGC-net) to address this issue in GNN. The AGC-net is constructed by the Adaptive Graph Convolution (AGC) based on a novel context attention mechanism, which consists of a set of graph wavelets with various learnable scales. The AGC transforms the spatial graph representations into time-sensitive features considering the temporal context. Moreover, a shifted graph convolution kernel is designed to enhance the AGC, which attempts to correct the deviations caused by inaccurate topology. Experimental results on two public traffic datasets demonstrate the effectiveness of the AGC-net3 which outperforms other baseline models significantly. Footnote 3: Code is available at: [https://github.com/zhengdaoli/AGC-net](https://github.com/zhengdaoli/AGC-net) Keywords:Graph neural networks, traffic flow forecasting, multivariate time-series over graphs ## 1 Introduction Traffic flow forecasting is the task of predicting future traffic flow based on past data [9]. Accurate and reliable forecasting is crucial for providing drivers and intelligent transportation systems (ITS) with better decision-making abilities and traffic control schemes [29]. Early methods focused on time-series modeling, such as Kalman Filtering [20], ARIMA [38], VAR [5], SVR [28], Bayesian models [26], etc. These methods mainly rely on statistical models and make independent assumptions about multivariate time-series data from each sensor, making it difficult to capture correlations among variables. To capture both spatial and temporal information, traffic flow forecasting can be formulated as a multivariate time-series forecasting over a graph constructed from sensor networks. Graph neural networks (GNN) are widely applied in traffic forecasting due to their natural computational architecture for modeling complex spatial relations by aggregating information from neighbors [17]. To capture temporal information, most GNN-based models, inspired by recurrent neural networks (RNN), long short-term memory (LSTM), or gated recurrent unit (GRU) [6], incorporate memory and gated mechanisms [18, 1, 36, 35, 25]. However, the non-stationary nature of traffic [23, 24] limits the effectiveness of GNNs due to their fixed receptive fields. This non-stationary property is caused by various factors such as weather conditions, accidents, special events, or road construction, which can cause sudden changes in traffic flow patterns. For instance, accidents on highways can have a more widespread impact than in urban areas, which affect only a localized region. These diverse ranges require graph convolutions to be adaptable to fit these changes. Thus, it is crucial to have models with adaptivity to different traffic scenarios for effective traffic flow prediction. Although some models [4, 11] attempt to address this by stacking multiple convolutions with different receptive ranges, they may introduce redundant information due to the lack of a filtering scheme for the receptive fields within a range. 
To overcome these challenges, we propose an Adaptive Graph Convolution Networks (AGC-net) that utilizes Adaptive Graph Convolutions (AGC) in each layer. AGC can dynamically filter the best receptive fields locally and globally depending on the traffic flow at each time step. AGC consists of a set of graph wavelets with various learnable scales and a context attention block that refines the best receptive fields according to contextual information. We also introduce a learnable shifted convolution kernel to refine the inaccurate road topology. Our main contributions lie in two aspects. First, the proposed adaptive graph convolution addresses the limitation of conventional GNN in handling spatial-temporal applications due to the fixed-range receptive fields. Second, extensive experimental results demonstrate that our AGC-net achieves the best performance on both two public datasets. ## 2 Related work ### Traffic Forecasting Early traffic forecasting methods are not satisfactory. Because the statistical models they base on, such as Kalman Filtering [20], ARIMA [38], VAR [5] etc., rely on stationarity assumption and are very limited to handle spatial and temporal road conditions simultaneously. Inspired by convolutional neural networks (CNNs) and recurrent neural networks (RNNs), some deep learning based approaches [8, 19, 33] are proposed to extract more complicated spatial, temporal information and higher nonlinearity. However, these methods are limited by the intrinsic properties of CNNs or RNNs, that are not generically designed for the non-Euclidean data, such as traffic networks, so that capturing spatial information with temporal dependencies is difficult. In other aspect, traffic sample is non-stationary, which means the distribution differs in different time segments [23, 24]. Consequently, the structural correlations among nodes vary over different distributions. That non-stationary property makes a challenge for those methods that assume a fixed structure through all time steps. [12] proposed ASTGCN which works on highway traffic and uses attention mechanisms on spatial dimension and temporal dimension respectively to capture spatial and temporal dynamics. [4] proposed MRA-BGCN model, this model incorporates edge correlations in graph structure. [34] proposed a framework SLC with a generic graph convolutional formulation to capture the spatial information dynamically. Besides the purely GNN-based methods, other recent researches are tend to use fully-connected or transformer architecture to capture both spatial and temporal information. [22] utilizes fully connected layers as the graph gate and the time gate to extract spatial and temporal information respectively. In fact, the graph gate also uses an adjacency-like matrix acting as a graph convolution kernel. [32] constructs a spatial transformer and a temporal transformer architectures. In spatial transformer, the learnable positional embedding layer is formulated as an \(N\times N\) matrix to learn the spatial correlation. ### Graph Neural Networks There are two mainstream graph neural networks in terms of convolution operator[37], i.e., spatial-based and spectral-based. Spatial-based GNNs utilize aggregating operation to filter information from node neighborhoods, such an operation is an analogy to a graph convolution, the aggregator is known as a convolution operator [13, 21, 10]. 
Spectral-based GNNs are based on graph spectral theory[14], pioneering works such as [3] and other following GCNs [2, 7, 16]implement the graph convolution by graph Fourier transform. Xu proposed GWNN [31], another kind of spectral-based GNNs which leverages graph wavelet transform [14] as the convolution kernel or convolution operator. ## 3 Preliminaries ### Notations and Problem Formulation The road network can be represented by an undirected attributed graph \(G=(V,E)\), where \(V\) is a set of \(N=|V|\) nodes and \(E\) is a set of edges. The signal at time step \(t\) over \(G\) is denoted as \(\mathbf{X}_{t}\in\mathbb{R}^{N\times C}\), where \(C\) is the number of traffic conditions, e.g., speed, flow, etc. Then the goal is formulated by using past \(H\) time steps conditions \(\mathcal{X}=(\mathbf{X}_{1},\mathbf{X}_{2},\ldots,\mathbf{X}_{H})\in\mathbb{R }^{H\times N\times C}\) to predict next \(P\) time steps flows \(\mathcal{Y}=(\mathbf{Y}_{H+1},\mathbf{Y}_{H+2},\ldots,\mathbf{Y}_{H+P})\in \mathbb{R}^{P\times N}\). The adjacency matrix of \(G\) is denoted by \(\mathbf{A}\). The graph Laplacian matrix of \(G\) is defined as \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), where \(\mathbf{D}\) is a diagonal degree matrix with \(D_{ii}=\sum_{j}A_{ij}\). Then the normalized Laplacian matrix is \(\mathbf{L}^{\prime}=\mathbf{I}_{N}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\), where \(\mathbf{I}_{N}\) is the identity matrix. The real symmetric matrix \(\mathbf{L}^{\prime}\) has \(N\) orthonormal eigenvectors and associated non-negative eigenvalues in diagonal matrix form, denoted as \(\mathbf{U}=(\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{N})\) and \(\Lambda=\mathrm{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{N})\) respectively, such that \(\mathbf{L}^{\prime}=\mathbf{U}\Lambda\mathbf{U}^{-1}\). ### Graph Wavelet Transformation Graph wavelet transform (GWT) has some benefits compared to graph Fourier transform, such as high efficiency, high sparseness, and localized convolution property [31]. GWT uses a set of graph wavelets as the bases in spectral domain, defined as \((\mathbf{\psi}_{s,1},\mathbf{\psi}_{s,2},\ldots,\mathbf{\psi}_{s,N})\). Each wavelet \(\mathbf{\psi}_{s,i}\) corresponds to a signal on the graph diffused away from node \(i\) at scale \(s>0\). Mathematically, \(\mathbf{\Psi}_{s}\) is defined as \(\mathbf{\Psi}_{s}=\mathbf{U}\mathbf{G}_{s}\mathbf{U}^{-1}\), where \(\mathbf{G}_{s}=\mathrm{diag}(g(s\lambda_{1}),\ldots,g(s\lambda_{N}))\). The \(\mathbf{G}_{s}\in\mathbb{R}^{N\times N}\) is the diagonal scaling matrix, and \(g(s\lambda)=e^{s\lambda}\) corresponds to a heat kernel [31]. The graph wavelet transform of signal \(\mathbf{f}\) over \(G\) is defined as \(\mathbf{\Psi}_{s}^{-1}\mathbf{f}\). Then the convolution between signal \(\mathbf{g}\) and \(\mathbf{f}\) is defined as: \[\mathbf{g}*_{G}\mathbf{f}=\mathbf{\Psi}_{s}((\mathbf{\Psi}_{s}^{-1}\mathbf{g})\odot( \mathbf{\Psi}_{s}^{-1}\mathbf{f}))=\mathbf{\Psi}_{s}\mathbf{\Theta}\mathbf{\Psi}_{s}^{-1} \mathbf{f} \tag{1}\] where \(\mathbf{\Theta}\in\mathbb{R}^{N\times N}\) is a learnable diagonal matrix. ## 4 Methodology ### AGC-net Architecture As shown in Fig.1, AGC-net is mainly constructed by an encoder model followed by a decoder model. 
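Since the AGC layers introduced below use graph wavelets as their convolution kernels, a minimal NumPy sketch of the wavelet convolution in (1) may help fix ideas. This is an illustrative sketch rather than the released implementation: the toy graph, the scale value, and the heat-kernel sign convention \(e^{-s\lambda}\) for \(\mathbf{\Psi}_{s}\) (with \(e^{s\lambda}\) for its inverse) are assumptions made here.

```python
import numpy as np

def graph_wavelet_basis(A, s):
    """Build Psi_s = U G_s U^{-1} from an adjacency matrix A, using the
    normalized Laplacian L' = I - D^{-1/2} A D^{-1/2} of Section 3.1.
    A heat-kernel scaling exp(-s*lam) is assumed for Psi_s, exp(+s*lam) for its inverse."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(A.shape[0]) - d_inv_sqrt @ A @ d_inv_sqrt
    lam, U = np.linalg.eigh(L)                      # L is symmetric: orthonormal eigenvectors
    psi = U @ np.diag(np.exp(-s * lam)) @ U.T
    psi_inv = U @ np.diag(np.exp(s * lam)) @ U.T
    return psi, psi_inv

def wavelet_graph_convolution(f, psi, psi_inv, theta):
    """Spectral convolution of Eq. (1): Psi_s diag(theta) Psi_s^{-1} f."""
    return psi @ (theta[:, None] * (psi_inv @ f))

# Toy usage: a 4-node path graph carrying a 2-dimensional signal.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
psi, psi_inv = graph_wavelet_basis(A, s=1.0)
f = np.random.randn(4, 2)
theta = np.ones(4)                                  # plays the role of the learnable diagonal Theta
out = wavelet_graph_convolution(f, psi, psi_inv, theta)
print(out.shape)                                    # (4, 2)
```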
The goal of encoder is to generate the time-sensitive spatial representations through multiple stacked AGC layers where each AGC takes \(K\) graph wavelets \(\mathbf{\Psi}_{1}\) to \(\mathbf{\Psi}_{K}\) and is parameterized independently by \(\theta_{1}\) to \(\theta_{L}\) for the construction of convolution kernels. Then the time-sensitive spatial representations are fed into the decoder which is composed of GRU for the prediction. ### Adaptive Graph Convolution Block To build AGC, the first step is to involve Multi-range Graph Convolution (MGC). The second step is to enhance MGC by a Context Attention Mechanism that can adjust convolutional receptive fields according to the contextual information over time. Figure 1: The architecture of AGC-net. The encoder module is composed of multiple AGC layers. The decoder module leverages GRUs and a linear transformation to produce predicted traffic flows. **Single-range Graph Convolution.** A single-range graph convolution \(g_{*k}(\cdot)\) using a graph wavelet \(\mathbf{\Psi}_{k}\) defined in Sec.3.2 as the convolution kernel is formulated as: \(g_{*k}(\mathbf{X}_{t})=\mathbf{\Psi}_{k}\Theta_{k}\mathbf{\Psi}_{k}^{-1}\mathbf{ X}_{t}\mathbf{W}_{k}+Bias\), where \(\mathbf{X}_{t}\in\mathbb{R}^{N\times C_{in}}\) is the input signal at time step \(t\), \(\mathbf{W}_{k}\in\mathbb{R}^{C_{in}\times C_{out}}\) is the learnable parameter matrix for feature transformation, \(C_{in}\) is the input feature dimension, and \(C_{out}\) is the output feature dimension. \(\Theta_{k}\in\mathbb{R}^{N\times N}\) is a learnable diagonal matrix and \(Bias\in\mathbb{R}^{N\times C_{out}}\) is the bias matrix. **Multi-range Graph Convolution.** MGC consists of multiple single-range graph convolutions with different kernels. We define an operation \(\mathbf{MGC}[g_{*1},g_{*2},...]\) to compose \(K\) different graph convolutions. In general, \(\mathbf{MGC}\) could be a concatenate operation, i.e., \(||_{k=1}^{K}g_{*k}(\cdot)\), or summation of all \(g_{*k}(\cdot)\), i.e., \(\sum_{k=1}^{K}g_{*k}(\cdot)\). Intuitively and empirically, we find that a learnable coefficient \(\pi_{k}\) for each \(g_{*k}(\cdot)\) can improve the performance, i.e., \[\mathbf{MGC}[g_{*1},\ldots,g_{*K}]=\sum_{k=1}^{K}\pi_{k}\cdot g_{*k}(\cdot). \tag{2}\] Each kernel of \(g_{*k}\) corresponds to a \(\mathbf{\Psi}_{k}\in\{\mathbf{\Psi}_{k}\}_{k=1}^{K}\). \(K\) indicates the capacity to capture various spatial correlations with different ranges. Naturally, our concern becomes how to find the best \(\pi_{k}\). **Context Attention Mechanism.** To further leverage the contextual information, we propose the context attention mechanism to learn \(\pi_{k}\). As shown in Fig.2, the MGC with this mechanism (MGC-Attention) at the \(l\)-th layer takes the last layer hidden state \(\mathbf{Z}_{t}^{l-1}\) and the output of the \(g_{*k}(\mathbf{Z}_{t}^{l-1})\) as the contextual information. Then these two inputs are fed into two independent Linear transformation modules and are transformed into two corresponding representations both in dimension \(S\), i.e., \(\mathbf{V}_{t,k}=\mathbf{Linear}_{W_{v}}\left(g_{*k}(\mathbf{Z}_{t}^{l-1})\right)\), and \(\mathbf{Q}_{t}=\mathbf{Linear}_{W_{q}}\left(Z_{t}^{l-1}\right)\). The normalized inter product divided by dimension scale \(S\) of these two representations are calculated as the similarity scores between graph convolutions and the contextual information, i.e., \[\mathbf{s}_{t,k}=\frac{\mathbf{Q}_{t}^{T}\mathbf{V}_{t,k}}{S|\mathbf{Q}_{t}| |\mathbf{V}_{t,k}|}. 
\tag{3}\] Then we use a _softmax_ function to calculate the \(\pi_{t,k}\): \[\pi_{t,k}=\frac{\exp(\mathbf{s}_{t,k})}{\sum_{m=1}^{K}\exp(\mathbf{s}_{t,m})}. \tag{4}\] All the computations of \(\mathbf{s}_{t,k}\) and \(\pi_{t,k}\) can be implemented efficiently by matrix multiplication. Figure 2: Multi-range graph convolution with the context attention mechanism. **Adaptive Graph Convolution Layer.** We call the \(l\)-th layer of AGC-net an adaptive graph convolution layer (AGC), which consists of an MGC-Attention followed by a non-linear activation function such as \(\mathbf{ReLU}(\cdot)\), i.e., \(\mathbf{AGC}:\mathbf{Z}_{t}^{l}\leftarrow\mathbf{ReLU}\left(\sum_{k=1}^{K}\pi_{t,k}g_{*k}(\mathbf{Z}_{t}^{l-1})\right).\) Moreover, in our experiments, we compare the MGC-Attention with the MGC-Weighted, which uses learnable scalars as the \(\pi_{t,k}\) without any contextual information. ### Learnable Shifted Convolution Kernel We design a learnable matrix \(\mathbf{D}\in\mathbb{R}^{N\times N}\) for learning a better topology, which compensates for the inaccurate topology that hinders graph convolution in practice. To address the computational concerns, we use a low-rank matrix instead, factorized as the product of two low-dimensional matrices, i.e., \(\widetilde{\mathbf{D}}=L1L2\), where \(L1\in\mathbb{R}^{N\times r},L2\in\mathbb{R}^{r\times N},N\gg r\). The enhanced graph convolution \(g_{*k}\) is simply obtained by replacing \(\mathbf{\Psi}_{k}\Theta_{k}\mathbf{\Psi}_{k}^{-1}\) with \(\mathbf{\Psi}_{k}\Theta_{k}\mathbf{\Psi}_{k}^{-1}+\alpha\widetilde{\mathbf{D}}\), where \(\alpha\) is a hyper-parameter to adjust the contribution of \(\widetilde{\mathbf{D}}\). The topology is very sparse in practice. To guarantee the sparsity, the Frobenius norm \(||\widetilde{\mathbf{D}}||_{F}^{2}\) is introduced in the loss function. ## 5 Experiments ### 5.1 Datasets and Performance Metric We conduct experiments on two public traffic datasets. One is **METR-LA**, which is collected from observation sensors on the highways of Los Angeles County. This dataset uses 207 sensors and 4 months of data dated from 1st Mar 2012 until 30th Jun 2012. The other is **PeMS-BAY**, which is collected from Caltrans PeMS in the Bay Area of California. PeMS-BAY has 6 months of data from 325 sensors (nodes), ranging from Jan 2017 to May 2017. For both datasets, we adopt three widely used metrics to evaluate the performance of our model, i.e., Mean Absolute Error (**MAE**), Root Mean Squared Error (**RMSE**), and Mean Absolute Percentage Error (**MAPE**). ### 5.2 Baseline Models We compare our AGC-net with the following baseline methods that use the same datasets as ours, i.e., **ARIMA**[38]. **DCRNN**[18] combines recurrent neural networks and diffusion convolution. **Graph WaveNet**[30] develops an adaptive dependency matrix to capture spatial dependency and stacks dilated 1D convolutions to capture temporal dependency. **SLCNN**[34] uses two structure learning convolutions (using a fixed structure and a learnable structure respectively) to capture the global and local information. **MRA-BGCN**[4] incorporates graph edge information in addition to node information to capture spatial dependency. **STAWnet**[27] utilizes the attention mechanism to directly learn an adjacency matrix for the graph convolution and uses temporal convolution networks to capture the temporal information.
**FC-GAGA**[22] uses a learnable fully connected hard graph gating mechanism to learn the spatial-temporal information without any prior knowledge of topology. ### Training Setup The loss function is MAE with the Frobenius norm of \(\widetilde{D}\). To analyze the effects of some key settings, e.g., wavelet amount, low rank matrix dimension, etc., we conduct several ablation studies which show that for different datasets, the best hyperparameters are different. For example, the best number of graph wavelets for METR-LA dataset is 20, but for PeMS-BAY, it is 15. For the shifted matrices, we set \(r\)=30 for both METR-LA and PeMS-BAY with \(\alpha=0.01\). We conduct all the experiments on one TeslaV100, using the Adam optimizer [15] with the learning rate \(0.002\), weight_decay \(0.0001\), and batch size \(128\) to train the model. Further discussions about the effects of these hyperparameters are discussed in the section 5.5. We also try the procedure in [12] to involve parallel periodic traffic patterns, i.e., hourly, daily-periodic and weekly-periodic flows. \begin{table} \begin{tabular}{c|c|c c c|c c c|c c c} \hline \hline \multirow{3}{*}{Dataset} & \multirow{3}{*}{Models} & \multicolumn{3}{c|}{Prediction of 15 min} & \multicolumn{3}{c|}{Prediction of 30 min} & \multicolumn{3}{c}{Prediction of 1 hour} \\ \cline{3-11} & & MAE & RMSE & MAPE & MAE & RMSE & MAPE & MAE & RMSE & MAPE \\ \hline \multirow{6}{*}{METR-LA} & ARIMA & 3.99 & 8.12 & 9.6\% & 5.15 & 10.45 & 12.7\% & 6.90 & 13.23 & 17.4\% \\ & DCRNN & 2.77 & 5.38 & 7.3\% & 3.15 & 6.45 & 8.8\% & 3.60 & 7.59 & 10.5\% \\ & Graph WaveNet & 2.69 & 5.15 & 6.9\% & 3.07 & 6.22 & 8.4\% & 3.53 & 7.37 & 10.0\% \\ & SLCNN & **2.53** & 5.18 & 6.7\% & **2.88** & 6.15 & **8.0\%** & **3.30** & 7.20 & 9.7\% \\ & MRA-BGCN & 2.67 & 5.12 & 6.8\% & 3.06 & 6.17 & 8.3\% & 3.49 & 7.30 & 10.0\% \\ & STAWnet & 2.70 & 5.22 & 7.0\% & 3.04 & 6.14 & 8.2\% & 3.44 & 7.16 & 9.8\% \\ & FC-GAGA & 2.70 & 5.24 & 7.0\% & 3.04 & 6.19 & 8.3\% & 3.45 & 7.19 & 9.9\% \\ & **AGC-net(Ours)** & 2.61 & **4.83** & **6.6\%** & 2.94 & **5.68** & **8.0\%** & 3.34 & **6.61** & **9.4\%** \\ & **Improvements(\%)** & - & **+4.7** & **+2.9** & - & **+7.5** & - & - & **+7.7** & **+3.1** \\ \hline \multirow{6}{*}{PeMS-BAY} & ARIMA & 1.62 & 3.30 & 3.5\% & 2.33 & 4.76 & 5.4\% & 3.38 & 6.50 & 8.3\% \\ & DCRNN & 1.38 & 2.95 & 2.9\% & 1.74 & 3.97 & 3.9\% & 2.07 & 4.74 & 4.9\% \\ \cline{1-1} & Graph WaveNet & 1.30 & 2.74 & 2.7\% & 1.63 & 3.70 & 3.7\% & 1.95 & 4.52 & 4.6\% \\ \cline{1-1} & SLCNN & 1.44 & 2.90 & 3.0\% & 1.72 & 3.81 & 3.9\% & 2.03 & 4.53 & 4.8\% \\ \cline{1-1} & MRA-BGCN & 1.29 & 2.72 & 2.9\% & 1.61 & 3.67 & 3.8\% & 1.91 & 4.46 & 4.6\% \\ \cline{1-1} & STAWnet & 1.31 & 2.78 & 2.8\% & 1.62 & 3.70 & 3.7\% & 1.89 & 4.36 & 4.5\% \\ \cline{1-1} & FC-GAGA & 1.34 & 2.82 & 2.8 & 1.66 & 3.75 & 3.7\% & 1.93 & 4.40 & 4.5\% \\ \cline{1-1} & **AGC-net(Ours)** & **1.18** & **2.31** & **2.3\%** & **1.48** & **3.14** & **3.1\%** & **1.85** & **3.9** & **4.2\%** \\ \cline{1-1} & **Improvements(\%)** & **+8.5** & **+15.1** & **+20.7** & **+8.1** & **+14.4** & **+18.4** & **+2.1** & **+10.6** & **+8.7** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of our method with seven others. ### Performance Analysis As shown in Table 1, our method outperforms the other methods in terms of all the metrics on PeMS-BAY and more than half of metrics on METR-LA. In particular, our model improves significantly on RMSE and MAPE metrics, ranging from 10.6% to 15.1%, 8.7% to 20.7% respectively. 
Another observation is that the improvements of our method are different over three metrics and different datasets. This is due to the intrinsic distribution of data. PeMS-BAY has more nodes and edges than METR-LA, which leads to the requirement of extracting more complex spatial information. ### Ablation Study We conduct ablation studies in terms of 5 different settings, i.e., (a). Using adjacency matrix (denoted as ) for single-range graph convolution kernel. (b). Using periodic patterns () introduced in [12]. (c). Using MGC-Weighted () to replace convolution in setting (a). (d). Using MGC-Attention () instead of MGC-Weighted. (e). Using shifted convolution kernel () in the MGC-Attention. Table 2 demonstrates the performance under the above settings, which verify the effectiveness of each module. We analyze each observation as followings. **First**, observations from (a) verify the effectiveness of encoder-decoder architecture that obtains similar performance as [18]. **Second**, involving periodic patterns improves performance which verifies the existence of various temporal patterns. **Third**, comparing settings (c) and (d), we observe that by using MGC-Attention, the performance increases significantly compared with MGC-Weighted. This indicates that the context attention mechanism can learn a better composition of convolutions by exploiting complex contextual information. At the last setting (e), the result shows that the shifted convolution kernel has a significant performance improvement. It verifies that accurate structural information is very important to the prediction task. Moreover, the shifted convolution kernel performs well even if it is composed of two low dimension matrices instead of a full-dimension matrix. This is because that under the condition that the topology is sparse in these two datasets, only a few complementary edge completions are sufficient for gaining more precise structural information. Due to the space limitation, more experimental results and discussions on hyperparameters can be found in the code repository. \begin{table} \begin{tabular}{c|c|c c c} \hline \hline Setting \# & Modules & MAE & RMSE & MAPE \\ \hline (a) & & 3.70 & 7.14 & 10.7\% \\ (b) & & 3.53 & 6.87 & 10.3\% \\ (c) & & 3.58 & 6.89 & 10.5\% \\ (d) & & 3.51 & 6.85 & 10.2\% \\ (e) & & 3.36 & **6.63** & **9.7\%** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation Study on METR-LA (1 hour forecasting). To further justify the effectiveness of the core component, we conducted a significance analysis using a t-test method on the three proposed modules of AGC-net, 10 rounds of training with different random seeds were conducted on the METR-15 dataset. The results are shown in Figure 3. We found that each module significantly improves performance with a very low p-value (less than 0.001), and the small standard deviation of MAE from each setting indicates the stability of each module. Three modules are under following different settings correspondingly: (a) 'w/o attention' (only one fixed-range graph wavelet transform is applied, without attention and shifted kernel), (b) 'w/o shifted kernel' (attention is applied without the shifted kernel), and (c) 'AGC-net' (all modules are applied). ## 6 Conclusions Main conclusions are as follows, (1). the experimental results verified the existence of the dynamic spatial correlations at different time steps, which is a challenge for graph convolution. The proposed adaptive graph convolution is well to extract such spatial dynamics. (2). 
A learnable shifted graph convolution kernel is proposed to enhance the graph convolution to obtain more accurate spatial information, and the experiments validate its effectiveness for traffic flow prediction. Moreover, the experiments show that the model has competitive performance even when the kernel is in a low dimension. (3). Experimental results demonstrate that our model outperforms baseline methods significantly.
2306.04520
Estimating Koopman operators with sketching to provably learn large scale dynamical systems
The theory of Koopman operators allows to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems. Estimators such as principal component regression (PCR) or reduced rank regression (RRR) in kernel spaces can be shown to provably learn Koopman operators from finite empirical observations of the system's time evolution. Scaling these approaches to very long trajectories is a challenge and requires introducing suitable approximations to make computations feasible. In this paper, we boost the efficiency of different kernel-based Koopman operator estimators using random projections (sketching). We derive, implement and test the new "sketched" estimators with extensive experiments on synthetic and large-scale molecular dynamics datasets. Further, we establish non asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency. Our empirical and theoretical analysis shows that the proposed estimators provide a sound and efficient way to learn large scale dynamical systems. In particular our experiments indicate that the proposed estimators retain the same accuracy of PCR or RRR, while being much faster.
Giacomo Meanti, Antoine Chatalic, Vladimir R. Kostic, Pietro Novelli, Massimiliano Pontil, Lorenzo Rosasco
2023-06-07T15:30:03Z
http://arxiv.org/abs/2306.04520v2
# Estimating Koopman operators with sketching to provably learn large scale dynamical systems ###### Abstract The theory of Koopman operators allows to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems. Estimators such as principal component regression (PCR) or reduced rank regression (RRR) in kernel spaces can be shown to provably learn Koopman operators from finite empirical observations of the system's time evolution. Scaling these approaches to very long trajectories is a challenge and requires introducing suitable approximations to make computations feasible. In this paper, we boost the efficiency of different kernel-based Koopman operator estimators using random projections (sketching). We derive, implement and test the new "sketched" estimators with extensive experiments on synthetic and large-scale molecular dynamics datasets. Further, we establish non asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency. Our empirical and theoretical analysis shows that the proposed estimators provide a sound and efficient way to learn large scale dynamical systems. In particular our experiments indicate that the proposed estimators retain the same accuracy of PCR or RRR, while being much faster. Code is available at [https://github.com/Giodiro/NystromKoopman](https://github.com/Giodiro/NystromKoopman). ## 1 Introduction In the physical world, temporally varying phenomena are everywhere, from biological processes in the cell to fluid dynamics to electrical fields. Correspondingly, they generate large amounts of data both through experiments and simulations. This data is often analyzed in the framework of dynamical systems, where the state of a system \(\mathbf{x}\) is observed at a certain time \(t\), and the dynamics is described by a function \(f\) which captures its evolution in time \[\mathbf{x}_{t+1}=f(\mathbf{x}_{t}).\] The function \(f\) must capture the whole dynamics, and as such it may be non-linear and even stochastic for instance when modeling stochastic differential equations, or simply noisy processes. Applications of this general formulation arise in fields ranging from robotics, atomistic simulations, epidemiology, and many more. Along with a recent increase in the availability of simulated data, data-driven techniques for learning the dynamics underlying physical systems have become commonplace. The typical approach of such techniques is to acquire a dataset of training pairs \((\mathbf{x}_{t},\mathbf{y}_{t}=\mathbf{x}_{t+1})\) sampled in time, and use them to learn a model for \(f\) which minimizes a forecasting error. Since dynamical systems stem from real physical processes, forecasting is not the only goal and the ability to interpret the dynamics is paramount. One particularly important dimension for interpretation is the separation of dynamics into multiple temporal scales: fast fluctuations can e.g. be due to thermodynamical noise or electrical components in the system, while slow dynamics describe important conformational changes in molecules or mechanical effects. Koopman operator theory [24, 25] provides an elegant framework in which the potentially non-linear dynamics of the system can be studied via the Koopman operator \[(\mathcal{K}\psi)(\mathbf{x})=\mathbf{E}\big{[}\psi(f(\mathbf{x}))\big{]}, \tag{1}\] which has the main advantage of being linear but is defined on a typically infinite-dimensional set of observable functions. 
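As a concrete illustration of (1), the short sketch below approximates \((\mathcal{K}\psi)(\mathbf{x})\) for a toy stochastic system by Monte Carlo averaging over realizations of \(f\); the noisy logistic map, the observable and the sample size are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, noise=0.05):
    """A toy stochastic dynamical system: a noisy logistic map kept in [0, 1]."""
    return np.clip(3.7 * x * (1.0 - x) + noise * rng.standard_normal(np.shape(x)), 0.0, 1.0)

def koopman_apply(psi, x, n_samples=10_000):
    """Monte Carlo estimate of (K psi)(x) = E[psi(f(x))] from Eq. (1)."""
    return psi(f(np.full(n_samples, x))).mean()

psi = np.sin                    # any observable of the state
print(koopman_apply(psi, 0.3))  # estimate of E[sin(x_{t+1}) | x_t = 0.3]
```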
The expectation in (1) is taken with respect to the potential stochasticity of \(f\). Thanks to its linearity, the operator \(\mathcal{K}\) can e.g. be applied twice to get two-steps-ahead forecasts, and one can compute its spectrum (beware however that \(\mathcal{K}\) is not self-adjoint, unless the dynamical process is time-reversible). Accurately approximating the Koopman operator and its spectral properties is of high interest for the practical analysis of dynamical systems. However, doing so efficiently for long temporal trajectories remains challenging. In this paper we are interested in designing estimators which are both theoretically accurate and computationally efficient. **Related works.** Learning the spectral properties of the Koopman operator directly from data has been considered for at least 3 decades [36], resulting in a large body of previous work. Among the different approaches proposed over time (see Mezic [37] for a recent review) it is most common to search for finite dimensional approximations to the operator. DMD [52, 59], tICA [38, 45] and many subsequent extensions [28] for example can be seen as minimizers of the forecasting error when \(\psi\) is restricted to be a linear function of the states [48]. eDMD [62, 22] and VAC [41, 42] instead allow for a (potentially learnable, as in recent deep learning algorithms [29, 34, 65, 58]) dictionary of non-linear functions \(\psi\). KernelDMD [63, 23] and kernel tICA [53] are further generalizations which again approximate the Koopman operator but using an infinite dimensional space of features \(\psi\), encoded by the feature map of a reproducing kernel. While often slow from a computational point of view, kernel methods are highly expressive and can be analyzed theoretically, to prove convergence and derive learning rates of the resulting estimators [26]. Approximate kernel methods which are much faster to run have been recently used for Koopman operator learning by Baddoo et al. [6], where an iterative procedure is used to identify the best approximation to the full kernel, but no formal learning rates are demonstrated, and by Ahmad et al. [3], who derive learning rates in Hilbert-Schmidt norm (while we consider operator norm) for the Nystrom KRR estimator (one of the three considered in this paper). **Contributions.** In this paper we adopt the kernel learning approach. Starting from the problem of approximating the Koopman operator in a reproducing kernel Hilbert space, we derive three different estimators based on different inductive biases: kernel ridge regression (KRR) which comes from Tikhonov regularization, principal component regression (PCR) which is equivalent to dynamic mode decomposition (DMD) and its extensions, and reduced rank regression (RRR) which comes from a constraint on the maximum rank of the estimator [21]. We show how to overcome the computational scalability problems inherent in full kernel methods using an approximation based on random projections which is known as the Nystrom method [54, 61]. The approximate learning algorithms scale very easily to the largest datasets, with a computational complexity which goes from \(O(n^{3})\) for the exact algorithm to \(O(n^{2})\) for the approximate one. We can further show that the Nystrom KRR, PCR and RRR estimators have the same convergence rates as their exact, slow counterparts - which are known to be optimal under our assumptions.
We provide learning bounds in operator norm, which are known to translate to bounds for dynamic mode decomposition and are thus of paramount importance for applications. Finally, we thoroughly validate the approximate PCR and RRR estimators on synthetic dynamical systems, comparing efficiency and accuracy against their exact counterparts [26], as well as the recently proposed fast Koopman estimator streaming KAF [18]. To showcase a realistic scenario, we train on a molecular dynamics simulation of the fast-folding Trp-cage protein [32]. **Structure of the paper.** We introduce the setting in Section 2, and define our three estimators in Section 3. In Section 4 we provide bounds on the excess risk of our estimators, and extensive experiments on synthetic as well as large-scale molecular dynamics datasets in Section 5. ## 2 Background and related work **Notation.** We consider a measurable space \((\mathcal{X},\mathcal{B})\) where \(\mathcal{X}\) corresponds to the state space, and denote \(L^{2}_{\pi}:=L^{2}(\mathcal{X},\mathcal{B},\pi)\) the \(L^{2}\) space of functions on \(\mathcal{X}\) w.r.t. a probability measure \(\pi\), and \(L^{\infty}_{\pi}\) the space of measurable functions bounded almost everywhere. We denote \(\text{HS}(\mathcal{H})\) the space of Hilbert-Schmidt operators on a space \(\mathcal{H}\). **Setting.** The setting we will consider is that of a Markovian, time-homogeneous stochastic process \(\{X_{t}\}_{t\in\mathbb{N}}\) on \(\mathcal{X}\). By definition of a Markov process, \(X_{t}\) only depends on \(X_{t-1}\) and not on any previous states. Time-homogeneity ensures that the transition probability \(\mathbb{P}\big{[}X_{t+1}\in B|X_{t}=\mathbf{x}\big{]}\) for any measurable set \(B\) does not depend on \(t\), and can be denoted with \(p(\mathbf{x},B)\). This implies in particular that the distribution of \((X_{t},X_{t+1})\) does not depend on \(t\), and we denote it \(\rho\) in the following. We further assume the existence of the _invariant_ density \(\pi\) which satisfies \(\pi(B)=\int_{\mathcal{X}}\pi(\mathbf{x})p(\mathbf{x},B)\,\mathrm{d}\mathbf{x}\). This classical assumption allows one to study a large class of stochastic dynamical systems, but also deterministic systems on the attractor, see e.g. [12]. The Koopman operator \(\mathcal{K}_{\pi}:L^{2}_{\pi}(\mathcal{X})\to L^{2}_{\pi}(\mathcal{X})\) is a bounded linear operator, defined by \[(\mathcal{K}_{\pi}g)(\mathbf{x})=\int_{\mathcal{X}}p(\mathbf{x},\mathbf{y})g(\mathbf{y})\,\mathrm{d}\mathbf{y}=\mathbf{E}\big{[}g(X_{t+1})|X_{t}=\mathbf{x}\big{]},\quad g\in L^{2}_{\pi}(\mathcal{X}),\mathbf{x}\in\mathcal{X}. \tag{2}\] We are in particular interested in the eigenpairs \((\lambda_{i},\varphi_{i})\in\mathbb{C}\times L^{2}_{\pi}\), that satisfy \[\mathcal{K}_{\pi}\varphi_{i}=\lambda_{i}\varphi_{i}. \tag{3}\] Through this decomposition it is possible to interpret the system by separating fast and slow processes, or projecting the states onto fewer dimensions [13; 17; 7]. In particular, the Koopman mode decomposition (KMD) allows to propagate the system state in time. Given an observable \(g:\mathcal{X}\to\mathbb{R}^{d}\) such that \(g\in\mathrm{span}\{\varphi_{i}|i\in\mathbb{N}\}\), the modes allow to reconstruct \(g(\mathbf{x})\) with a Koopman eigenfunction basis. The modes \(\mathbf{\eta}^{g}_{i}\in\mathbb{C}^{d}\) are the coefficients of this basis expansion: \[(\mathcal{K}_{\pi}g)(\mathbf{x})=\mathbf{E}\big{[}g(X_{t})|X_{0}=\mathbf{x}\big{]}=\sum_{i}\lambda_{i}\varphi_{i}(\mathbf{x})\mathbf{\eta}^{g}_{i}.
\tag{4}\] This decomposition describes the system's dynamics in terms of a stationary component (the Koopman modes), a temporal component (the eigenvalues \(\lambda_{i}\)) and a spatial component (eigenfunctions \(\varphi_{i}\)). **Kernel-based learning.** In this paper we approximate \(\mathcal{K}_{\pi}\) with kernel-based algorithms, using operators in reproducing kernel Hilbert spaces (RKHS) \(\mathcal{H}\) associated with kernel \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) and feature map \(\phi:\mathcal{X}\to\mathcal{H}\). We wish to find an operator \(A:\mathcal{H}\to\mathcal{H}\) which minimizes the risk \[\mathcal{R}_{\text{HS}}(A)=\mathbf{E}_{\rho}\big{[}\ell(A,(\mathbf{x},\mathbf{y}))\big{]}\quad\text{ where }\quad\ell(A,(\mathbf{x},\mathbf{y})):=\|\phi(\mathbf{y})-A\phi(\mathbf{x})\|^{2}. \tag{5}\] The operator \(A^{*}\) should thus be understood as an estimator of the Koopman operator \(\mathcal{K}_{\pi}\) in \(\mathcal{H}\) as will be clarified in (15). In practice \(\pi\) and \(\rho\) are unknown, and one typically has access to a dataset \(\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n}\) sampled from \(\rho\), where each pair \((\mathbf{x}_{i},\mathbf{y}_{i}=f(\mathbf{x}_{i}))\) may equivalently come from a single long trajectory or multiple shorter ones concatenated together. We thus use the empirical risk \[\hat{\mathcal{R}}_{\text{HS}}(A)=\frac{1}{n}\sum_{i=1}^{n}\ell(A,(\mathbf{x}_{i},\mathbf{y}_{i})) \tag{6}\] as a proxy for (5). In practice, minimizing eq. (6) may require finding the solution to a very badly conditioned linear system. To avoid this potential pitfall, different regularization methods (such as Tikhonov or truncated SVD) can be applied on top of the empirical risk. **Remark 2.1** (Connections to other learning problems): _The problem of minimizing eqs. (5) and (6) has strong connections to learning conditional mean embeddings [55; 40; 30] where the predictors and targets are embedded in different RKHSs, and to structured prediction [10; 11] which is an even more general framework. On the other hand, the most substantial difference from the usual kernel regression setting [8] is the embedding of both targets and predictors into an RKHS, instead of just targets._ We denote the input and cross covariance \(C=\mathbf{E}_{\pi}[\phi(\mathbf{x})\otimes\phi(\mathbf{x})]\) and \(C_{YX}=\mathbf{E}_{\rho}[\phi(\mathbf{y})\otimes\phi(\mathbf{x})]\), and their empirical counterparts as \(\hat{C}=\frac{1}{n}\sum_{i=1}^{n}[\phi(\mathbf{x}_{i})\otimes\phi(\mathbf{x}_{i})]\) and \(\hat{C}_{YX}=\frac{1}{n}\sum_{i=1}^{n}[\phi(\mathbf{y}_{i})\otimes\phi(\mathbf{x}_{i})]\). We also use the abbreviation \(C_{\lambda}:=C+\lambda I\). Minimizing the empirical risk (6) with Tikhonov regularization [8] yields the following KRR estimator \[\hat{A}_{\lambda}=\operatorname*{arg\,min}_{A\in\mathbf{HS}(\mathcal{H})}\hat{\mathcal{R}}_{\text{HS}}(A)+\lambda\|A\|_{\text{HS}}^{2}=\hat{C}_{YX}(\hat{C}+\lambda I)^{-1}. \tag{7}\] Eq. (7) can be computed by transforming its expression with the kernel trick [20], to arrive at a form where one must invert the kernel matrix - an \(n\times n\) matrix whose \(i,j\)-th entry is \(k(\mathbf{x}_{i},\mathbf{x}_{j})\). This operation requires \(O(n^{3})\) time and \(O(n^{2})\) memory, severely limiting the scalability of KRR to \(n\lesssim 100\,000\) points. Improving the scalability of kernel methods is a well-researched topic, with the most important solutions being random features [46; 47; 64; 19] and random projections [54; 61; 19].
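For concreteness, the \(n\times n\) computation behind (7) can be sketched as follows. This is a minimal Gram-matrix version for forecasting a scalar observable, using the standard reading \(\hat{A}_{\lambda}\phi(\mathbf{x})=\sum_{i}\alpha_{i}\phi(\mathbf{y}_{i})\) with \(\alpha=(K+n\lambda I)^{-1}k_{X}(\mathbf{x})\); the Gaussian kernel, the toy trajectory and this particular way of reading off forecasts are assumptions of the sketch, not the paper's code.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def krr_forecast(X, Y, g_of_Y, x_new, lam=1e-3, sigma=1.0):
    """One-step forecast of an observable g: g(Y)^T (K + n*lam*I)^{-1} k_X(x_new).
    Solving the n x n system is what gives the O(n^3) cost mentioned above."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), gaussian_kernel(X, x_new[None, :], sigma))
    return g_of_Y @ alpha[:, 0]

# Toy 1-d trajectory; g is the identity observable.
traj = 0.1 * np.cumsum(np.random.default_rng(0).standard_normal(501))[:, None]
X, Y = traj[:-1], traj[1:]
print(krr_forecast(X, Y, g_of_Y=Y[:, 0], x_new=X[-1], lam=1e-3, sigma=0.5))
```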
In this paper we use the latter approach, whereby the kernel matrix is assumed to be approximately low-rank and is _sketched_ to a lower dimensionality. In particular we will use the Nystrom method to approximate the kernel matrix by projecting it onto a small set of inducing points chosen among the training set. The sketched estimators are much more efficient than the exact ones, increasingly so as the training trajectories become longer. For example, the state-of-the-art complexity for solving (non-vector-valued) approximate kernel ridge regression is \(O(n\sqrt{n})\) time instead of \(O(n^{3})\)[35; 1]. Furthermore, when enough inducing points are used (typically on the order of \(\sqrt{n}\)), the learning rates of the exact and approximate estimators are the same, and optimal [5; 49]. Hence it is possible - and in this paper we show it for learning the Koopman operator - to obtain large efficiency gains, without losing anything in terms of theoretical guarantees of convergence. ## 3 Nystrom estimators for Koopman operator regression In this section, we introduce three efficient approximations of the KRR, PCR and RRR estimators of the Koopman operator. Our estimators rely on the Nystrom approximation, i.e. on random projections onto low-dimensional subspaces of \(\mathcal{H}\) spanned by the feature-embeddings of subsets of the data. We thus consider two sets of \(m\ll n\) inducing points \(\{\tilde{\mathbf{x}}_{j}\}_{j=1}^{m}\subset\{\mathbf{x}_{t}\}_{t=1}^{n}\) and \(\{\tilde{\mathbf{y}}_{j}\}_{j=1}^{m}\subset\{\mathbf{y}_{t}\}_{t=1}^{n}\) sampled respectively from the input and output data. The choice of these inducing points (also sometimes called Nystrom centers) is important to obtain a good approximation. Common choices include uniform sampling, leverage score sampling [15; 51], and iterative procedures such as the one used in [6] to identify the most relevant centers. In this paper we focus on uniform sampling for simplicity, but we stress that our theoretical results in Section 4 can easily be extended to leverage score sampling by means of [49, Lemma 7]. To formalize the Nystrom estimators, we define operators \(\widetilde{\Phi}_{X},\widetilde{\Phi}_{Y}:\mathbb{R}^{m}\to\mathcal{H}\) as \(\widetilde{\Phi}_{X}w=\sum_{j=1}^{m}w_{j}\phi(\tilde{\mathbf{x}}_{j})\) and \(\widetilde{\Phi}_{Y}w=\sum_{j=1}^{m}w_{j}\phi(\tilde{\mathbf{y}}_{j})\), and denote \(P_{X}\) and \(P_{Y}\) the orthogonal projections onto \(\operatorname*{span}\widetilde{\Phi}_{X}\) and \(\operatorname*{span}\widetilde{\Phi}_{Y}\) respectively. In the following paragraphs we apply the projection operators to three estimators corresponding to different choices of regularization. For each of them a specific proposition (proven in Appendix C) states an efficient way of computing it based on the kernel trick. For this purpose we introduce the kernel matrices \(K_{\tilde{\mathcal{X}},X},K_{\tilde{Y},Y}\in\mathbb{R}^{m\times n}\) between training set and inducing points, with entries \((K_{\tilde{\mathcal{X}},X})_{ji}=k(\tilde{\mathbf{x}}_{j},\mathbf{x}_{i})\) and \((K_{\tilde{Y},Y})_{ji}=k(\tilde{\mathbf{y}}_{j},\mathbf{y}_{i})\), and the kernel matrices of the inducing points \(K_{\tilde{\mathcal{X}},\tilde{\mathcal{X}}},K_{\tilde{Y},\tilde{Y}}\in\mathbb{R}^{m\times m}\) with entries \((K_{\tilde{\mathcal{X}},\tilde{\mathcal{X}}})_{jk}=k(\tilde{\mathbf{x}}_{j},\tilde{\mathbf{x}}_{k})\) and \((K_{\tilde{Y},\tilde{Y}})_{jk}=k(\tilde{\mathbf{y}}_{j},\tilde{\mathbf{y}}_{k})\). 
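As a concrete companion to these definitions, the following sketch draws the inducing points uniformly without replacement and assembles the four kernel matrices used throughout this section; it reuses the hypothetical `rbf_kernel` helper from the previous sketch, and `sample_nystrom_matrices` is likewise an illustrative name, not the paper's API.

```python
import numpy as np

def sample_nystrom_matrices(X, Y, m, sigma=1.0, seed=None):
    """Uniformly sample m Nystrom centers and build the four kernel matrices.

    Returns (K_tx_x, K_ty_y, K_tx_tx, K_ty_ty) with shapes (m, n), (m, n), (m, m), (m, m).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # uniform sampling without replacement
    # For simplicity the same indices are used for inputs and outputs; the two sets of
    # centers could also be sampled independently from the x's and the y's.
    X_c, Y_c = X[idx], Y[idx]
    K_tx_x = rbf_kernel(X_c, X, sigma)     # (K_{X~,X})_{ji} = k(x~_j, x_i)
    K_ty_y = rbf_kernel(Y_c, Y, sigma)     # (K_{Y~,Y})_{ji} = k(y~_j, y_i)
    K_tx_tx = rbf_kernel(X_c, X_c, sigma)  # m x m Gram matrix of the input centers
    K_ty_ty = rbf_kernel(Y_c, Y_c, sigma)  # m x m Gram matrix of the output centers
    return K_tx_x, K_ty_y, K_tx_tx, K_ty_ty
```

These matrices are all that the sketched estimators below require, so the full \(n\times n\) kernel matrix is never formed.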
Kernel Ridge Regression (KRR)The cost of computing \(\hat{A}_{\lambda}\) defined in Eq. (7) is \(O(n^{3})\)[26] which is prohibitive for datasets containing long trajectories. However, applying the projection operators to each side of the empirical covariance operators, we obtain an estimator which additionally depends on the \(m\) inducing points: \[\hat{A}_{m,\lambda}^{\text{KRR}}:=P_{Y}\hat{C}_{YX}P_{X}(P_{X}\hat{C}P_{X}+ \lambda I)^{-1}:\mathcal{H}\to\mathcal{H}. \tag{8}\] If \(\mathcal{H}\) is infinite dimensional, Eq. (8) cannot be computed directly. Proposition 3.1 (proven in Appendix C) provides a computable version of the estimator. **Proposition 3.1** (Nystrom KRR): _The Nystrom KRR estimator (8) can be expressed as_ \[\hat{A}_{m,\lambda}^{\text{KRR}}=\widetilde{\Phi}_{Y}K_{\tilde{Y},\tilde{Y}}^ {\dagger}K_{\tilde{Y},Y}K_{X,\tilde{\mathcal{X}}}(K_{\tilde{\mathcal{X}},X}K_{ \tilde{X},\tilde{\mathcal{X}}}+n\lambda K_{\tilde{\mathcal{X}},\tilde{\mathcal{ X}}})^{\dagger}\widetilde{\Phi}_{X}^{*}. \tag{9}\] _The computational bottlenecks are the inversion of an \(m\times m\) matrix and a large matrix multiplication, which overall need \(O(2m^{3}+2m^{2}n)\) operations. In particular, in Section 4 we will show that \(m\asymp\sqrt{n}\) is sufficient to guarantee optimal rates even with minimal assumptions, leading to a final cost of \(O(n^{2})\). Note that a similar estimator was derived in [3]._ Please note that the \(O(n^{2})\) cost is for a straightforward implementation, and can indeed be reduced via iterative linear solvers (possibly preconditioned, to further reduce the practical running time), and randomized linear algebra techniques. In particular, we could leverage results from Rudi et al. [50] to reduce the computational cost to \(O(n\sqrt{n})\). Principal Component Regression (PCR)Typical settings in which Koopman operator theory is used focus on the decomposition of a dynamical system into a small set of components, obtained from the eigendecomposition of the operator itself. For this reason, a good prior on the Koopman estimator is for it to be low rank. The kernel PCR estimator \(\hat{A}^{\text{PCR}}=\hat{C}_{YX}[\hat{C}]^{\dagger}_{r}\) formalizes this concept [26; 63], where here \([\![\cdot]\!]_{r}\) denotes the truncation to the first \(r\) components of the spectrum. Again this is expensive to compute when \(n\) is large, but the estimator can be sketched as follows: \[\hat{A}^{\text{PCR}}_{m}=P_{Y}\hat{C}_{YX}[\![P_{X}\hat{C}P_{X}]^{\dagger}_{r}. \tag{10}\] The next proposition provides an efficiently implementable version of this estimator. **Proposition 3.2** (Nystrom PCR): _The sketched PCR estimator (10) satisfies_ \[\hat{A}^{\text{PCR}}_{m}=\widetilde{\Phi}_{Y}K^{\dagger}_{\hat{Y},\hat{Y}}K_{ \hat{Y},Y}K_{X,\hat{X}}[\![K^{\dagger}_{\hat{X},\hat{X}}K_{\hat{X},X}K_{X,\hat {X}}]\!]_{r}\widetilde{\Phi}^{*}_{X} \tag{11}\] _requiring \(O(2m^{3}+2m^{2}n)\) operations, i.e. optimal rates can again be obtained at a cost of at most \(O(n^{2})\) operations._ Note that with \(m=n\), \(\hat{A}^{\text{PCR}}_{m}\) is equivalent to the kernel DMD estimator [63], also known as kernel analog forecasting (KAF) [4]. The sketched estimator of Proposition 3.2 was also recently derived in [6], albeit without providing theoretical guarantees. Reduced Rank Regression (RRR)Another way to promote low-rank estimators is to add an explicit rank constraint when minimizing the empirical risk. 
Combining such a constraint with Tikhonov regularization corresponds to the reduced rank regression [21; 26] estimator: \[A^{\text{RRR}}_{\lambda}=\operatorname*{arg\,min}_{A\in\text{HS:rk}(A)\leq r} \hat{\mathcal{R}}_{\text{HS}}(A)+\lambda\|A\|^{2}_{\text{HS}}. \tag{12}\] Minimizing Eq. (12) requires solving a \(n\times n\) generalized eigenvalue problem. The following proposition introduces the sketched version of this estimator, along with a procedure to compute it which instead requires the solution of a \(m\times m\) eigenvalue problem. For \(m\asymp\sqrt{n}\), which is enough to guarantee optimal learning rates with minimal assumptions (see Section 4), this represents a reduction from \(O(n^{3})\) to \(O(n\sqrt{n})\) time. **Proposition 3.3** (Nystrom RRR): _The Nystrom RRR estimator can be written as_ \[\hat{A}^{\text{RRR}}_{m,\lambda}=[\![P_{Y}\hat{C}_{YX}P_{X}(P_{X}\hat{C}P_{X}+ \lambda I)^{-1/2}]\!]_{r}(P_{X}\hat{C}P_{X}+\lambda I)^{-1/2}. \tag{13}\] _To compute it, solve the \(m\times m\) eigenvalue problem_ \[(K_{\hat{X},X}K_{X,\hat{X}}+n\lambda K_{\hat{X},\hat{X}})^{\dagger}K_{\hat{X},X}K_{Y,\hat{Y}}K^{\dagger}_{\hat{Y},\hat{Y}}K_{\hat{Y},Y}K_{X,\hat{X}}w_{i}= \sigma_{i}^{2}w_{i}\] _for the first \(r\) eigenvectors \(W_{r}=[w_{1},\ldots,w_{r}]\), appropriately normalized. Then denoting \(D_{r}:=K^{\dagger}_{\hat{Y},\hat{Y}}K_{\hat{Y},Y}K_{X,\hat{X}}W_{r}\) and \(E_{r}:=(K_{\hat{X},X}K_{X,\hat{X}}+n\lambda K_{\hat{X},\hat{X}})^{\dagger}K_{ \hat{X},X}K_{Y,\hat{Y}}D_{r}\) it holds_ \[\hat{A}^{\text{RRR}}_{m,\lambda}=\widetilde{\Phi}_{Y}D_{r}E_{r}^{*}\widetilde{ \Phi}^{*}_{X}. \tag{14}\] ## 4 Learning bounds in operator norm for the sketched estimators In this section, we state the main theoretical results showing that optimal rates for operator learning with KRR, PCR and RRR can be reached with Nystrom estimators. AssumptionsWe first make two assumptions on the space \(\mathcal{H}\) used for the approximation, via its reproducing kernel \(k\). **Assumption 4.1** (Bounded kernel): _There exists \(K<\infty\) such that \(\operatorname{ess}\sup_{\mathbf{x}\sim\pi}\lVert\phi(\mathbf{x})\rVert\leq K\)._ Assumption 4.1 ensures that \(\mathcal{H}\) is compactly embedded in \(L^{2}_{\pi}\)[57, Lemma 2.3], and we denote \(\Phi^{*}_{X}:\mathcal{H}\to L^{2}_{\pi}\) the embedding operator which maps any function in \(\mathcal{H}\) to its equivalence class \(\pi\)-almost everywhere in \(L^{2}_{\pi}\). **Assumption 4.2** (Universal kernel): _The kernel \(k\) is universal, i.e. \(\operatorname{cl}(\operatorname{ran}(\Phi^{*}_{X}))=L^{2}_{\pi}\)._ We refer the reader to [56, Definition 4.52] for a definition of a universal kernel. The third assumption on the RKHS is related to the embedding property from Fischer and Steinwart [16], connected to the embedding of interpolation spaces. For a detailed discussion see Appendix A.3. **Assumption 4.3** (Embedding property): _There exists \(\tau\in]0,1]\) and \(c_{\tau}>0\) such that \(\operatorname{ess}\sup_{\mathbf{x}\sim\pi}\lVert C^{-1/2}_{\lambda}\phi(\mathbf{x}) \rVert^{2}\leq c_{\tau}\lambda^{-\tau}\)._ Next, we make an assumption on the decay of the spectrum of the covariance operator that is of paramount importance for derivation of optimal learning bounds. In the following, \(\lambda_{i}(A)\) and \(\sigma_{i}(A)\) always denote the eigenvalues and singular values of an operator \(A\) (in decreasing order). 
**Assumption 4.4** (Spectral decay): _There exists \(\beta\in]0,\tau]\) and \(c>0\) such that \(\lambda_{i}(C)\leq ci^{-1/\beta}\)._ This assumption is common in the literature, and we will see that the optimal learning rates depend on \(\beta\). It implies the bound \(d_{\text{eff}}(\lambda):=\operatorname{tr}(C^{-1}_{\lambda}C)\lesssim\lambda^{-\beta}\) on the effective dimension, which is a key quantity in the analysis (both statements are actually equivalent, see Appendix E.2). Note that \(d_{\text{eff}}(\lambda)=\mathbf{E}_{\mathbf{x}\sim\pi}\lVert C^{-1/2}_{\lambda} \phi(\mathbf{x})\rVert\leq\operatorname{ess}\sup_{\mathbf{x}\sim\pi}\lVert C^{-1/2}_{ \lambda}\phi(\mathbf{x})\rVert\), and thus it necessarily holds \(\beta\leq\tau\). For a Gaussian kernel, both \(\beta\) and \(\tau\) can be chosen arbitrarily close to zero. Finally, we make an assumption about the regularity of the problem itself. A common assumption occurring in the literature is that \(\mathbf{E}[f(X_{1})\,|\,X_{0}=\cdot]\in\mathcal{H}\) for every \(f\in\mathcal{H}\), meaning that one can define the Koopman operator directly on the space \(\mathcal{H}\), i.e. the learning problem is _well-specified_. However, this assumption is often too strong. Following [27, D.1] we make a different assumption on the cross-covariance remarking that, irrespectively of the choice of RKHS, it holds true whenever the Koopman operator is self-adjoint (i.e. the dynamics is time-reversible). **Assumption 4.5** (Regularity of \(\mathcal{K}_{\pi}\)): _There exists \(a>0\) such that \(C_{XY}C^{*}_{XY}\preccurlyeq a^{2}C^{2}\)._ RatesThe risk can be decomposed as \(\mathcal{R}_{\text{HS}}(A)=\mathcal{E}_{\text{HS}}(A)+\mathcal{R}_{\text{HS},0}\) where \(\mathcal{R}_{\text{HS},0}\) is a constant and \(\mathcal{E}_{\text{HS}}(A):=\lVert\mathcal{K}_{\pi}\Phi^{*}_{X}-\Phi^{*}_{X}A^ {*}\rVert^{2}_{\text{IRS}}\) corresponds to the excess risk (more details in Appendix B). Optimal learning bounds for the KRR estimator in the context of CME (i.e. in Hilbert-Schmidt norm) have been developed in [30] under Assumptions 4.1 to 4.4 in well-specified and misspecified settings. On the other hand, in the context of dynamical systems, Kostic et al. [26, Theorem 1] report the importance of _reduced rank estimators_ that have a small excess risk in operator norm \[\mathcal{E}(A):=\lVert\mathcal{K}_{\pi}\Phi^{*}_{X}-\Phi^{*}_{X}A^{*}\rVert^{2 }_{\mathcal{H}\to L^{2}_{\pi}}. \tag{15}\] The rationale behind considering the operator norm is that it allows to control the error of the eigenvalues approximation and thus of the KMD (3), (4) as discussed below. Optimal learning bounds in operator norm for KRR, PCR and RRR are established in [27]. In this work we show that the same optimal rates remain valid for the _Nystrom_ KRR, PCR and RRR estimators. According to [26] and [27] these operator norm bounds lead to reliable approximation of the Koopman mode decomposition of Eq. (4). We now provide our main result. **Theorem 4.6** (Operator norm error for KRR, i.i.d. data): _Let assumptions 4.1 to 4.5 hold. Let \((\mathbf{x}_{i},\mathbf{y}_{i})_{1\leq i\leq n}\) be i.i.d. samples, and let \(P_{Y}=P_{X}\) be the projection induced by \(m\) Nystrom landmarks drawn uniformly from \((\mathbf{x}_{i})_{1\leq i\leq n}\) without replacement. Let \(\lambda=c_{\lambda}n^{-1/(1+\beta)}\) where \(c_{\lambda}\) is a constant given in the proof, and assume \(n\geq(c_{\lambda}/K^{2})^{1+\beta}\). 
Then it holds with probability at least \(1-\delta\)_ \[\mathcal{E}(\hat{A}^{\text{KRR}}_{m,\lambda})^{1/2}\lesssim n^{-\frac{1}{2(1+ \beta)}}\qquad\text{provided}\qquad m\gtrsim\max(1,n^{\tau/(1+\beta)})\log(n/ \!\delta).\] The proof is provided in Appendix E.2, but essentially relies on a decomposition involving the terms \(\lVert C^{-1/2}_{\lambda}(C_{YX}-\hat{C}_{YX})\rVert\), \(\lVert C^{-1/2}_{\lambda}(C-\hat{C})\rVert\), \(\lVert C^{-1/2}_{\lambda}(C-\hat{C})C^{-1/2}_{\lambda}\rVert\), as well as bounding the quantity \(\|P_{X}^{\perp}C^{1/2}\|\) where \(P_{X}^{\perp}\) denotes the projection on the orthogonal of \(\operatorname{ran}(P_{X})\). All these terms are bounded using two variants of the Bernstein inequality. Note that our results can easily be extended to leverage score sampling of the landmarks by bounding term \(\|P_{X}^{\perp}C^{1/2}\|\) by means of [49, Lemma 7]; the same rate could then be obtained using a smaller number \(m\) of Nystrom points. The rate \(n^{-1/(2(1+\beta))}\) is known to be optimal (up to the log factor) in this setting by assuming an additional lower bound on the decay of the covariance's eigenvalues of the kind \(\lambda_{i}(C)\gtrsim i^{-1/\beta}\), see [27, Theorem 7 in D.4]. One can see that without particular assumptions (\(\beta=\tau=1\)), we only need the number \(m\) of inducing points to be of the order of \(\Omega(\sqrt{n})\) in order to get an optimal rates. For \(\tau\) fixed, this number increases when \(\beta\) decreases (faster decay of the covariance's spectrum), however note that the optimal rate depends on \(\beta\) and also improves in this case. The dependence in \(\tau\) is particularly interesting, as for instance with a Gaussian kernel it is known that \(\tau\) can be chosen arbitrarily closed to zero [30, 16]. In that case, the number \(m\) of inducing points can be taken on the order of \(\Omega(\log n)\). Note that a bound for the Nystrom KRR estimator has been derived in Hilbert-Schmidt norm by Ahmad et al. [3]. Using the operator norm however allows to derive bounds on the eigenvalues (see discussion below), which is of paramount importance for practical applications. Moreover, we now provide a bound on the error of PCR and RRR estimators, which are not covered in [3]. **Lemma 4.7** (Operator norm error for PCR and RRR, i.i.d. data): _Under the assumptions of Theorem 4.6, taking \(\lambda=c_{\lambda}n^{-1/(1+\beta)}\) with \(c_{\lambda}\) as in Theorem 4.6, \(n\geq(c_{\lambda}/K^{2})^{1+\beta}\), and provided_ \[m\gtrsim\max(1,n^{\tau/(1+\beta)})\log(\nicefrac{{n}}{{\delta}}),\] _it holds with probability at least \(1-\delta\)_ \[\mathcal{E}(\hat{A}_{m,\lambda}^{\text{RRR}})^{1/2} \lesssim c_{\mathrm{RRR}}\,n^{-\frac{1}{2(1+\beta)}},\,\text{ for }r\text{ s.t. }\sigma_{r+1}(\Phi_{Y|X})<\min(\sigma_{r}(\Phi_{Y|X}),n^{-\frac{1}{2(1+\beta)}})\] \[\text{and}\quad\mathcal{E}(\hat{A}_{m}^{\text{PCR}})^{1/2} \lesssim c_{\mathrm{PCR}}\,n^{-\frac{1}{2(1+\beta)}},\,\text{ for }r>n^{\frac{1}{ \beta(1+\beta)}},\] _where \(c_{\mathrm{RRR}}=(\sigma_{r}^{2}(\Phi_{Y|X})-\sigma_{r+1}^{2}(\Phi_{Y|X}))^{-1}\) and \(c_{\mathrm{PCR}}=(\sigma_{r}(\Phi_{X})-\sigma_{r+1}(\Phi_{X}))^{-1}\) are the problem dependant constants._ Note that when rank of \(\mathcal{K}_{\pi}\) is \(r\), then there is no restriction on \(r\) for the RRR estimator, while for PCR the choice of \(r\) depends on the spectral decay property of the kernel. 
In general, if \(r>n^{\frac{1}{\beta(1+\beta)}}\), then \(\sigma_{r+1}(\Phi_{Y|X})\leq\sigma_{r+1}(\Phi_{X})\lesssim n^{-1/(2(1+\beta))},\) which implies that RRR estimator can achieve the same rate of PCR but with smaller rank. Again the rate is sharp (up to the log factor) in this setting [27]. Koopman mode decompositionAccording to [26, Theorem 1], working in operator norm allows us to bound the error of our estimators for dynamic mode decomposition, as well as to quantify how close the eigenpairs \((\hat{\lambda}_{i},\hat{\varphi}_{i})\) of an estimator \(\hat{A}^{*}\) are to being eigenpairs of the Koopman operator. Namely, recalling that for function \(\hat{\varphi}_{i}\), the corresponding candidate for Koopman eigenfunction in \(L_{\pi}^{2}\) space is \(\Phi_{X}^{*}\hat{\varphi}_{i}\), one has \(\|\mathcal{K}_{\pi}(\Phi_{X}^{*}\hat{\varphi}_{i})-\hat{\lambda}_{i}(\Phi_{X} ^{*}\hat{\varphi}_{i})\|/\|\Phi_{X}^{*}\hat{\varphi}_{i}\|\leq\mathcal{E}(\hat {A})^{1/2}\|\hat{\varphi}_{i}\|/\|\Phi_{X}^{*}\hat{\varphi}_{i}\|\). While eigenvalue and eigenfunction learning rates were studied, under additional assumptions, in [27], where the operator norm error rates were determinant, here, in Section 5, we empirically show that the proposed estimators accurately learn the Koopman spectrum. We refer the reader to Appendix D for the details on computation of eigenvalues, eigenfunctions and KMD of an estimator in practice. Dealing with non-i.i.d. dataThe previous results hold for i.i.d. data, which is not a very realistic assumption when learning from sampled trajectories. Our results can however easily be extended to \(\beta\)-mixing processes by considering random variables \(Z_{i}=\sum_{j=1}^{k}X_{i+j}\) (thus representing portions of the trajectory) sufficiently separated in time to be nearly independent. We now consider a trajectory \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n+1}\) with \(\mathbf{x}_{1}\sim\pi\) and \(\mathbf{x}_{t+1}\sim p(\mathbf{x}_{t},\cdot)\) for \(t\in[1,n]\), and use Lemma J.8 (re-stated from [26]) which allows to translate concentration results on the \(Z_{i}\) to concentration on the \(X_{i}\) by means of the \(\beta\)-mixing coefficients defined as \(\beta_{X}(k):=\sup_{B\in\mathcal{B}\otimes\mathcal{B}}\left|\rho_{k}(B)-(\pi \times\pi)(B)\right|\) where \(\rho_{k}\) denotes the joint probability of \((X_{t},X_{t+k})\). Using this result the concentration results provided in appendix can thus be generalised to the \(\beta\)-mixing setting, and apart from logarithmic dependencies we essentially obtain similar results to the i.i.d. setting except that the sample size \(n\) is replaced by \(p\approx n/(2k)\). ## 5 Experimental validation In this section we show how the estimators proposed in section 3 perform in various scenarios, ranging from synthetic low dimensional ODEs to large-scale molecular dynamics simulations. The code for reproducing all experiments is available online. Our initial aim is to demonstrate the speed of NysPCR and NysRRR, compared to the recently proposed alternative Streaming KAF (sKAF) [18]. Then we show that their favorable scaling properties make it possible to train on large molecular dynamics datasets without any subsampling. 
In particular we run a metastability analysis of the alanine dipeptide and the Trp-cage protein, showcasing the accuracy of our models' eigenvalue and eigenfunction estimates, as well as their efficiency on massive datasets (\(>500\,000\) points). **Efficiency Benchmarks on Lorenz '63** The chaotic Lorenz '63 system [33] consists of 3 ODEs with no measurement noise. With this toy dynamical system we can easily compare the Nystrom estimators to two alternatives: 1. the corresponding _exact_ estimators and 2. the sKAF algorithm, which also uses randomized linear algebra to improve the efficiency of PCR. In this setting we sample long trajectories from the system, keeping the first points for training (the number of training points varies for the first experiment, and is fixed to \(10\,000\) for the second, see fig. 2), and the subsequent ones for testing. In Figure 1 we compare the run-time and accuracy of NysPCR and NysRRR versus their full counterparts. To demonstrate the different scaling regimes we fix the number of inducing points to 250 and increase the number of data points \(n\). The accuracy of the two solvers (as measured with the normalized RMSE metric (nRMSE) [18] on the first variable) is identical for PCR and close for RRR, but the running time of the approximate solvers increases much more slowly with \(n\) than that of the exact solvers. Each experiment is repeated 20 times to display error bars over the choice of Nystrom centers. In the second experiment, shown in fig. 2, we reproduce the setting of [18] by training at increasingly long forecast horizons. Plotting the nRMSE we verify that sKAF and NysPCR converge to very similar accuracy values, although NysPCR is approximately \(10\) times faster. NysRRR instead offers slightly better accuracy, at the expense of a higher running time compared to NysPCR. Error bars are the standard deviation of nRMSE over 5 successive test sets with \(10\,000\) points each. Figure 1: Full and Nyström estimators trained on L63 with increasing \(n\). Error (_left_) and running time (_right_) are plotted to show efficiency gains without accuracy loss with the Nyström approximation. RBF(\(\sigma=3.5\)) kernel, \(r=25\) principal components and \(m=250\) inducing points. Figure 2: Nyström and sKAF estimators trained on L63 for increasing forecast horizons; the error (_left_) and overall running times (_right_) are shown. We used an RBF kernel with \(\sigma=3.5\), \(r=50\), \(m=250\) (for Nyström methods) and \(\sqrt{n}\log n\) random features (for sKAF). **Molecular dynamics datasets** An important application of Koopman operator theory is in the analysis of molecular dynamics (MD) datasets, where the evolution of a molecule's atomic positions over time is modelled. Interesting systems are very high dimensional, with hundreds or thousands of atoms. Furthermore, trajectories are generated at very short time intervals (\(<1\,\mathrm{ns}\)) but interesting events (e.g. protein folding/unfolding) occur at timescales on the order of at least \(10\,\mathrm{\mu s}\), so that huge datasets are needed to have a few samples of the rare events. The top eigenfunctions of the Koopman operator learned on such trajectories can be used to project the high-dimensional state space onto low-dimensional coordinates which capture the long-term, slow dynamics. We take three \(250\,\mathrm{ns}\) long simulations sampled at \(1\,\mathrm{ps}\) of the alanine dipeptide [60], which is often taken as a model system for molecular dynamics [43; 42]. We use the pairwise distances between heavy atoms as features, yielding a 45-dimensional space. We train a NysRRR model with \(10\,000\) centers on top of the full dataset (\(449\,940\) points are used for training, the rest for validation and testing) with lag time \(100\,\mathrm{ps}\), and recover a 2-dimensional representation which correlates well with the \(\phi,\psi\) backbone dihedral angles of the molecule, known to capture all relevant long-term dynamics. Figure 3a shows the top two eigenfunctions overlaid onto \(\phi,\psi\): the first separates the slowest transition between low and high \(\phi\); the second separates low and high \(\psi\). The implied time-scales from the first two non-trivial eigenvalues are \(1262\,\mathrm{ps}\) and \(69\,\mathrm{ps}\), which are close to the values reported by Nuske et al. [43] (\(1400\,\mathrm{ps}\) and \(70\,\mathrm{ps}\)), who used a more complex post-processing procedure to identify time-scales. We then train a PCCA+ [14] model on the first three eigenfunctions to obtain three states, as shown in fig. 3b. PCCA+ acts on top of a fine clustering (in our case obtained with k-means, \(k=50\)) to find the set of maximally stable states by analyzing transitions between the fine clusters. The coarse clusters clearly correspond to the two transitions described above. Finally we take a \(208\,\mathrm{\mu s}\) long simulation of the fast-folding Trp-cage protein [32], sampled every \(0.2\,\mathrm{ns}\). Again, the states are the pairwise distances between non-hydrogen atoms belonging to the protein, in \(10\,296\) dimensions. A NysRRR model is trained on \(626\,370\) points, using \(5000\) centers, in approximately 10 minutes. Note that without sketching this would be a completely intractable problem. Using a lag-time of \(10\,\mathrm{ns}\) we observe a spectral gap between the third and fourth eigenvalues, hence we train a PCCA+ model on the first 3 eigenfunctions to obtain the states shown in fig. 4. The first non-trivial Koopman eigenvector effectively distinguishes between the folded (state 1) and unfolded states, as is evident from the first row of fig. 4. The second one instead can be used to identify a partially folded state of the protein (state 0), as can be seen from the insets in fig. 4. Figure 4: First eigenfunctions for Trp-cage dynamics, colored according to the membership probability for each state in a PCCA+ model. The bottom insets show a few overlaid structures from each state. The first eigenfunction exhibits a strong linear separation between state 1 (folded) and the other states. The second separates between state 0 (partially folded) and the rest. NysRRR model trained with \(m=5000\), \(r=10\), RBF(\(\sigma=0.02\)) kernel, \(\lambda=10^{-10}\). Figure 3: Dynamics of the alanine dipeptide (lag-time 100), Nyström RRR model. On the left the first two non-constant eigenfunctions, overlaid in color on the Ramachandran plot which fully describes the metastable states. On the right the three states of a PCCA+ model trained on the eigenfunctions. ## 6 Conclusions We introduced three efficient kernel-based estimators of the Koopman operator relying on random projections, and provided a bound on their excess risk in operator norm, which is of paramount importance to control the accuracy of the Koopman mode decomposition. Random projections allow us to process even the longest trajectories efficiently, and these gains come for free as our estimators still enjoy optimal theoretical learning rates. 
We leave for future work the refinement of our analysis under e.g. an additional source condition assumption or in the misspecified setting. Another future research direction will be to devise ways to further reduce the computational complexity of the estimators. ## 7 Acknowledgements This paper is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819789). L. R. acknowledges the financial support of the European Research Council (grant SLING 819789), the AFOSR projects FA9550-18-1-7009, FA9550-17-1-0390 and BAA-AFRL-AFOSR-2016-0007 (European Office of Aerospace Research and Development), the EU H2020-MSCA-RISE project NoMADS - DLV-777826, and the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. M. P., V. K. and P. N. acknowledge financial support from PNRR MUR project PE0000013-FAIR and the European Union (Projects 951847 and 101070617).
2305.12612
PrOnto: Language Model Evaluations for 859 Languages
Evaluation datasets are critical resources for measuring the quality of pretrained language models. However, due to the high cost of dataset annotation, these resources are scarce for most languages other than English, making it difficult to assess the quality of language models. In this work, we present a new method for evaluation dataset construction which enables any language with a New Testament translation to receive a suite of evaluation datasets suitable for pretrained language model evaluation. The method critically involves aligning verses with those in the New Testament portion of English OntoNotes, and then projecting annotations from English to the target language, with no manual annotation required. We apply this method to 1051 New Testament translations in 859 languages and make them publicly available. Additionally, we conduct experiments which demonstrate the efficacy of our method for creating evaluation tasks which can assess language model quality.
Luke Gessler
2023-05-22T00:33:52Z
http://arxiv.org/abs/2305.12612v2
# PrOnto: Language Model Evaluations for 859 Languages ###### Abstract Evaluation datasets are critical resources for measuring the quality of pretrained language models. However, due to the high cost of dataset annotation, these resources are scarce for most languages other than English, making it difficult to assess the quality of language models. In this work, we present a new method for evaluation dataset construction which enables any language with a New Testament translation to receive a suite of evaluation datasets suitable for pretrained language model evaluation. The method critically involves aligning verses with those in the New Testament portion of English OntoNotes, and then projecting annotations from English to the target language, with no manual annotation required. We apply this method to 1051 New Testament translations in 859 and make them publicly available. Additionally, we conduct experiments which demonstrate the efficacy of our method for creating evaluation tasks which can assess language model quality. ## 1 Introduction Language models such as BERT Devlin et al. (2019) and other Transformer-based Vaswani et al. (2017) language models (TLMs) are notoriously difficult to understand. Evaluation datasets such as SuperGLUE Wang et al. (2019), BLiMP Warstadt et al. (2020), and others have been essential resources for understanding and comparing different models' capabilities. By measuring two models' performance on a question-answering task, for example, we are able to make an assessment about the models' capabilities relative to each other. Unfortunately, these evaluation tasks almost always require _annotated_ data produced by a human being, and these datasets are therefore very scarce except for the most well-resourced languages, especially English. This scarcity of evaluation datasets has been a significant hindrance for research on TLMs for low-resource languages, as it is much harder to assess the quality and properties of models without them. Here, we present PrOnto, a dataset consisting of **pro**jections of **Onto**Notes' New Testament annotations into New Testament translations in 859 different languages. OntoNotes Hovy et al. (2006) is a corpus with many annotation types covering a wide variety of phenomena in grammar and meaning. A subset of the English portion of OntoNotes contains the Easy-to-Read Version (ERV) translation of the New Testament, complete with a segmentation of each sentence into the book, chapter, and verse of the Bible that it appeared in. Using these verse alignments, we can create new annotations for a given target language, yielding high-quality annotated data for the target language, ready to use in an evaluation, without requiring more human annotation. We focus on annotations which do not require token alignments (e.g., number of referential noun phrases that appear in a verse), as this avoids a source of noise (poor alignments) in annotation projection. In this work, we describe our methods for creating the PrOnto dataset, and also provide experimental results demonstrating its utility as an evaluation resource. 
We summarize our contributions as follows: * We publish evaluation datasets for 5 tasks across 1051 New Testament translations in 859 languages.1 Footnote 1: These datasets and all of our code are available at [https://github.com/lgessler/pronto](https://github.com/lgessler/pronto) * We publish the system we used to create this dataset, which can be used by anyone to extend this dataset to any language that has a New Testament translation or a part of one. * We perform experiments covering a wide range of languages with respect to typological variables and data-richness which demon strate the utility of this dataset for assessing pretrained language model quality. ## 2 Related Work Beginning with the publication of the first modern TLM, BERT Devlin et al. (2019), pretrained TLMs have had their quality assessed by applying them to a wide array of downstream tasks. It is typical to apply the TLM in question to as many downstream evaluations as practically possible, since downstream tasks vary considerably in which properties of language they are sensitive to. A syntactic parsing task, for example, is presumably more discriminative of formal aspects of grammar, while a sentiment analysis task is presumably more discriminative of meaning-related aspects of grammar. All 11 of the tasks used to evaluate BERT in Devlin et al. (2019) are meaning-oriented tasks, with natural language understanding (NLU) and question answering (QA) being heavily represented. Most post-BERT English TLMs have followed its lead in favoring meaning-related tasks (e.g. Liu et al., 2019; Zhang, 2022, _inter alia_). The English TLM evaluation dataset ecosystem has continued to grow, and some evaluation dataset suites have grown to encompass over 200 tasks (BIG-bench collaboration, 2021). Among other high-resource languages, there is more variation: MacBERT Cui et al. (2020), a Mandarin Chinese BERT, is evaluated using tasks comparable in kind and quantity to those used with BERT, while CamemBERT Martin et al. (2020), a French BERT, is evaluated with a large proportion of Universal Dependencies (UD) Nivre et al. (2016) tasks. The situation for low-resource languages is quite different. Since annotated datasets are so rare and small for low-resource languages, most low-resource TLM evaluation has been centered on just a few datasets, all of which are fairly form-oriented in terms of what they are assessing models for. Occasionally, a family of low-resource languages might have a high-quality evaluation dataset: for example, Ogueji et al. (2021) train a low-resource TLM for 11 African languages, and evaluate on named-entity recognition (NER) using the MasakhaNER dataset Adelani et al. (2021). However, more often, low-resource languages do not have resources like this. Much recent work on low-resource TLMs Chau et al. (2020); Chau and Smith (2021); Muller et al. (2021); Gessler and Zeldes (2022), _inter alia_) uses only two datasets. The first is UD corpora, which consist of human-annotated syntactic trees and tags which can be used for form-related tasks such as part-of-speech tagging and syntactic dependency parsing. The second is the WikiAnn Pan et al. (2017) dataset, an NER dataset that was automatically generated for 282 languages based on the structure of Wikipedia hyperlinks. While evaluations that use both of these datasets have proven to be useful, the UD dataset and to a lesser extent the WikiAnn dataset are both more form- than meaning-based in terms of what they assess in models. 
This could mean that many low-resource TLM evaluations are missing important dimensions of model quality that cannot be assessed well by existing evaluation datasets. Annotation projection is a technique at least as old as Yarowsky and Ngai (2001), where token alignments are used to project noun phrase boundaries and part-of-speech tags across languages. Much similar work has been done for other annotation types--just a few examples of works in this literature include Pado and Lapata (2009) (semantic roles), Asgari and Schutze (2017) (tense), and Enghoff et al. (2018) (named entiy recognition). It is also worth noting that the idea of using a large collection of Bible data for NLP/CL is not a new idea McCarthy et al. (2020). ## 3 OntoNotes Before we describe our work, we briefly describe some important details of OntoNotes Hovy et al. (2006). OntoNotes is a multilayer annotated corpus whose English portion contains the Easy-to-Read Version (ERV) translation of the New Testament of the Christian Bible. OntoNotes' major anno Figure 1: A sample verse, John 11:35, taken from OntoNotes. Note the annotations for tokenization, part-of-speech, constituency syntax, coreference, and argument structure. This file is in “OntoNotes Normal Form” (ONF), a human-readable format which OntoNotes provides its annotations in. tation types include coreference, Penn Treebank-style constituency syntax, NER, WordNet sense annotations, and PropBank argument structure annotations. The ERV New Testament subcorpus of OntoNotes has all of these major annotation types with the notable exception of NER and WordNet sense annotation, which was not done for the New Testament. An example annotation of John 11:35 is given in Figure 1. The "Tree" annotation has a Penn Treebank-style parse which includes an analysis of the sentence's syntactic structure as well as part-of-speech tags. The "Leaves" section contains multiple annotation types which are anchored on the annotation's head token. The coref type indicates a coreference annotation, which is then followed by coreference type, coreference chain ID, and token span information. The annotation in Figure 1 tells us that: token 0, _Jesus_, is the beginning of a new coreference mention; the coreference type of this mention is IDENT; the mention belongs to coreference chain 16; and this mention begins at token 0 and ends at token 0. The prop type indicates the a PropBank annotation headed at the exponent of a predicate, typically a verb, and gives the PropBank sense ID of the predicate as well as the arguments of the predicate. In the example in Figure 1, the annotation tells us that: cried is the head of a PropBank predicate; the sense of the predicate is cry.02; the beginning of the v argument is headed at token 1, and its corresponding constituent is 0 levels up in the parse tree; and the beginning of the ARGO argument is headed at token 0 and its corresponding constituent is 1 level up in the parse tree. For full details, we refer readers to the official documentation at [https://catalog.ldc.upenn.edu/docs/LDC2013T19/OntoNotes-Release-5.0.pdf](https://catalog.ldc.upenn.edu/docs/LDC2013T19/OntoNotes-Release-5.0.pdf). ## 4 Methods We would like to have more evaluation datasets for low-resource TLM evaluation, though constructing these for each individual language is expensive, as the creation of new datasets generally requires human annotation of some kind. 
However, in this work, we propose a method for creating evaluation datasets without requiring additional human annotation. New Testament translations are also highly common for low-resource languages because of missionary work, and OntoNotes' New Testament subcorpus is richly annotated. Because the New Testament is partitioned into verses that are highly consistent across translations, it is possible to view verse boundaries as sentence-like alignments across translations, which would allow the projection of sentence-level annotations from OntoNotes to another New Testament translation. This is the approach we take up: we propose five annotation projection methods, apply them to Bible translations, and perform evaluations to assess their utility. More specifically, our goal is to take a New Testament translation in a _target language_, align its verses with the verses present in OntoNotes, and then use OntoNotes' annotations to annotate the target language's translation, verse by verse. Here, we describe the steps we take to process the data. ### Bible Translations We use all permissively-licensed New Testament translations available at ebible.org, a repository of Bible data, processing the proprietary XML format of these translations into our simple TSV format. Some translations are very small or do not contain any of the New Testament, and we discard any with fewer than 500 verses overlapping with OntoNotes, which we do not count in our totals. The final 1051 translations cover a total of 859 languages. ### Alignment We parse OntoNotes' ONF files, and we assume that the target translation is given in a simple TSV format where each row contains the textual content of the verse as well as the verse's book, chapter, and verse number. In an ideal situation, an OntoNotes sentence would correspond to exactly one verse in both the ERV and the target translation, but this is not always the case. These are the possible complications: Figure 2: Matthew 9:5-6, as translated by the ERV (above) and the NRSVUE (below). In the ERV translation, verses 5 and 6 are fused, which means that no boundary between the two is indicated, and that their contents have been altered in linear ordering. 1. A verse contains more than one OntoNotes sentence. Some verses simply contain more than one sentence. 2. An OntoNotes sentence spans more than one ERV verse. Verse boundaries are not guaranteed to coincide with sentence boundaries, so sometimes a sentence will begin in one verse and end in another. In OntoNotes, a sentence never spans more than two verses. 3. The verse in either the ERV or the target translation has been combined with one or more other verses. Bible translators sometimes choose to combine verses and in such cases do not provide internal boundaries for the verses that have been merged. For determining a mapping, (1) presents no problem--we simply associate multiple OntoNotes sentences with a single verse. For (2), we associate the sentence with both verses, retaining the information that a sentence spanned a verse boundary. (For all of the tasks described in this paper, we discard verses that have sentences that cross verse boundaries, but the alignments are still constructed and ready to use.) For (3), if verses have been combined in either the ERV or the target translation, we simply remove the combined verses from consideration. In the ERV, combined verses are very rare, accounting for well under 1% of all verses. In other translations, this figure is also quite small. 
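A minimal sketch of this alignment step is given below. It assumes hypothetical data structures (an OntoNotes sentence object exposing the list of `(book, chapter, verse)` keys it touches, and a target-translation TSV with a header naming `book`, `chapter`, `verse`, and `text` columns); it illustrates the mapping logic, not the released pipeline's actual code.

```python
import csv
from collections import defaultdict

def align_verses(ontonotes_sentences, target_tsv_path):
    """Map each (book, chapter, verse) key to its OntoNotes sentences and target verse text.

    Combined verses (case 3) are assumed to have been detected and dropped upstream.
    """
    # Case 1: a verse may collect several OntoNotes sentences.
    # Case 2: a sentence spanning two verses is attached to both, with a boundary flag,
    #         so that tasks needing single-verse sentences can filter it out later.
    by_verse = defaultdict(list)
    for sent in ontonotes_sentences:
        crosses_boundary = len(sent.verses) > 1
        for key in sent.verses:
            by_verse[key].append((sent, crosses_boundary))

    aligned = {}
    with open(target_tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            key = (row["book"], int(row["chapter"]), int(row["verse"]))
            if key in by_verse:
                aligned[key] = {"sentences": by_verse[key], "target_text": row["text"]}
    return aligned
```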
### Tasks Once alignment is complete, we are prepared to generate task data. We propose five tasks, all of which are sequence classification tasks either on single sequences or on paired (a la BERT's next sentence prediction) sequences. While we do not pursue this in our present work, we expect that it may also be possible to produce annotations for token-level tasks using high-quality automatically generated word alignments. A fundamental assumption for our approach is that some linguistic properties a sentence might have ought to be _similar enough_ in all languages to yield projected annotations which are useful for model evaluation. Of course, short of examining every last verse, we cannot know with certainty that just because, for example, an English sentence has declarative sentence mood, its Farsi translation would also have declarative sentence mood. But we do have reason to believe that sentence mood ought to be fairly well preserved across translations, given that sentence mood is so highly associated with semantic-pragmatic rather than formal aspects of language (Portner, 2018), and so we can have some justification in assuming that sentence mood ought to be the same between translation pairs. At any rate, regardless of the justifiability of this assumption, we contend that if this assumption does hold for a certain annotation type, then we should see differential performance across pretrained TLMs, which we will examine in SS5.2. Task 1: Non-pronominal Mention Counting (NMC)Predict the number of non-pronominal _mentions_ in a verse. The intuition for this task is that it ought to require a model to understand which spans in a sentence could co-refer, which requires knowledge of both form and meaning. A mention is a span of tokens, often but not always a noun phrase, that has been annotated for coreference, according to the OntoNotes-specific coreference annotation guidelines.2 Footnote 2: [https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-coreference-guidelines.pdf](https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-coreference-guidelines.pdf) It is important to point out that some entity must be mentioned at least _twice_ in a document in order to be annotated: if an entity is only mentioned once, then the mention is not annotated. This makes this task somewhat pathological, because models will only be getting verse-level (not document-level) context, and this ought to make it impossible to tell in many cases whether a given markable (some tokens that _could_ be a mention) genuinely is a mention. This is unfortunate, but this is not necessarily fatal for the utility of this task.3 Footnote 3: An alternative would be to simply count non-pronominal noun phrases in the parse tree, but this is not perfect either: some noun phrases, such as in _on \([_{\text{NP}}\) the other hand]_, are never referential, a fact which coreference annotations are sensitive to but syntactic annotations are not. Without further work, it is unclear which is practically better, and we choose to use the coreference-based approach in this work. Task 2: Proper Noun in Subject (PNS)Predict whether the subject of the first sentence in the verse contains a proper noun. To determine whether the subject contains a proper noun, we attempt to find a constituent labeled NP-SBJ in the main clause, and if we succeed in finding exactly one, we consider it a positive instance if any of the tokens within it are tagged with "NNP" or "NNPS". 
Note that this does not necessarily mean that the _head_ of the subject is a proper noun: _scholars_/NNS _from_/IN _Burundi_/NNP would count as a positive instance by our criterion, despite the fact that a common noun heads it. Task 3: Sentence Mood (SM)Predict whether the mood of the main clause of the first sentence is declarative, interrogative, or imperative. In Penn Treebank parse trees, sentence mood is encoded in the label of the highest constituent: for example, S and S-CLF are defined as having declarative sentence mood, S-IMP is imperative, and SQ, SBARQ, and SQ-CLF are interrogative. If the top constituent does not have a label that falls into any of these categories, which likely means it is a sentence fragment or some other unusual sentence type, we discard it. Task 4: Same Sense (SS)Given two verses \(v_{1}\) and \(v_{2}\), and given further that \(v_{1}\) contains at least one usage of the predicate identified by sense label \(s\), predict whether \(v_{2}\) also has a usage of sense label \(s\). Note that in our formulation of this task, the sense label \(s\) is explicitly given as an input rather than left unexpressed because otherwise the model would need to look for whether _any_ sense-usages overlap across the two verses, which is likely too hard. Pairs are sampled so that negative and positive instances are balanced. This task is perhaps the most suspect of all of our five proposed tasks given the great diversity of distinctions that may or may not be made at the word sense level. For example, for the English word _go_, Bukiyip has at least three different lexical items, distinguished by vertical motion relative to the mover's position at the beginning of the going event: _nato_ 'go up, ascend'; _naboh_ 'go down, descend'; and _narih_ 'go around, go at a level grade'. As such, we should expect that performance will likely be nowhere close to 90% even on non-English high-resource languages, as the English sense labels will likely often reflect distinctions which are either unexpressed or not specific enough for the target language's sense-inventory. Still, we expect that for any given language, _some_ sense labels will still be appropriate when projected, and if this is the case, then we expect that higher-quality models will be able to perform better than lower-quality ones. Task 5: Same Argument Count (SAC)Given two verses \(v_{1}\) and \(v_{2}\) which both feature a usage of the predicate identified by sense label \(s\), predict whether both usages of \(s\) have the same number of arguments. Pairs are sampled so that negative and positive instances are balanced. We do not require that the verses have _exactly_ one usage of \(s\), which we do in the interest of using as many distinct verses as possible, though this may be interesting to consider in future work. ## 5 Evaluation In order to evaluate our dataset, we implement a simple sequence classification model and apply it to our tasks using a wide range of pretrained TLMs. We evaluate a wide range of languages and models in order to get as much information as possible about the utility of our methods. These include several low-resource languages, but we also include some high- and medium-resource languages in order to get additional perspective. 
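Before describing the evaluation setup, a brief illustration of how the verse-level labels of the previous section are read off the projected annotations: the sketch below implements the sentence-mood mapping of Task 3. The label sets follow the description above, while the function name and return values are illustrative choices rather than the released code.

```python
# Penn Treebank-style labels of the highest constituent, as described for Task 3.
DECLARATIVE = {"S", "S-CLF"}
IMPERATIVE = {"S-IMP"}
INTERROGATIVE = {"SQ", "SBARQ", "SQ-CLF"}

def sentence_mood(top_label):
    """Map the label of the first sentence's highest constituent to a mood label.

    Returns "declarative", "imperative", or "interrogative", or None when the sentence
    (e.g. a fragment) falls outside these categories and should be discarded.
    """
    if top_label in DECLARATIVE:
        return "declarative"
    if top_label in IMPERATIVE:
        return "imperative"
    if top_label in INTERROGATIVE:
        return "interrogative"
    return None
```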
### Languages The only work we were able to locate in the literature on low-resource TLMs that both worked on a wide range of languages and made all of their pretrained TLMs publicly available is Gessler and Zeldes (2022), and we therefore include all of the languages they studied in their work. These include the low-resource languages Wolof, Sahidic Coptic, Uyghur, and Ancient Greek. (Gessler and Zeldes also published models for Maltese, but we were unable to locate a permissively-licensed Maltese Bible.) These also include Tamil and Indonesian, two medium-resource languages. We additionally consider the high-resource languages French and Japanese, which may be interesting to look at given that they are both high-resource and are, respectively, typologically similar to and divergent from English. Any differences that emerge between French and Japanese could be indicative of typological distance degrading the quality of our projected annotations. Additionally, both of these languages have high-quality monolingual TLMs, and it would be interesting to examine whether different patterns emerge in high-resource settings. Finally, we include two different English translations. First, we include the original translation used in OntoNotes, the ERV, because it ought to give us an upper bound on projected annotation quality: ERV annotations projected to the same ERV verses ought to have the highest possible quality. Second, we include Noah Webster's revision of the King James Version. The Webster Bible differs from the KJV only in that mechanical edits were made to replace archaic words and constructions, and we include it in order to see if relatively small differences across translations (same language, slightly different register) are enough to cause major differences in task performance, which would then indicate differences in projected annotation quality. #### Bible Manifest #### Model Implementation We use HuggingFace's (Wolf et al., 2020) off-the-shelf AutoModelForSequenceClassification model. This model takes a pretrained TLM and adds a sequence classification head (with pretrained weights, if available). The architectural details of this head vary depending on which exact model a pretrained TLM is for (e.g. BertModel or RobertaModel), but most major models, including BERT and RoBERTa, simply use one (BERT) or two (RoBERTa) linear transformations applied to the [CLS] (or equivalent) token. The model is trained with a low learning rate for a small number of epochs before it is evaluated on a held-out test set for each task. **Hyperparameters** Specifically, we use the default parameters for the transformers package, version 4.28.1, for the Trainer class, with the following exceptions. Learning rate is set to 2e-5, batch size is set to 16, training epochs is set to 10 except for SM, in which case it is 20, and weight decay for AdamW is set to 0.01. **NMC Capping** For NMC, while we always provide the genuine number of non-pronominal mentions in our dataset, in our experiments we cap the maximum number of mentions at 3, labeling any sentence with more than 3 mentions as if it only had 3. This was done to make the task easier, as the number of sentences with more than 3 mentions is very low, and the model subsequently suffers while trying to learn how to count higher than three. **Sequence Packing for SS and SAC** Recall that for the SS and SAC tasks, the inputs include not only two verses but also a sense label. 
First, we pack the two verses into a single input sequence, obeying any model-specific rules about where to put special tokens. In a BERT style model, for example, the sequence would look like [CLS] \(v_{1}\) [SEP] \(v_{2}\) [SEP]. There are many ways the sense label \(s\) could be provided as an input, but we choose to provide the label as an extra token after the final token of the base sequence. To do this, we extend the vocabulary \(\mathcal{V}\) with \(|\mathcal{S}|\) more entries, where \(\mathcal{S}\) is the inventory of sense labels, so that the new vocabulary has size \(|\mathcal{V}|+|\mathcal{S}|\). Senses are individually assigned to the new entries, and each sense is put after the final token, e.g. [CLS] \(v_{1}\) [SEP] \(v_{2}\) [SEP] \(s\). MetricsWe report accuracy on all tasks. Other more specialized metrics might be more informative for some tasks where e.g. the task is a binary classification problem or the label distribution is highly imbalanced, but we find that accuracy alone is sufficient to support our findings here, and choose to work with it exclusively to simplify the discussion. ### List of Bibles Our complete list of Bibles for the evaluation is as follows. We format them so that our own abbreviation for them comes first, the full title follows, and the code for ebible.org's page follows in parentheses (append this code to ebible.org/details.php?id=). 1. ERV: Easy-to-Read version (engerv) 2. WBT: Webster Bible (engwebster) 3. IND: Indonesian New Testament (ind) 4. TAM: Tamil Indian Revised Bible (tam2017) 5. FRA: French Free Holy Bible for the World (frasbl) 6. JPN: New Japanese New Testament (jpn1965) 7. GRC: Greek Majority Text New Testament (grcmt) 8. COP: Coptic Sahidic New Testament (copshc) 9. UIG: Uyghur Bible (uigara) 10. WOL: Wolof Bible 2020 Revision (wolKYG) ### List of Pretrained Models Our complete list of pretrained models from HuggingFace Hub for the evaluation is as follows. Note that some abbreviations are repeated because language will disambiguate which one is meant. The models beginning with lgessler/microbert are taken from Gessler and Zeldes (2022), and the suffixes indicate whether pretraining took place with just MLM (-m) or the combination of MLM and part-of-speech tagging (-mx). (We refer readers to their paper for further details.) 1. bert-base-multilingual-cased: mBERT 2. xlm-roberta-base: XLM-R 3. bert-base-cased: BERT 4. distilbert-base-cased: DistilBERT 5. roberta-base: RoBERTa 6. camembert-base: BERT 7. cl-tohoku/bert-base-japanese: BERT 8. l3cube-pune/tamil-bert: BERT 9. cahya/bert-base-indonesian-522M: BERT 10. lgessler/microbert-....m: \(\mu\)BERT-M (where... is one of wolof, ancient-greek, indonesian, coptic, uyghur, tamil 11. lgessler/microbert-....m: \(\mu\)BERT-MX ### Results EnglishResults for our two English datasets are given in Table 1. A majority-label baseline is given in the row labeled with the translation (ERV or WBT), and results with several common pretrained English models as well as two multilingual models are given. Looking first at our "control" dataset, the projection from the ERV translation onto itself, we can see that overall our models perform well above the majority class baseline, indicating that all of our tasks are not intractable, at least in the most easy setting. It's worth noting that the Sentence Mood task is very easy in this condition, with two models getting a perfect score. The hardest task is Same Argument Count, with the best model performing only 13% higher than the baseline. 
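Before turning to the results in detail, the sketch below makes the model setup, the reported hyperparameters, and the sequence-packing scheme with an appended sense token concrete, using the HuggingFace transformers API; the sense-token naming scheme (e.g. `[SENSE:cry.02]`), the output directory, and the variable names are illustrative assumptions rather than the code used for the reported experiments.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-cased"
sense_labels = ["cry.02", "go.01"]  # in practice, the full inventory S of PropBank senses

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# One new vocabulary entry per sense label, growing the vocabulary to |V| + |S|.
tokenizer.add_tokens([f"[SENSE:{s}]" for s in sense_labels])

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.resize_token_embeddings(len(tokenizer))

def pack(v1, v2, sense):
    # [CLS] v1 [SEP] v2 [SEP] s : pack the verse pair, then append the sense token id.
    enc = tokenizer(v1, v2, truncation=True)
    enc["input_ids"].append(tokenizer.convert_tokens_to_ids(f"[SENSE:{sense}]"))
    enc["attention_mask"].append(1)
    if "token_type_ids" in enc:
        enc["token_type_ids"].append(1)
    return enc

args = TrainingArguments(
    output_dir="pronto-task",
    learning_rate=2e-5,                  # hyperparameters as reported above
    per_device_train_batch_size=16,
    num_train_epochs=10,                 # 20 for the Sentence Mood task
    weight_decay=0.01,
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
```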
A striking pattern with the sequence-pair tasks is that the RoBERTa-family models perform at chance in three out of four cases. The most obvious explanation is that the other, BERT-family models are pretrained with a sequence-pair task (next sentence prediction), while RoBERTa is not. We set this matter aside for now and note that even very popular and generally high-quality models can have anomalous performance on some tasks. Turning now to the other English translation, WBT, we see that performance is lower on the whole but remains discernibly higher than the baseline in all cases. It is worth noting that the variety of English used in WBT, a slightly modernized form of Early Modern English, is likely quite out of domain for all of our models, and in this sense, the WBT could be thought of as a few-shot setting. A pattern similar to the one for the ERV emerges, where the RoBERTa-family models fail to do anything meaningful for the Same Argument Count task. Overall, the results are in line with what we would expect given other published results which have evaluated the quality of these five pretrained models. The monolingual models almost always do best for ERV and in three out of five tasks for WBT (the exceptions being SS and PNS, where mBERT does best). Among the monolingual models, excepting the anomalous RoBERTa cases described above, BERT most often performs best, with DistilBERT doing best in only two cases, which accords with findings that DistilBERT's quality is usually slightly lower than BERT's (Sanh et al., 2020). In sum, these results on English corroborate our claim that our five tasks are well-posed, not pathologically difficult, and indicative of model quality, at least in English settings.

**Medium-resource Languages.** We turn now to our "medium-resource" languages in Table 2: French and Japanese at the higher end, and Indonesian and Tamil at the lower end. For all four languages, XLM-RoBERTa continues to struggle with the sequence-pair classification tasks, performing essentially at chance. For French and Japanese, the monolingual BERT model's performance is typically a bit better than either of the multilingual models' performance, with one exception: for the same-sense (SS) task, mBERT performs significantly better than the monolingual model. Thus the broad picture of performance is what we'd expect, though this one surprising result shows that our tasks are broad in what they assess models for. For Indonesian and Tamil, the \(\mu\)BERT models perform slightly worse on average than mBERT, in line with the results reported by Gessler and Zeldes (2022).

\begin{table} \begin{tabular}{l|c c c c c} Model & NMC & PNS & SM & SS & SAC \\ \hline **ERV** & 49.59 & 72.60 & 91.56 & 50.43 & 50.69 \\ DistilBERT & 71.93 & 99.07 & 99.72 & 94.48 & 61.34 \\ BERT & 71.12 & 99.23 & 100.00 & 97.75 & 63.25 \\ RoBERTa & 70.03 & 98.76 & 99.86 & 89.53 & 50.69 \\ mBERT & 67.30 & 99.23 & 99.86 & 96.11 & 61.04 \\ XLM-R & 69.35 & 99.07 & 100.00 & 49.57 & 50.69 \\ \hline \hline **WBT** & 49.73 & 70.43 & 90.87 & 50.53 & 50.78 \\ DistilBERT & 55.99 & 84.67 & 92.81 & 72.15 & 61.54 \\ BERT & 52.86 & 83.13 & 94.88 & 76.06 & 64.51 \\ RoBERTa & 55.31 & 82.04 & 91.15 & 57.11 & 50.78 \\ mBERT & 53.54 & 85.29 & 93.22 & 79.08 & 60.55 \\ XLM-R & 53.68 & 84.67 & 93.22 & 49.47 & 50.78 \\ \end{tabular} \end{table} Table 1: Task accuracy for English by model and translation. ERV is the Easy-to-Read Version, WBT is the Webster Bible.
Compared to the full-size monolingual models, the \(\upmu\)BERT models are also slightly worse on average, save for SS and SAC for Tamil, where performance is at chance for the monolingual BERT.

**Low-resource Languages.** Results for low-resource languages are given in Table 3. Something that distinguishes the low-resource languages from the medium-resource languages and English is that many models now perform no better than the majority baseline. Many of the Wolof and Coptic models perform no better than the baseline, and fewer but still some of the Uyghur and Ancient Greek models do not outperform the baseline. For the \(\upmu\)BERT models, we note that the frequency with which this happens seems connected to dataset size: the pretraining data used by the \(\upmu\)BERT developers amounted to approximately 500K tokens for Wolof, 1M for Coptic, 2M for Uyghur, and 9M for Ancient Greek. This demonstrates that some of our tasks are too hard to be solved at all by a model that falls below a quality threshold, which can be seen as a desirable trait. Differences between the best-performing model and the baseline can be very small in some cases, such as for Sentence Mood in most languages. This may indicate that sentence mood annotation projection is inappropriate for some target languages, though the fact that models still differentiate themselves in how well they can do it demonstrates that some properties of the target language can at least be correlated with the sentence mood of a translation-equivalent English sentence. The performance gain relative to the baseline remains quite high for the two sense-related tasks.

## 6 Conclusion

We have presented PrOnto, a publicly available dataset of evaluation tasks for pretrained language models, covering 1051 New Testament translations in 859 languages. Overall, our results show that our tasks remain meaningful even when projected to languages which are typologically very different from English, and even when they are performed by models that were trained on very little data. The fact that pretrained models rank on our tasks largely in the same way they do on established evaluation tasks constitutes evidence that these tasks are indeed indicative of model quality. Moreover, while our intent was primarily to develop this resource for low-resource languages, we have shown that it is able to serve medium- and high-resource languages as well. In future work, we intend to continue developing additional tasks. There is still much data that has not been fully used in the OntoNotes annotations, and some tasks (such as SAC) would likely benefit from refinement or reformulation. We further invite interested readers to consider contributing a task, as our annotation projection pipeline has been structured to make tasks very easy to author.
\begin{table} \begin{tabular}{l|c c c c c} Model & NMC & PNS & SM & SS & SAC \\ \hline **FRA** & 49.86 & 76.78 & 89.76 & 50.40 & 51.14 \\ BERT & 57.63 & 82.35 & 92.81 & 67.43 & 64.04 \\ mBERT & 56.27 & 84.83 & 92.67 & 77.43 & 64.88 \\ XLM-R & 57.49 & 84.21 & 92.95 & 49.60 & 51.14 \\ \hline **JPN** & 51.30 & 76.47 & 91.41 & 50.15 & 50.64 \\ BERT & 58.25 & 89.63 & 94.04 & 73.52 & 62.46 \\ mBERT & 59.21 & 88.24 & 93.21 & 79.74 & 51.36 \\ XLM-R & 54.98 & 88.85 & 95.15 & 49.85 & 50.64 \\ \hline **IND** & 49.15 & 72.95 & 92.36 & 50.37 & 50.87 \\ BERT & 54.40 & 87.92 & 92.80 & 69.25 & 62.79 \\ \(\upmu\)BERT-M & 54.12 & 88.24 & 94.09 & 61.46 & 62.28 \\ \(\upmu\)BERT-MX & 53.98 & 87.12 & 93.80 & 59.87 & 62.10 \\ mBERT & 51.28 & 87.44 & 94.52 & 72.08 & 64.35 \\ XLM-R & 55.40 & 86.63 & 92.36 & 50.37 & 49.13 \\ \hline **TAM** & 49.59 & 74.77 & 91.56 & 50.51 & 50.65 \\ BERT & 54.90 & 86.84 & 92.53 & 49.49 & 50.65 \\ \(\upmu\)BERT-M & 53.13 & 81.27 & 91.70 & 62.33 & 62.34 \\ uBERT-MX & 52.32 & 82.51 & 91.29 & 62.92 & 63.11 \\ mBERT & 55.45 & 85.29 & 92.39 & 70.32 & 64.24 \\ XLM-R & 55.86 & 85.14 & 91.56 & 50.51 & 50.65 \\ \end{tabular} \end{table} Table 2: Task accuracy for “medium-resource” languages by language and translation. \begin{table} \begin{tabular}{l|c c c c c} Model & NMC & PNS & SM & SS & SAC \\ \hline **GRC** & 50.41 & 76.32 & 90.73 & 50.40 & 50.87 \\ \(\upmu\)BERT-M & 52.59 & 81.11 & 90.18 & 60.58 & 61.80 \\ \(\upmu\)BERT-MX & 56.81 & 81.42 & 91.56 & 60.95 & 61.71 \\ mBERT & 57.36 & 83.13 & 91.70 & 65.34 & 50.87 \\ XLM-R & 55.99 & 76.32 & 91.42 & 49.60 & 50.87 \\ \hline **COP** & 48.98 & 75.50 & 89.75 & 50.35 & 51.24 \\ \(\upmu\)BERT-M & 50.75 & 78.76 & 89.75 & 61.32 & 62.70 \\ \(\upmu\)BERT-MX & 53.34 & 80.78 & 91.55 & 61.30 & 61.58 \\ mBERT & 49.52 & 75.50 & 89.75 & 52.79 & 51.24 \\ XLM-R & 48.84 & 75.50 & 89.75 & 50.35 & 51.24 \\ \hline **UIG** & 49.37 & 73.53 & 89.96 & 50.23 & 50.78 \\ \(\upmu\)BERT-M & 49.37 & 81.30 & 89.96 & 60.65 & 61.78 \\ \(\upmu\)BERT-MX & 51.19 & 78.45 & 90.10 & 61.51 & 56.21 \\ mBERT & 51.46 & 80.35 & 91.23 & 62.73 & 50.78 \\ XLM-R & 54.53 & 84.94 & 92.93 & 49.77 & 50.78 \\ \hline **WOL** & 51.47 & 77.72 & 90.36 & 50.44 & 50.45 \\ \(\upmu\)BERT-M & 51.47 & 77.72 & 90.36 & 59.78 & 61.05 \\ \(\upmu\)BERT-MX & 59.24 & 79.90 & 90.36 & 63.08 & 63.46 \\ mBERT & 57.35 & 84.75 & 91.65 & 66.49 & 54.46 \\ XLM-R & 56.51 & 82.32 & 91.01 & 50.44 & 49.55 \\ \end{tabular} \end{table} Table 3: Task accuracy for low-resource languages by language and translation. ## Acknowledgments We thank Amir Zeldes for originally suggesting the core idea in this work, and we further thank Nathan Schneider and members of the NERT lab for helpful feedback on a draft of this work. We also thank the maintainers of ebible.org for hosting the open-access Bibles which were used in this work.
2310.07917
A Review of Machine Learning Techniques in Imbalanced Data and Future Trends
For over two decades, detecting rare events has been a challenging task among researchers in the data mining and machine learning domain. Real-life problems inspire researchers to navigate and further improve data processing and algorithmic approaches to achieve effective and computationally efficient methods for imbalanced learning. In this paper, we have collected and reviewed 258 peer-reviewed papers from archival journals and conference papers in an attempt to provide an in-depth review of various approaches in imbalanced learning from technical and application perspectives. This work aims to provide a structured review of methods used to address the problem of imbalanced data in various domains and create a general guideline for researchers in academia or industry who want to dive into the broad field of machine learning using large-scale imbalanced data.
Elaheh Jafarigol, Theodore Trafalis
2023-10-11T22:14:17Z
http://arxiv.org/abs/2310.07917v1
# A Review of Machine Learning Techniques in Imbalanced Data and Future Trends ###### Abstract For over two decades, detecting rare events has been a challenging task among researchers in the data mining and machine learning domain. Real-life problems inspire researchers to navigate and further improve data processing and algorithmic approaches to achieve effective and computationally efficient methods for imbalanced learning. In this paper, we have collected and reviewed 258 peer-reviewed papers from archival journals and conference papers in an attempt to provide an in-depth review of various approaches in imbalanced learning from technical and application perspectives. This work aims to provide a structured review of methods used to address the problem of imbalanced data in various domains and create a general guideline for researchers in academia or industry who want to dive into the broad field of machine learning using large-scale imbalanced data. keywords: imbalanced learning, rare events, data mining, classification, prediction + Footnote †: journal: Elesvier ## Introduction Classification problems are a major part of supervised learning and very often, the data is not equally distributed between the classes. The performance of the classifier is affected by the ratio of the majority class to the minority class, hence misclassification is more severe when the data is extremely imbalanced [1; 2; 3; 4; 5; 6]. In addition to the relative proportion of classes, the absolute number of available instances in the minority class is also an important factor. The problem with imbalanced data is magnified when the minority class consists of rare events. Rare events are defined as events that occur significantly less often than common events. In the case of rare events, classification becomes more challenging, because the classifier is often overwhelmed by the majority class and the results are biased. Therefore, without a significant loss in overall accuracy, the minority class is misclassified. Based on the type of data, the size of the data set and the distribution of data between classes, the issue of imbalanced learning can appear at different levels. The problem definition issues are caused by a lack of adequate information about the minority class[7]. Problem definition issues can cause evaluation metrics such as accuracy and error rate to fail in representing the minority class. Therefore, other evaluation metrics are defined to measure the classifier in imbalanced learning problems. The data issues are the result of absolute rarity and extremely imbalanced data. Resampling methods are the standard solution to this issue. Algorithm issues are caused by inadequacies of the learning algorithm and may result in poor classification accuracy of the minority class. Such issues are caused by the model's failure in learning the necessary criteria for classification. The goal of imbalanced learning is to find an optimal classifier that is capable of providing a balanced degree of predictive accuracy for the minority class as well as the majority class [8; 9; 10; 11; 12; 13; 14]. These methods are primarily attempting to address the issue of absolute class imbalance that exists in some datasets. However, the relative class imbalance is still an important issue in datasets where we have an abundance of training examples, but the distribution of the different classes might be severely skewed. 
In this latter situation, one can have access to enough examples from the minority class, even if the frequency of the minority class is very small, as long as the total number of examples is sufficiently large [15]. With the broad applications of imbalanced learning in the real world, this area has attracted the interest of many researchers, and despite the advances, most imbalanced learning methods are still sensitive to highly imbalanced data. In this survey, we have selected 258 peer-reviewed papers among the papers published on the topic of imbalanced learning and its applications. Figure 1 presents the technical keywords used in our search. In this paper, we present an overview of the different approaches to the problem of imbalanced learning, categorized according to the structure shown in Figure 2, followed by its applications in real-life problems. The paper is organized as follows: in Section 1, we provide a categorized definition of problem definition approaches and the different types of metrics used in this setting. In Section 2, we focus on data processing approaches and extensively study different over-sampling and under-sampling methods used in the literature. Section 3 focuses on the algorithmic approach and the core machine learning methods for learning from large imbalanced datasets. In Section 4, an overview of imbalanced learning applications is provided. Finally, we discuss some ideas on future research trends and conclude the paper.

Figure 1: Technical Keywords in Imbalanced Learning Literature

## 1 Problem Definition Approaches

### Evaluation Metrics

Evaluation is an important part of the learning process. Evaluation metrics are generally used to assess the generalization ability of the learning method on test data. One of the major issues that arise with imbalanced data is the inadequacy of well-known metrics, such as accuracy and error rate, in the evaluation of classification performance. Appropriate evaluation metrics are important in evaluating the quality of learning. Therefore, several authors have addressed this major issue, and a new set of functions has been defined to determine how the classifier performs in classifying imbalanced data [16; 17; 18]. Ferri et al. [19] used experimental and theoretical analysis to compare and rank the evaluation metrics that work best for evaluating a learned model on imbalanced data and to analyze the identifiable clusters and relationships between the metrics. These experiments provide recommendations on the metrics that would be more appropriate for any specific application. Evaluation metrics are categorized into three types in the literature: threshold, probability, and ranking metrics [20].

Figure 2: General Approaches in Imbalanced Learning

#### 1.1.1 Threshold Evaluation Metrics

The threshold type of evaluation metric is defined based on a confusion matrix, as shown in Table 1. In binary classification, given that the predicted value of test samples in the majority class is denoted as N (Negative) and the predicted value of test samples in the minority class is denoted as P (Positive), the confusion matrix is defined. Note that the definition of a confusion matrix can be extended to multi-class classification as well. Based on this notation, accuracy is defined as a measure of the performance of a classification algorithm.
Accuracy is defined as:

\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN} \tag{1}\]

\begin{table} \begin{tabular}{l|c|c} & \multicolumn{2}{c}{Predicted Value} \\ \cline{2-3} Actual Value & Positive & Negative \\ \hline Positive & \(TP\) & \(FN\) \\ Negative & \(FP\) & \(TN\) \\ \end{tabular} \end{table} Table 1: Confusion Matrix

Accuracy is easy to use and interpret; however, despite being widely used by practitioners, it cannot provide enough information to ensure a reliable learning method when the data is imbalanced [21]. Classification performance metrics for imbalanced learning based on the confusion matrix are defined as:

\[Precision=\frac{TP}{TP+FP} \tag{2}\]

and

\[Recall=\frac{TP}{TP+FN} \tag{3}\]

Precision and recall have an inverse relationship, and when used together they can provide valid insight into the performance of the classifier with regard to the minority class. Precision and recall measure how exact and how complete the model is, respectively. Also, the precision-recall curve allows us to study the changes in both metrics simultaneously. In imbalanced learning, models with high recall on the minority class and high precision on the majority class are desired. Thus, the F-measure is a valuable evaluation metric in imbalanced learning, defined as:

\[F\text{-}measure=\frac{(1+\beta^{2})\,Recall*Precision}{\beta^{2}\,Recall+Precision} \tag{4}\]

where \(\beta\) is the relative importance of precision versus recall, and it is usually set equal to one. In the presence of rare events, a common approach is to maximize the F-measure. Musicant et al. [22] have developed an approach to maximize the F-measure by using SVMs. The geometric mean (G-mean) is an important evaluation metric that is used explicitly for imbalanced learning:

\[G\text{-}mean=\sqrt{\frac{TP}{TP+FN}\times\frac{TN}{TN+FP}} \tag{5}\]

A high G-mean indicates that the model is performing well in both classes.
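To illustrate the threshold metrics above, the short sketch below computes them from confusion-matrix counts; it is a generic illustration with made-up counts, not code from any of the surveyed papers.

```python
import math

def threshold_metrics(tp, fn, fp, tn, beta=1.0):
    """Compute the threshold metrics of Equations (1)-(5) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                       # sensitivity / TP rate
    specificity = tn / (tn + fp)                  # TN rate
    f_measure = ((1 + beta**2) * precision * recall) / (beta**2 * recall + precision)
    g_mean = math.sqrt(recall * specificity)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f_measure=f_measure, g_mean=g_mean)

# Accuracy looks fine (~0.90) while recall (0.05), F-measure, and G-mean
# expose the classifier's failure on the minority class.
print(threshold_metrics(tp=5, fn=95, fp=10, tn=890))
```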
Other metrics used in the literature include, but are not limited to, the sensitivity (true positive rate, \(TP/(TP+FN)\)), the specificity (true negative rate, \(TN/(TN+FP)\)), the negative predictive value (\(TN/(TN+FN)\)), the Matthews correlation coefficient, and bookmaker informedness; these are summarized in Table 2.

#### 1.1.2 Probabilistic Evaluation Metrics

Several probabilistic evaluation metrics have been studied in the literature, such as the Short and Fukunaga metric, the Value Difference Metric, and Euclidean-Hamming metrics. The Short and Fukunaga, the Value Difference, and Euclidean-Hamming metrics are distance functions used in Nearest Neighbor (NN) learning models to measure the distance between two instances, which can determine the associated attribute and classify the instance in the test data [26]. Log loss is a classification performance metric based on the cross-entropy function. Given that the expected (known) probability of an instance in the training data is denoted as \(P\) and the predicted probability of an instance in the test data is denoted as \(Q\), the cross-entropy for an instance in binary classification is defined as:

\[H(P,Q)=-\big(P(class0)\log(Q(class0))+P(class1)\log(Q(class1))\big) \tag{6}\]

In this equation, the probability \(P\) is defined based on the Bernoulli distribution for the positive class, and the natural logarithm is used. When the instance is predicted correctly with full confidence, the cross-entropy is zero; therefore, we try to minimize the cross-entropy of the model.

#### 1.1.3 Ranking Evaluation Metrics

Sensitivity (the TP rate) and specificity (the TN rate) are used to define the Receiver Operating Characteristic (ROC) curve, which is a visual representation of classification performance. The Area Under the Curve (AUC) is defined as \(\frac{Sensitivity+Specificity}{2}\).
AUC does not depend on the classifier, and it is a reliable tool for model comparison because it is scale-invariant, and the output is a ranking of classifiers rather than their absolute values. AUC can also assess the quality of models in a threshold-invariant way [27; 28]. Although AUC is widely used for the evaluation and discrimination of binary classification models, it can sometimes be misleading. AUC uses a different misclassification cost for each classifier. Some researchers have addressed this issue and proposed modifications or alternative metrics, such as the H measure, which uses a symmetric Beta distribution in the AUC [29; 30]. To summarize, the evaluation metrics, categorized by type, are presented in Tables 2, 3, and 4 and Figure 3.

\begin{table} \begin{tabular}{l p{142.3pt}} \hline \hline Metrics & Definition \\ \hline Accuracy & The ratio of the correctly classified instances over the total number of classified instances \\ Error Rate & The ratio of misclassification errors over the classified instances \\ Precision & The proportion of instances that were labeled correctly among those with the positive label in the test data \\ Recall & The portion of positive instances in the test data that were labeled correctly \\ F-measure & The trade-off between precision and recall \\ G-mean & The measure to maximize the accuracy of the model over each class by considering both classes for evaluation \\ Sensitivity & The relative performance of the classifier over the minority class \\ Specificity & The relative performance of the classifier over the majority class \\ Negative Predictive Value & The number of TN over the instances predicted as negative in the test data \\ Matthews Correlation Coefficient & The measure of quality in binary classification \\ Bookmaker Informedness & The measure of the discrimination capability of the classifier \\ \hline \hline \end{tabular} \end{table} Table 2: Threshold Metrics in Supervised Learning

\begin{table} \begin{tabular}{l p{142.3pt}} \hline \hline Metrics & Definition \\ \hline Short and Fukunaga & A measure of distance between instances in Nearest Neighbor models \\ Euclidean-Hamming & A measure of distance between instances in Nearest Neighbor models \\ Log-loss & The negative log-likelihood under the Bernoulli distribution \\ \hline \hline \end{tabular} \end{table} Table 3: Probabilistic Metrics in Supervised Learning

\begin{table} \begin{tabular}{l p{142.3pt}} \hline \hline Metrics & Definition \\ \hline ROC Curve & Evaluates and ranks several classifiers \\ AUC & The probability of correctly classifying the positive instances while the number of false positives is minimized \\ \hline \hline \end{tabular} \end{table} Table 4: Ranking Metrics in Supervised Learning
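As a brief, hedged illustration of the probabilistic and ranking metrics in Tables 3 and 4, the snippet below scores one set of predicted probabilities with log loss and ROC AUC using scikit-learn; the labels and probabilities are invented for the example.

```python
from sklearn.metrics import log_loss, roc_auc_score, roc_curve

# Ground-truth labels (1 = minority class) and predicted P(class 1) for six samples.
y_true = [0, 0, 0, 0, 1, 1]
y_prob = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]

print("log loss:", log_loss(y_true, y_prob))       # cross-entropy, lower is better
print("ROC AUC :", roc_auc_score(y_true, y_prob))  # ranking quality, higher is better

# Points of the ROC curve (FP rate vs. TP rate) over all thresholds.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
print(list(zip(fpr, tpr)))
```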
Figure 3: Evaluation Metrics in Imbalanced Learning

## 2 Data Processing Approaches

### Resampling Methods

Resampling methods are developed to balance the ratio of the classes in imbalanced learning by adjusting the minority class or the majority class, thereby enhancing the performance of the classifier [31]. Generally, basic resampling methods follow two strategies. The first strategy is removing instances from the majority class, known as Random Under-sampling (RUS) [32; 33]. The second is adding new instances to the minority class, known as Random Over-sampling (ROS) [34]. These methods can be utilized on their own or in combination with each other to adjust the distribution of the data before classification. A limitation of RUS is that it can remove valuable information in the resampling process, while ROS merely replicates existing instances, leading to under-fitting or over-fitting the data, respectively. To avoid such issues, advanced resampling methods were developed based on the idea of guided resampling. Advanced resampling methods include multiple variations of under-sampling and over-sampling methods [35; 36].

#### 2.1.1 Under-sampling Methods

Under-sampling following the Nearest Neighbor (NN) rule is the classification of data based on the similarities between a data point and its nearest neighbor. This decision rule has a lower probability of error than several other decision rules. Variations of under-sampling based on the NN rule include the condensed NN method, the edited NN method, the repeated edited NN method, and the neighborhood cleaning method [37], along with other variations [38; 39]. Tomek's links (T-link) is an enhancement of the NN rule for under-sampling the majority class, in which a pair of data points with opposite labels in the same neighborhood creates a Tomek link. The data point on the link that belongs to the majority class is removed. This method improves the classification accuracy of the minority class by creating a distinct margin between the two classes [40]. Under-sampling based on clustering utilizes clustering algorithms such as K-means that show promising performance with imbalanced data [41]. The one-sided selection method is an adaptation of Tomek's link. In this method, a subset of the majority class is selected for classification while the minority class remains untouched [42]. Under-sampling based on the Instance Hardness Threshold (IHT) method is also used to overcome the problem of imbalanced data. This under-sampling method reduces the size of the majority class by removing the data that has a high hardness threshold, which is the probability of misclassification of the data [43; 44; 45].

#### 2.1.2 Over-sampling Methods

An effective way of dealing with the issue of imbalanced data is over-sampling. Studies suggest that the number of features and the imbalance ratio are important factors in determining the best approach [46; 47]. Over-sampling methods such as bootstrap-based over-sampling, over-sampling based on the Synthetic Minority Over-sampling Technique (SMOTE), and over-sampling based on the Adaptive Synthetic sampling method (ADASYN) are widely used in imbalanced learning [48; 49; 50; 51; 52; 53; 54]. Bootstrap-based over-sampling iteratively replicates the instances of a selected sample, where instances are drawn with replacement and may be selected more than once. The number of iterations and the sample size are required before over-sampling [55; 56].
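The following is a minimal sketch of random under-sampling, Tomek links, and random over-sampling using the imbalanced-learn package (assuming it is installed); the synthetic dataset and parameters are illustrative only. SMOTE and ADASYN, discussed next, expose the same `fit_resample` interface.

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.under_sampling import RandomUnderSampler, TomekLinks
from imblearn.over_sampling import RandomOverSampler

# Illustrative 1:9 imbalanced dataset.
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
print("original:", Counter(y))

# RUS: drop majority-class instances at random.
X_rus, y_rus = RandomUnderSampler(random_state=0).fit_resample(X, y)
print("after RUS:", Counter(y_rus))

# Tomek links: remove majority-class points that form cross-class nearest-neighbor pairs.
X_tl, y_tl = TomekLinks().fit_resample(X, y)
print("after Tomek links:", Counter(y_tl))

# ROS: replicate minority-class instances (sampling with replacement).
X_ros, y_ros = RandomOverSampler(random_state=0).fit_resample(X, y)
print("after ROS:", Counter(y_ros))
```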
In over-sampling using SMOTE, the number of instances in the minority class is increased by synthetically creating new instances instead of merely replicating the existing instances. SMOTE generates data in the feature space, and it depends on introducing new instances based on the nearest neighbors [57]. In this method, the new examples are added near the line segment that joins the nearest neighbors of the minority class. The nearest neighbors are selected to create the instances required for over-sampling [58; 59; 60; 61; 62; 63; 64; 65; 66]. Inspired by SMOTE, XiChen et al. [67] proposed a sampling method in which new synthetic neighborhood samples are generated. Controlling the number of generated samples can improve the balance ratio and promote diversity in the data. Zhou et al. [68] proposed a cost-sensitive SMOTE for data classification. Since the samples are generated in the feature space, creating new samples in a nonlinear space can improve the results after resampling of the minority class [69]. Among the different variations of SMOTE, ADASYN, motivated by SMOTE, is a popular over-sampling method in which data is synthetically generated to increase the size of the minority class. The amount of generated data is determined by a density distribution criterion defined for each example, which is an advantage over SMOTE, where the number of generated instances per example is predetermined [70; 71; 72]. The over-sampling methods are not limited to the ones mentioned in this paper. These methods can be applied alone or in combination with each other to improve classification results [73]. For example, the authors of [74] combined the ADASYN method with a cost-sensitive base model to improve the results in a study of the transient stability of power systems. Many studies have been carried out to evaluate the effectiveness and efficiency of resampling methods and to provide guidelines on selecting the best one for specific data [75; 76; 77; 78]. Figure 4 provides a structured overview of the resampling methods used in the data processing approach. Over the years, different variations of resampling methods have been used in combination with algorithmic approaches to enhance prediction accuracy on imbalanced data.

Figure 4: Data Processing Methods in Imbalanced Learning

## 3 Algorithmic Approaches

### Cost-sensitive Methods

In real-world applications of imbalanced learning, such as cancer diagnosis, fraud detection, or severe weather prediction, the misclassification cost is different for the minority class and the majority class. In imbalanced learning, where the misclassification cost of the minority class is more important, cost-sensitive methods are used. In cost-sensitive methods, the cost of misclassification is known and defined in a cost matrix based on the cost associated with a false positive and a false negative. The goal is to classify the data while minimizing the expected misclassification cost of making a false prediction. In imbalanced learning, we can benefit from shifting the classification algorithm towards further minimizing the misclassification error of the minority class [79; 80]. Cost-sensitive methods are categorized as direct and meta-learning methods.
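To make the cost-matrix idea concrete, the sketch below implements a minimal expected-cost decision rule in plain NumPy. The cost values are illustrative assumptions and are not taken from the surveyed papers.

```python
import numpy as np

# Illustrative cost matrix C[i, j]: cost of predicting class j when the true class is i.
# Class 1 is the minority (positive) class, so a false negative (C[1, 0]) is made expensive.
C = np.array([[0.0, 1.0],    # true majority: correct = 0, false positive = 1
              [10.0, 0.0]])  # true minority: false negative = 10, correct = 0

def cost_sensitive_predict(proba, cost=C):
    """Pick, for each sample, the class with the smallest expected misclassification cost.

    proba: array of shape (n_samples, 2) with estimated class probabilities,
           e.g. from any probabilistic classifier's predict_proba.
    """
    expected_cost = proba @ cost       # entry j = sum_i P(true=i) * C[i, j]
    return expected_cost.argmin(axis=1)

# A sample that is only 20% likely to be minority is still labeled 1, because
# missing a minority instance costs ten times more than a false alarm.
print(cost_sensitive_predict(np.array([[0.8, 0.2]])))   # -> [1]
```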
#### 3.1.1 Direct Methods

In the direct methods, the classifiers are designed to anticipate different misclassification costs for false positives and false negatives. Cost-sensitive decision trees are an example of such methods and have improved classification results by incorporating the cost in the model and aiming to minimize the misclassification cost [81; 82; 83; 84; 85; 86]. In other cost-sensitive methods, weights are used in the classification algorithm [87]. Studies show that iterative weighting of the samples can improve the results while also achieving computational efficiency [88; 89]. Cost-sensitive boosting methods have also been used to compare the effectiveness of such algorithms on benchmark datasets [90; 91]. Wu et al. [92] used cost-sensitive multi-set feature learning on multiple samples constructed by partitioning the majority class and combining the blocks with the minority class to obtain balanced datasets. The model is evaluated using benchmark datasets and is recommended for highly imbalanced data. One of the challenges of cost-sensitive methods is identifying the misclassification costs. Zhang et al. [93] proposed an adaptive differential evolution to find the optimal misclassification costs.

#### 3.1.2 Meta-learning Methods

Meta-learning methods are used to convert cost-insensitive classifiers into cost-sensitive algorithms without modifying the algorithm itself, by means of thresholding and sampling methods. Thresholding models produce probability estimates using a cost-insensitive algorithm and then apply a cost-based threshold to classify the data [94; 95; 96]. Thresholding is an effective method that expands the space and increases the probability of classifying instances as belonging to the minority class. Therefore, it often produces a lower misclassification cost compared to other classification methods [97]. Sampling meta-learning methods modify the class distributions in the dataset before training the data using a cost-insensitive classifier. Weighting is also a type of sampling method, in which a normalized weight based on the misclassification cost is assigned to the data before classification [98; 99]. Cost-sensitive learning is most effective when embedded in the machine learning based model, as is done in ensemble methods. Ensemble methods are discussed thoroughly in a separate section.

### Machine Learning Based Modeling

Various machine learning models have been explored in the attempt to minimize the misclassification error of the minority class, such as Logistic Regression (LR), Artificial Neural Networks (ANNs) [100], Random Forest (RF), Decision Trees (DT), Naive Bayes (NB) and Gaussian NB, K-Nearest-Neighbor (KNN), and Support Vector Machines (SVM). Empirical studies on benchmark data suggest different base predictor models [101; 102].

#### 3.2.1 Tree-based Models

DT is a classification algorithm that splits the dataset into smaller subsets to predict the output value of the test data. The conditions by which the data is split are represented by branches at internal nodes, and the final decisions are given at the leaves. The data is split until the depth of the tree is reached and no further split is possible. DT is a fast and simple algorithm in which the classification process and the queries made are transparent [103; 104; 105; 106; 107; 108]. RF is a powerful ensemble method that aggregates less accurate predictive models to create a better model. This model is used for regression or classification.
In RF classification, decision trees are used to introduce randomness when selecting the suboptimal splits, and the goal is to aggregate as many uncorrelated trees as possible and improve the accuracy at each step [109; 110; 111; 112; 113].

#### 3.2.2 Probabilistic Models

NB is a supervised learning method. The first assumption in this method is that all features are independent of one another given the class. This is an unrealistic yet helpful assumption for training the data. In this method, the training data is used to calculate the probability of each class and the conditional probability of each class for a given data point. These two pieces of information are used to predict the class of new data points. Gaussian NB is a modification of the NB method in which, for real-valued input data, a Gaussian distribution is assumed to make calculating the probabilities easier [114; 115].

#### 3.2.3 Neighborhood-based Models

KNN is a simple yet powerful algorithm that uses the whole dataset for classification. To classify a new data point, KNN uses the data points closest to the designated point based on their Euclidean distance. It then aggregates their output values and assigns the result as the label of the new data point. In KNN, training and testing are combined in one step, which increases the effectiveness and efficiency of the model; KNN is one of the widely used imbalanced learning models. The papers [116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127] are representative of those methods.

#### 3.2.4 Kernel-based Models

Linear and logistic regression are probably the most well-known and widely used machine learning algorithms. Linear regression is used for predicting values that lie in a continuous range, while logistic regression is appropriate when we are trying to predict categorical output values, as in binary classification [128]. LR is represented by a non-linear function, and the data is classified based on the features correlated with the output variable [129; 130; 131]. To further improve traditional LR, Ohsaki et al. [132] proposed a novel confusion-based kernel logistic regression that utilizes a harmonic-mean objective function to improve the generalization and classification errors of the model. Historically, in the 1950s and 1960s, perceptron algorithms were used for detecting linear relations in the data. The perceptron algorithm is one of the oldest machine learning algorithms. In this method, a weight is associated with each input, and a threshold, known as the bias, is defined. The weights and the threshold are learned from the data. The weighted sum of the input data is calculated to predict the output value: the label is one if the sum is greater than the designated threshold, and zero otherwise. In the perceptron algorithm, the goal is to find the set of weights that best classifies the data. Nieminen et al. [133] demonstrated the use of a single-layer perceptron based on a multi-criteria optimized MLP as the base model. Although perceptron algorithms were useful for processing linear relations in the data, developing efficient and stable algorithms for detecting nonlinear relations was a major challenge for researchers at the time. In the mid-1980s, back-propagation Neural Networks (NNs) and decision trees revolutionized the field of non-linear pattern analysis. In the mid-1990s, kernel-based methods were developed for nonlinear data analysis while retaining the efficiency and stability of the earlier linear algorithms.
Kernel-based methods apply to a broad range of data types, such as sequences, text, images, graphs, and vectors. They can detect different types of linear and non-linear relations, and they are used for correlation, factor, cluster, and discriminant analysis. Kernel methods have a modular framework, in which the data is first processed into a kernel matrix, and then the data is analyzed using various pattern analysis algorithms based on the information contained in the kernel matrix [134; 135; 136; 137; 138; 139]. The kernel matrix is obtained by mapping the data from the input space into a higher dimensional feature space using a transformation function denoted as \(\varphi(x)\). One of the challenges in kernel methods is computing the feature map explicitly, which is computationally expensive and sometimes impossible; therefore, kernel functions are defined through the dot product of the mapped points, which can be computed directly from the points in the input space. Using this property, known as the kernel trick, \(K(x_{i},x_{j})=\varphi(x_{i})^{T}\varphi(x_{j})\), the data is implicitly mapped into the feature space without explicitly defining the map. Different kernel functions have been developed. Kernel methods utilize a higher dimensional feature space to facilitate accurately classifying the minority class [140]. Among different classifiers, the kernel-based SVM, as introduced by Vapnik [141; 142], has been widely used for imbalanced learning. SVM is a family of algorithms that use kernel methods to solve problems in classification and regression [143; 144; 145; 146; 147]. The idea of kernel SVM is to map the data to a higher dimensional feature space using a linear or nonlinear kernel [148; 149] and then to find a separating hyperplane that maximizes the margin of separation while minimizing the misclassification error by solving a quadratic optimization problem. SVMs are commonly used for classifying large datasets. The data is classified based on its location on either side of a hyperplane, which splits the input space. The separating hyperplane is not unique; however, the best hyperplane is the one that maximizes the margin of separation while minimizing the misclassification error. Combining cost-sensitive methods with SVM is a useful way of improving the misclassification cost [150; 151]. A cost-sensitive SVM, with the costs embedded into the objective function, can directly improve the classification performance when the feature set and tuning parameters are optimized [152]. Focusing on the ROC and the AUC, Hu et al. [153] proposed the kernel online imbalanced learning algorithm, which aims to maximize the AUC score while maintaining the regularization capabilities of the classifier. Weighted under-sampling SVM has improved the classification performance of SVM for imbalanced data [154]. Variations of SVM have been used for many applications such as fraud detection, gene profiling, and weather prediction [155; 156].
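A common, simple way to obtain a cost-sensitive SVM in practice is to weight the penalty parameter per class. The scikit-learn sketch below contrasts a plain RBF-kernel SVM with a class-weighted one on an illustrative 1:9 imbalanced dataset; the weight ratio is an assumption for the example, not a recommended setting.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Illustrative 1:9 imbalanced problem.
X, y = make_classification(n_samples=3000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Plain SVM versus a cost-sensitive SVM that penalizes errors on the
# minority class (label 1) ten times more heavily in the hinge-loss term.
plain = SVC(kernel="rbf").fit(X_tr, y_tr)
weighted = SVC(kernel="rbf", class_weight={0: 1, 1: 10}).fit(X_tr, y_tr)

print(classification_report(y_te, plain.predict(X_te), digits=3))
print(classification_report(y_te, weighted.predict(X_te), digits=3))
```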
#### 3.2.5 Deep Imbalanced Learning

ANN is a machine learning algorithm developed for separating non-linear data. In this method, a large number of units known as neurons are connected to form a multi-layer neural network. The neurons are divided into three types: input units that receive the information for processing, output units that contain the processing results, and the units in between, known as hidden units. The efficiency of ANNs depends on the input units and their corresponding activation functions, the network architecture, the weights of the input connections, and the weights of the hidden units, which are updated throughout the learning process. ANNs apply to various real-world problems as well as imbalanced data [157; 158]. ANNs have been studied with cost-sensitive methods to improve the misclassification cost by moving the classification threshold closer to the majority class, which allows more instances to be classified as the minority class. Other methods impose greater weights on the samples associated with the minority class [159; 160]. Ya-Guan et al. [161] proposed an improved ANN method called Equilibrium Mini-batch Stochastic Gradient Descent that improves the model's training convergence error. In recent years, the Extreme Learning Machine (ELM) for NN structures proposed by Huang et al. [162] has been applied extensively to real-world imbalanced learning problems. Some of the proposed strategies based on ELM are Weighted ELM [163; 164; 165], Class-specific ELM [166], and Class-specific Kernel ELM [167; 168], which have had promising results. When dealing with imbalanced data, deep learning algorithms face the same issues as traditional machine learning algorithms, and they fail to perform equally well in both classes [169]. Deep imbalanced learning models have been developed to address the issue of imbalanced data in image recognition and computer vision [170; 171]. Lin et al. [172] proposed a hybrid sampling method to remove the between-class data points and guide the network to improve the classification results [173]. Dong et al. [174] developed a deep learning model to classify imbalanced datasets by imposing a class rectification loss as a regularization parameter to discover the boundaries of the minority class and reduce the effect of the majority class on the model. Bao et al. [175] introduced a deep learning framework to balance the data in a deeply transformed latent space. The advantage of this model is that feature learning, balancing, and discriminative learning are conducted simultaneously, and it performs effectively on multi-classification problems. A cost-sensitive deep NN proposed by Khan et al. [176] learns robust feature representations of both classes and can therefore have improved predictive capability. Lin et al. [177] proposed a deep reinforcement learning model based on a reward function specified for the minority and the majority class.
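One straightforward realization of the "greater weights on minority samples" idea in a deep model is a class-weighted cross-entropy loss. The PyTorch sketch below uses an illustrative 1:9 weighting and random placeholder data; it is a generic example, not the method of any specific cited paper.

```python
import torch
import torch.nn as nn

# Tiny illustrative classifier for 20-dimensional inputs and 2 classes.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Weight the loss inversely to class frequency (here, a 9:1 majority/minority split),
# so errors on the minority class contribute more to the gradient.
class_weights = torch.tensor([1.0, 9.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x_batch, y_batch):
    optimizer.zero_grad()
    loss = criterion(model(x_batch), y_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example step on random data (placeholders for a real imbalanced batch).
x = torch.randn(16, 20)
y = torch.randint(0, 2, (16,))
print(train_step(x, y))
```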
#### 3.2.6 Ensemble Methods

Ensemble methods are an approach in machine learning that utilizes multiple machine learning based models to improve predictive accuracy. A group of ensemble methods is formed based on Bootstrap Aggregating (Bagging), in which several bootstrapped subsamples are created and trained using a base model [178; 179]. The trained base models, typically decision trees, are then aggregated to create the optimal predictive ensemble [180]. Random forest models are another type of ensemble method, a variation of Bagging in which splitting the trees based on different features creates a more accurate model. For imbalanced learning, cost-sensitive decision trees have been introduced [181]. Another ensemble method, called CSRoulette, improves performance by producing samples of different sizes based on a cost-sensitive model, combined with Bagging [182]. An empirical study of ensemble methods and meta-learning methods suggests that although these methods are effective for binary classification of imbalanced data, they might not perform well on multi-class classification problems. For multi-classification problems, a combination of LR and KNN is used. In the LR part of the model, an ensemble of Bagging and Boosting methods has resulted in a promising outcome [183]. Variations of the Boosting algorithms, such as Adaptive Boosting (AdaBoost), have shown promising results in classifying imbalanced data [184]. A comparison of various ensemble methods suggests that combining data preprocessing approaches, such as RUS, with Bagging or Boosting methods can result in higher performance [185]. An ensemble of random subsampling with RF also has reasonable performance [186; 187]. Modifying the objective function to account for different factors in minimizing the misclassification error, in combination with evolutionary under-sampling methods, is an example of ensemble methods aiming to improve the results of imbalanced learning [188]. Ensemble methods with multi-objective optimization functions provide powerful algorithms for imbalanced learning [189; 190]. Ensemble methods are also effective for highly imbalanced data [191]. For image recognition, ensemble deep imbalanced learning with a focus on resampling the data and a weighted loss function has improved image classification results [192]. Wu et al. [193] used a genetic algorithm approach with a deep imbalanced learning model to optimize over-sampling of the minority class. Despite being able to improve classification results, ensemble methods are computationally complex. A structured overview of the methods used in the algorithmic approach is presented in Figure 5.

Figure 5: Algorithmic Approaches in Imbalanced Learning

## 4 Applications

Classification of imbalanced data is a challenging task, and it is one of the popular research problems with many applications in the real world [194; 195]. In this section, we present some of the most impactful imbalanced learning problems; however, the applications of imbalanced learning are not limited to the mentioned examples. Figure 6 provides an overview of areas in which imbalanced learning is used.

Figure 6: Applications of Imbalanced Learning

### Risk Assessment in Business and Finance

In business analytics, bankruptcy prediction is an imbalanced learning problem. Gnip et al. [196] used multiple ensemble methods to accurately predict bankruptcy from data collected from medium-sized enterprises in the Slovak Republic [197]. In banking, credit scoring and evaluating the potential risk posed by applicants' unpaid loans is an important issue, and, due to the low frequency of such events, it is an example of imbalanced data [198; 199; 200]. For example, detecting fraudulent transactions using ensemble methods [201] and evaluating loan and credit applications can benefit from imbalanced learning to support the decision-making process. Approval or rejection of loans based on the applicant's credit history is an imbalanced learning problem, with unpaid loans forming the minority class [202; 203; 204]. Fraud detection is one of the major applications of imbalanced learning algorithms. Bauder et al. [205] compared the performance of different resampling approaches on highly imbalanced data from Medicare to detect fraudulent cases.

### Behavior Management

Imbalanced learning is also applicable to data collected from socioeconomic systems. For example, Orooji et al. [206] predicted the rate of high school dropout in Louisiana, US, which has negative impacts on the well-being of society, and Zheng et al. [207] explored a short tree-based adaptive classification test to assess the risk factors for juvenile delinquency.
### Cybersecurity and Software Management

In cybersecurity, spam and software defect detection is an example of imbalanced learning [208; 209; 210; 211]. Chen et al. [212] proposed an ensemble model based on the Choquet fuzzy integral with an improved SMOTE resampling technique for bug report identification, which can prevent damage to software. Developing effective Intrusion Detection Systems (IDS) is essential to cybersecurity [213]. Karatas et al. [214] used the SMOTE resampling method to improve IDS performance. Feng et al. [215] tackled the issue of imbalanced data in IDS classification using a cost-sensitive feature engineering method based on the General Vector Machine (GVM) and the Binary Ant Lion Optimizer. Zheng et al. [216] used a modified SVM to improve offline signature verification systems. Pang et al. [217] used an ensemble of SMOTE and SVM to detect malicious apps for Android users.

### Natural Disasters and Emergency Management

An impactful application of imbalanced learning is predicting rare natural disasters. Fernandez-Gomez et al. [218] studied the use of ensemble methods for predicting rare large-magnitude earthquakes in Chile with a prediction horizon of five days. Seismic capability evaluation of buildings is also an imbalanced learning problem in earthquake engineering [219]. Predicting severe weather events such as tornadoes is an imbalanced learning problem in meteorology and data mining [220; 221; 222]. Optimizing the available resources in urgent care is important in times of crisis. An ensemble method consisting of Bagging and DT can improve the prediction results for patient readmission to the emergency department of a hospital in Chile [223].

### Bio-informatics and Bio-engineering

Medical diagnosis is an example of imbalanced learning in the field of bioinformatics and bioengineering. Zhang et al. [224] explored the use of an ensemble method of RUS with K-means and SVM to improve diagnostic accuracy. Zheng et al. [225] used a Convolutional Neural Network (CNN) to detect exudates in optic images, which, if detected correctly, can help prevent diabetic retinopathy and blindness. Jeong et al. [226] addressed the issue of multi-classification of imbalanced kidney data. In this paper, the glomerular filtration rate is defined as the target for diagnosing chronic kidney disease. The goal is to classify the data into five stages using four methods: multinomial LR, ordinal LR, RF, and an Autoencoder (AE). The comparison of the four models suggests that the AE provides better performance and is recommended for similar problems. Farhadi et al. [227] applied a deep transfer learning model to medical image data to evaluate the model's efficiency in diagnosing high-grade breast cancer. Breast cancer diagnosis has been improved by advanced imbalanced learning methods introduced in the past decade [228; 229; 230]. Other cost-sensitive methods have also been used to improve the classification accuracy of medical diagnosis [231; 232]. Deng et al. [233] introduced a dynamic clustering method that iteratively adjusts the clusters based on the weight changes in each cluster. This algorithm is evaluated using gene expression cancer diagnosis data and applies to biological and cyber-physical systems. Deep imbalanced learning frameworks also apply to different fields, such as active balancing in biomedical data [234].
### Computer Vision

Image processing, including the detailed recognition of facial images and other attributes, is a challenging task in computer vision, and the difficulties escalate when the data is imbalanced [235; 236; 237]. Various ensemble methods have been explored to classify multimedia data [238]. Pouyanfar et al. [239] proposed an ensemble deep learning framework based on the performance of SVM classifiers on deep feature sets, which is evaluated using multimedia data for semantic event detection. In terms of application, different packages exist that can be used to implement the models in Python, R, or other scripting languages [240].

## Future Research Trends

Learning from imbalanced data is one of the challenging tasks in data mining, and it becomes even more difficult when combined with other data issues. Different studies have been carried out to explore strategies for specific issues of the minority class, such as highly imbalanced cases, noisy data [4], outliers, sparse data [241; 242] and the problem of imbalanced distribution within the minority class [243; 244]. Another category of imbalanced learning problems is multi-class problems, which require more advanced techniques to deal with imbalanced data [245; 246]. Some of the proposed strategies, such as weighted extreme learning machines, weighted support vector machines [247], and sequential ensemble learning, have been relatively effective in the case of highly imbalanced data [248; 249; 250]. However, these methods are computationally complex and further improvement is desired. A real-world application of imbalanced learning is time series analysis with imbalanced and skewed data. This is particularly challenging due to the high dimension of the data and the underlying correlations within the data, and further exploration is desired [251; 252; 253]. Imbalanced learning is an often overlooked issue in online learning of streaming data [254; 255]. Different methods, such as cost-sensitive methods, have been explored in various studies and evaluated based on the imbalanced learning metrics [256]. However, extensive research is required to address the issues in online imbalanced learning of large-scale data. Last but not least is the problem of imbalanced learning in distributed frameworks [257; 258]. Decentralized data centers often lead to skewed class distributions, and since distributed learning has gained more attention in the past few years, tackling the issue of imbalanced data in such frameworks is essential.

## Conclusion

Extensive research has been carried out to improve and identify the best approaches for imbalanced data in different fields, from cyber-security to business analytics and bio-informatics. In this paper, we have provided a review of the wide range of methods applied to imbalanced data from a technical perspective. Examples of real-world applications have also been reviewed. We have collected and reviewed papers published in peer-reviewed journals from 2000 to 2020 to understand the trends and advances in learning from imbalanced data and to provide insights into future research trends in this highly active field.
2307.04609
Complex structures on the product of two Sasakian manifolds
A Sasakian manifold is a Riemannian manifold whose metric cone admits a certain K\"ahler structure which behaves well under homotheties. We show that the product of two compact Sasakian manifolds admits a family of complex structures indexed by a complex nonreal parameter, none of whose members admits any compatible locally conformally K\"ahler metrics if both Sasakian manifolds are of dimension greater than $1$. We compare this family with another family of complex structures which has been studied in the literature. We compute the Dolbeault cohomology groups of these products of compact Sasakian manifolds.
Marchidanu Vlad
2023-07-10T14:52:17Z
http://arxiv.org/abs/2307.04609v1
# Complex structures on the product of two Sasakian manifolds

###### Abstract

A Sasakian manifold is a Riemannian manifold whose metric cone admits a certain Kahler structure which behaves well under homotheties. We show that the product of two compact Sasakian manifolds admits a family of complex structures indexed by a complex nonreal parameter, none of whose members admits any compatible locally conformally Kahler metrics if both Sasakian manifolds are of dimension greater than 1. We compare this family with another family of complex structures which has been studied in the literature. We compute the Dolbeault cohomology groups of these products of compact Sasakian manifolds.

**Keywords: Sasakian manifold, complex structure, Kahler manifold, LCK manifold**

**2020 Mathematics Subject Classification:** 53C25, 53C55

###### Contents

* 1 Introduction
* 2 Sasakian manifolds
* 2.1 Tensorial definition of Sasakian manifolds
* 2.2 Sasakian manifolds via the Riemannian cone
* 2.3 Basic cohomology of Sasakian manifolds
* 2.4 The product of two Sasakian manifolds
* 3 The main result
* 4 Complex submanifolds of the product of Sasakian manifolds
* 5 Comparison with the Calabi-Eckmann-Morimoto complex structures
* 6 The Dolbeault cohomology of the product of Sasakian manifolds

## 1 Introduction

Sasakian manifolds are the natural odd-dimensional analogue of Kahler manifolds (see e.g. [4]). In the compact case, they are closely related to both projective and Vaisman manifolds ([17]). Kahler manifolds can be viewed as almost complex manifolds endowed with a Hermitian metric such that the associated fundamental two-form is parallel with respect to the metric connection. Likewise, Sasakian manifolds can be thought of as almost contact manifolds endowed with a compatible Riemannian metric satisfying certain tensorial conditions (see [2] and Section 2.1). Being even-dimensional, a product of Sasakian manifolds is a natural candidate for carrying almost complex structures. Indeed, more generally, Morimoto constructed an almost complex structure on the product of two almost contact manifolds ([15]) which proved to be integrable when the two almost contact structures were normal. If one starts with metric almost contact structures, then the product metric is compatible with Morimoto's almost complex structure. One thus obtains an almost Hermitian structure on the product. The usual complex structure of the Calabi-Eckmann manifold can be viewed this way. In particular, starting with two Sasakian manifolds (whose underlying almost contact structures are normal), one obtains a Hermitian metric on the product. This product Hermitian structure on a Calabi-Eckmann manifold was later included by Tsukada in a two-parameter family of Hermitian structures ([20]). This construction was further generalized in [11] to the product of two Sasakian manifolds. It was recently considered also in [1]. All these constructions use the tensorial definition of Sasakian manifolds and are heavily computational. With these techniques, the authors of [1] can prove that the considered two-parameter family of Hermitian structures is neither Kahler nor locally conformally Kahler. What we propose in the present paper is a shift towards the modern definition of a Sasakian manifold as a Riemannian manifold with a Kahler structure on its Riemannian cone (Subsection 2.2).
On the product of two compact Sasakian manifolds we construct a natural family of complex structures indexed by a nonreal complex parameter, and we prove that none of these structures supports either Kahler or locally conformally Kahler metrics. Furthermore, we are able to characterize the complex submanifolds of the product. Moreover, we show that our family of complex structures does not coincide, in general, with the one in [11]. Furthermore, we compute the Dolbeault cohomology groups of these complex manifolds.

**Acknowledgements.** This paper is largely a result of my stay in Rio de Janeiro. I have learned many things from mathematicians at IMPA, but I would like to thank in the first place Prof. Misha Verbitsky for the insightful discussions we had and for helping me better understand mathematics. I am grateful to my thesis advisor, Prof. Liviu Ornea, who has guided me constantly and helped me with valuable comments and recommendations.

## 2 Sasakian manifolds

### Tensorial definition of Sasakian manifolds

The notion of a Sasakian manifold was initially introduced by Shigeo Sasaki in [18] as an odd-dimensional counterpart to Kahler manifolds. We recall the tensorial definition of a Sasakian structure. Given a smooth, odd-dimensional manifold \(S\), a Sasakian structure is given by the data \((g,\eta,\varphi,\xi)\), where \(g\) is a Riemannian metric on \(S\), \(\eta\) is a 1-form, \(\varphi\) is a \((1,1)\)-tensor field and \(\xi\) is a vector field, satisfying the following properties for any \(X,Y\in TS\):

\[\eta\circ\varphi =0\]
\[\eta(X) =g(X,\xi)\]
\[\varphi^{2} =-\mathrm{Id}+\eta\otimes\xi\]
\[g(\varphi(X),\varphi(Y)) =g(X,Y)-\eta(X)\eta(Y)\]
\[(-2d\eta\otimes\xi)(X,Y) =\varphi^{2}([X,Y])-\varphi([\varphi X,Y])-\varphi([X,\varphi Y])-[\varphi X,\varphi Y]\]
\[\mathrm{Lie}_{\xi}g =0\]
\[(\nabla^{g}_{X}\varphi)Y =g(X,Y)\xi-\eta(Y)X\]

A well-studied generalisation is that of an _almost contact structure_ \((S,\eta,\varphi,\xi)\), which occurs if we omit the presence of the metric \(g\) and keep the first three conditions above, replacing \(\eta(X)=g(X,\xi)\) with \(\eta(\xi)=1\), \(\varphi(\xi)=0\). This is usually viewed as a counterpart of almost complex geometry. See [2] for details. In this paper we shall use the modern definition which places Sasakian manifolds into the framework of holonomy. This approach became widespread following the pioneering work of C.P. Boyer and K. Galicki (see [4]).

### Sasakian manifolds via the Riemannian cone

Let \((S,g)\) be an odd-dimensional Riemannian manifold and \(C(S):=(S\times\mathbb{R}^{>0},\ g_{C(S)}=dt\otimes dt+t^{2}g)\), \(t\in\mathbb{R}^{>0}\), its Riemannian cone.

**Definition 2.1**.: A Sasakian structure is the data of a Kahler structure \((J,\omega,g_{C(S)})\) on \(C(S)\) such that the homothety map \(h_{\lambda}:C(S)\to C(S)\), \(h_{\lambda}(p,t):=(p,\lambda t)\) is holomorphic and satisfies \(h_{\lambda}^{*}\omega=\lambda^{2}\omega\) for each \(\lambda\in\mathbb{R}^{>0}\).

We denote by \(R:=t\frac{d}{dt}\) the Euler field on \(C(S)\) and by \(\xi:=JR\) the Reeb field. By definition \(R\) is holomorphic, so \([R,\xi]=0\). Since \(C(S)\) is Kahler, \(\xi\) is also holomorphic. When referring to \(S\), we also denote by \(\xi\) the vector field \(\xi|_{t=1}\) on \(S\times\{1\}\subset C(S)\). The equivalence of the definition of Sasakian manifolds via their metric cone with the definition formulated in Subsection 2.1 is established in [4, Section 6.5].
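Recall the standard example (not needed for the arguments below, but useful for orientation): for the round sphere \(S^{2n+1}\subset\mathbb{C}^{n+1}\), the Riemannian cone is \(\mathbb{C}^{n+1}\setminus\{0\}\) with the flat metric written in polar coordinates,

\[g_{C(S^{2n+1})}=dt\otimes dt+t^{2}g_{S^{2n+1}},\]

which is Kahler for the standard complex structure and the flat Kahler form \(\omega=\frac{\sqrt{-1}}{2}\sum_{j}dz_{j}\wedge d\bar{z}_{j}\). The homothety \(h_{\lambda}\) corresponds to the scaling \(z\mapsto\lambda z\), which is holomorphic and satisfies \(h_{\lambda}^{*}\omega=\lambda^{2}\omega\); the Euler field is \(R=t\frac{d}{dt}=\sum_{j}(x_{j}\partial_{x_{j}}+y_{j}\partial_{y_{j}})\), and the Reeb field \(\xi=JR\) restricts on \(S^{2n+1}\) to the generator of the Hopf \(S^{1}\)-action \(z\mapsto e^{\sqrt{-1}\theta}z\).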
For our purposes, we mention that starting with a Sasakian manifold in the above sense, one defines the tensor field \(\varphi\in\mathrm{End}(TS)\): \[\varphi(X):=\mathrm{pr}_{TS}JX,\quad X\in TS\subset TC(S)\] where \(J\) is the complex structure on \(C(S)\). We also define the \(1\)-form on \(C(S)\), \(\eta:=\frac{1}{t}Jdt\), which is readily seen to be equal to \(\frac{1}{t^{2}}i_{R}\omega\). As we did with \(\xi\), we shall also denote \(\eta\) the restriction \(\eta=\eta|_{t=1}\) on \(S\). Then we have: **Proposition 2.2**.: \(S\) _is an almost contact manifold with contact form \(\eta\) and characteristic field \(\xi\). Moreover, \(\varphi^{2}=-\text{Id}+\eta\otimes\xi\)._ Denote by \(D=\langle R,\xi\rangle^{\perp}\) the distribution \(g_{C(S)}\)-orthogonal to \(\langle R,\xi\rangle\) on \(C(S)\). Note that \(t^{2}\) is a Kahler potential for \(\omega\) and \(dd^{c}(\log t)\) vanishes on \(\langle R,\xi\rangle\), the rest of its eigenvalues being positive. It follows that: **Proposition 2.3**.: \(\ker(d\eta)=\langle R,\xi\rangle\) _and \((d\eta)|_{D}=\omega|_{D}\). In particular \((d\eta)|_{D}\) is a Kahler form._ ### Basic cohomology of Sasakian manifolds **Definition 2.4**.: [19, Chapter 4] Let \((M,\mathcal{F})\) be a foliated manifold and consider \(F\subset TM\) to be the subbundle of vectors tangent to leaves of \(\mathcal{F}\). A form \(\eta\in\Lambda^{*}M\) is called **basic** (with respect to \(\mathcal{F}\)) if for any vector field \(X\in\Gamma F\), \(\mathrm{Lie}_{X}\eta=0\) and \(i_{X}\eta=0\). Denote the space of basic forms on a foliated manifold \((M,\mathcal{F},F)\) by \(\Lambda^{*}_{\mathrm{bas}}M\). By Cartan's formula, the exterior differential \(d\) maps basic forms to basic forms. Therefore, \(d\) induces a cohomology on basic forms, which we denote \(H^{*}_{\mathrm{bas}}M\). We are interested in a particular type of foliations: **Definition 2.5**.: Let \((M,\mathcal{F},F)\) be a foliated manifold. Let \(\omega_{0}\in\Lambda^{*}_{\mathrm{bas}}(M)\) with \(d\omega_{0}=0\) and \(g_{0}\in\mathrm{Sym}^{2}_{\mathrm{bas}}(T^{*}M)\) such that \(\omega_{0}|_{F}=0\) and \(\omega_{0},g_{0}\) are positive definite on \(TM/F\). If the complex structure \(J\) obtained from \(\omega_{0}\) and \(g_{0}\) is locally integrable on any open set in \(M\) where the leaf space is defined, \((M,F,g_{0},\omega_{0})\) is called a **transversally Kahler foliation**. On compact Kahler manifolds the following well-known consequence of Hodge decomposition and Dolbeault decomposition holds. **Theorem 2.6**.: ([5, Theorem VI.8.5]) _Let \(M\) be a compact Kahler manifold. Denote by \(H^{p,q}_{\bar{\partial}}M\) the Dolbeault cohomology groups given by \(\bar{\partial}:\Lambda^{p,q}M\to\Lambda^{p,q+1}M\). Then the Hodge decomposition holds:_ \[H^{k}_{DR}M=\bigoplus_{p+q=k}H^{p,q}_{\bar{\partial}}M\] The usefulness of transversally Kahler foliations lies in the following result analogous to Theorem 2.6. **Theorem 2.7**.: ([17, Theorem 30.28]) _Let \(M\) be a compact manifold with a transversally Kahler foliation \((M,F,g_{0},\omega_{0})\) such that \(F\) is generated by Killing vector fields and \(M\) is equipped with a metric \(g\) with \(g|_{TM/F}=g_{0}|_{TM/F}\). Suppose there exists \(\Phi\in\Lambda^{*}(M)\) with \(d\Phi=0\), \(\Phi|_{F}=0\) and \(\Phi\) is a volume form on \(TM/F\)._ _Then \(H_{\text{bas}}(M)\) behaves just like the cohomology of a Kahler manifold with respect to the Kahler form \(\omega_{0}\). 
In particular, \(H_{\text{bas}}(M)\) admits the Hodge decomposition i.e._ \[H^{k}_{\text{bas}}M=\bigoplus_{p+q=k}H^{p,q}_{\delta_{\text{bas}}}M\] _where \(\bar{\partial}_{bas}\) is the operator given locally on the leaves of the foliation \(F\) by the complex structure \(J\) determined by \(\omega_{0}\) and \(g_{0}\) as in Definition 2.5._ It turns out moreover that the cohomology of Sasakian manifolds is closely related to the basic cohomology of their associated transversally Kahler foliation. More precisely, we have: **Theorem 2.8**.: ([4, Proposition 7.4.13]) _Let \(S\) be a Sasakian manifold of dimension \(2n+1\) with characteristic (Reeb) field \(\xi\). Let \(F=\langle\xi\rangle\) be the transversally Kahler foliation generated by the Reeb field, which satisfies the conditions of the previous theorem. Then:_ \[H^{k}(S)=\frac{H^{k}_{\text{bas}}(S)}{\operatorname{Im}(\omega_{0}\wedge \cdot)},\quad k<n\] ### The product of two Sasakian manifolds In the context of (almost) contact geometry, Morimoto was the first to introduce an almost complex structure on the product of two almost contact manifolds in [15]. He shows that this almost complex structure is integrable if and only if condition (2.1) is satisfied for each factor of the product. Building on Morimoto's ideas, Tsukada introduced in [20] a family of complex structures indexed by a complex nonreal parameter on the product of odd-dimensional spheres, noting that by the same argument as in [15] these structures are all integrable. In the same paper, Hermitian metrics associated with each of these complex structures are introduced. Watson generalised this family of pairs of complex structures and Hermitian metrics to products of Sasakian manifolds ([23]). We recall the definition of this family below. In the nomenclature of [23], we call a structure in this family a Calabi-Eckmann-Morimoto structure, or CEM for short. Let \(S_{1}\), \(S_{2}\) be Sasakian manifolds with \((1,1)\) tensors \(\varphi_{1}\), \(\varphi_{2}\) and Reeb fields \(\xi_{1},\xi_{2}\) respectively. Then there is a family of complex structures \(\{J_{a,b}:a,b\in\mathbb{R},b\neq 0\}\) on \(S_{1}\times S_{2}\): \[J_{a,b}(X_{1}+X_{2}) :=\varphi_{1}(X_{1})-\left(\frac{a}{b}\eta_{1}(X_{1})+\frac{a^{2} +b^{2}}{b}\eta_{2}(X_{2})\right)\xi_{1}\] \[+\varphi_{2}(X_{2})+\left(\frac{1}{b}\eta_{1}(X_{1})+\frac{a}{b} \eta_{2}(X_{2})\right)\xi_{2} \tag{1}\] Let \(g_{i}\) denote the Riemannian metric on the Sasakian manifold \(S_{i}\), \(i=\overline{1,2}\). For each pair \((a,b),b\neq 0\) there is an associated Hermitian ([20]) metric \(g_{a,b}\) given by \[g_{a,b}(X_{1}+X_{2},Y_{1}+Y_{2}) :=g_{1}(X_{1},Y_{1})+a\eta_{1}(X_{1})\eta_{2}(Y_{2})+a\eta_{1}(Y_{1 })\eta_{2}(X_{2})\] \[+(a^{2}+b^{2}-1)\eta_{2}(X_{2})\eta_{2}(Y_{2})+g_{2}(X_{2},Y_{2}) \tag{2}\] The metric data given by \(g_{a,b}\) has been studied. It is shown in [11, Theorem 1] that the metric \(g_{a,b}\) is Einstein if and only if \(a=0\), \(S_{1}\) is Einstein, and \(S_{2}\) is \(\eta_{2}\)-Einstein with some specific constants (see [16], [18] for \(\eta\)-Einstein manifolds). The authors of [11] also consider the property of weak \(*\)-Einsteiniainty for the product, which involves the interplay of \(J_{a,b}\) with \(g_{a,b}\). In showing that \((S_{1}\times S_{2},J_{a,b},g_{a,b})\) is never weakly \(*\)-Einstein, they also prove that \((J_{a,b},g_{a,b})\) is never Kahler. 
Further exploring this interplay, the authors of [1] study whether and when the pairs \((J_{a,b},g_{a,b})\) satisfy a number of natural conditions which are weaker than Kahlerianity, building on previous work in [12] and [6]. The results known about \((J_{a,b},g_{a,b})\) are summarized in the following **Theorem 2.9**.: ([12],[6],[1]) _Let \(S_{1}\) and \(S_{2}\) be Sasakian manifolds of dimensions \(2n_{1}+1\) and \(2n_{2}+1\) respectively. Consider the complex structure \(J_{a,b}\) (1) and the Hermitian metric \(g_{a,b}\) (2). Then:_ 1. _If_ \(n_{1}+n_{2}\geq 1\) _then_ \((J_{a,b},g_{a,b})\) _is not balanced (see_ _[_14_]__)._ 2. \((J_{a,b},g_{a,b})\) _is LCK (see_ _[_22_]_ _as well as_ _[_17_, Chapter 3]_ _for equivalent definitions) if and only if_ \(n_{1}=0\) _and_ \(n_{2}\geq 1\) _or_ \(n_{2}=0\) _and_ \(n_{2}\geq 1\)_; if it is LCK, then it is also Vaisman._ 3. \((J_{a,b},g_{a,b})\) _is SKT (see_ _[_8_]__) if and only if either_ \(n_{1}=1\) _and_ \(n_{2}=0\) _or_ \(n_{1}=0\) _and_ \(n_{2}=1\) _or_ \(a=0\) _and_ \(n_{1}=n_{2}=1\)_._ 4. _If_ \(n_{1}+n_{2}\geq 2\) _then the condition_ \[n_{1}(n_{1}-1)+2an_{1}n_{2}+n_{2}(n_{2}-1)(a^{2}+b^{2})=0\] _holds if and only if_ \((J_{a,b},g_{a,b})\) _is_ \(1\)_-Gauduchon (see_ _[_7_]_ _for_ \(k\)_-Gauduchon) if and only if_ \((J_{a,b},g_{a,b})\) _is astheno-Kahler (see_ _[_9_]__)._ 5. _If_ \(n_{1}+n_{2}\geq 3\) _and_ \(2\leq k\leq\dim_{\mathbb{C}}(S_{1}\times S_{2})-1\)_, then_ \((J_{a,b},g_{a,b})\) _is_ \(k\)_-Gauduchon if and only if the following holds:_ \[(n_{1}+n_{2}-k)\left(n_{1}(n_{1}-1)+2an_{1}n_{2}+n_{2}(n_{2}-1)(a^{2}+b^{2}) \right)=0\] ## 3 The main result **Theorem 3.1**.: _Let \(S_{1}\), \(S_{2}\) be compact Sasakian manifolds of respective dimensions \(2n_{i}+1\), with \(n_{i}>1\). Then \(S_{1}\times S_{2}\) has a family of complex structures indexed by a complex nonreal parameter, none of whose members admits any Kahler or LCK metrics._ Proof.: **Step 1. Definition of the complex structure on the product.** To define the complex structure, we consider the following generalisation of Calabi-Eckmann manifolds. Let \(S\) be a Sasakian manifold and define an action of \((\mathbb{C},+)\) on the open cone \(C(S)\) by putting for \(a+bi\in\mathbb{C}\): \[(a+b\sqrt{-1})\cdot p:=\phi_{1}^{aR+bJR}(p), \tag{3}\] where \(\phi_{t}^{X}\) denotes the flow of the vector field \(X\) at time \(t\). Since \(R\) and \(\xi\) commute, we have \[\phi_{1}^{(cR+d\xi)+(aR+b\xi)}(p)=\phi_{1}^{cR+d\xi}(\phi_{1}^{aR+b\xi}(p))\] In other words \[((c+d\sqrt{-1})+(a+b\sqrt{-1}))\cdot p=(c+\sqrt{-1}d)\cdot((a+b\sqrt{-1})\cdot p)\] showing that indeed (3) defines a group action. This action is a holomorphic map \(\mathbb{C}\times C(S)\to C(S)\). Indeed, the Reeb and Euler fields act by biholomorphisms. Further, let \(x\in C(S)\) and \(v\in\mathbb{C}\), \(X\in T_{v}\mathbb{C}\) and \(\gamma(t)\) be a curve with tangent vector \(X\) at \(v\). Then \(JX=\frac{d}{dt}|_{t=0}\left(\sqrt{-1}\gamma(t)\right)\). Since \([R,\xi]=0\), one vector field is invariated by the flow of the other. 
Therefore: \[d_{v}(w\mapsto(w\cdot x))\left(JX\right) =\frac{d}{dt}|_{t=0}\left((\sqrt{-1}\gamma(t))\cdot x\right)\] \[=\frac{d}{dt}|_{t=0}\left(\phi_{1}^{(-\mathrm{Im}(\gamma(t))) \mathrm{R}+\mathrm{Re}(\gamma(t))\xi}(x)\right)\] \[=\frac{d}{dt}|_{t=0}\left(\phi_{\mathrm{Re}(\gamma(t))}^{\xi} \left(\phi_{-\mathrm{Im}(\gamma(t))}^{R}(x)\right)\right)\] \[=-\mathrm{Im}(\mathrm{X})\mathrm{R}+\mathrm{Re}(\mathrm{X})\xi= \mathrm{J}(\mathrm{Re}(\mathrm{X})\mathrm{R}+\mathrm{Im}(\mathrm{X})\xi)\] \[=Jd_{v}(w\mapsto(w\cdot x))\left(X\right)\] which shows that the map \(w\mapsto(w\cdot x)\) is holomorphic for every fixed \(x\in C(S)\). Let now \(S_{i}\), \(i=1,2\), be compact Sasakian manifolds with Euler fields \(R_{i}\), Reeb fields \(\xi_{i}:=J_{i}R_{i}\), and consider the diagonal action of \(\mathbb{C}\times\mathbb{C}\) on the product of the cones \(C(S_{1})\times C(S_{2})\). Fix some \(\alpha\in\mathbb{C}\) with \(\mathrm{Im}\alpha\neq 0\) and define the subgroup \(G_{\alpha}:=\{(t,\alpha t):t\in\mathbb{C}\}\) of \((\mathbb{C}\times\mathbb{C},+)\). Clearly, \(G_{\alpha}\) is isomorphic with \(\mathbb{C}\) and acts on \(C(S_{1})\times C(S_{2})\). We analyze \((C(S_{1})\times C(S_{2}))/G_{\alpha}\). Let \(r_{i}:C(S_{i})\to\mathbb{R}^{>0}\) be the projections on the radial directions. **Claim 3.2**.: For any \((a,b)\in\mathbb{R}^{>0}\times\mathbb{R}^{>0}\) and any \(x=(\tilde{p}_{1},\tilde{p}_{2})\in C(S_{1})\times C(S_{2})\) there exists a unique \(v\in\mathbb{C}\simeq G_{\alpha}\) such that \(r_{1}(v\cdot x)=a\) and \(r_{2}(v\cdot x)=b\). Proof.: For \(\tilde{p}_{i}=(p_{i},t_{i})\), \(p_{i}\in S_{i}\), \(t_{i}\in\mathbb{R}^{>0}\) we have: \[v\cdot p_{1} =\phi_{1}^{\mathrm{Re}(v)R_{1}+\mathrm{Im}(v)\xi_{1}}(\tilde{p}_{ 1})=\phi_{1}^{\mathrm{Im}(v)\xi_{1}}(\phi_{1}^{\mathrm{Re}(v)R_{1}}(\tilde{p} _{1}))\] \[=\phi_{1}^{\mathrm{Im}(v)\xi_{1}}((p_{1},e^{\mathrm{Re}(v)}t_{1}))\] Since \(\mathrm{Im}(v)\xi_{1}\) acts only on the level sets of the cone, when we project to the radial direction we get: \[r_{1}(v\cdot\tilde{p}_{1})=e^{\mathrm{Re}(v)}t_{1}\] Similarly \(r_{2}(\alpha v\cdot\tilde{p}_{2})=e^{\mathrm{Re}(\alpha v)}t_{2}\). Therefore, what we need to show is that for any strictly positive \(a,b,t_{1},t_{2}\) there exists a unique \(v\in\mathbb{C}\) such that: \[\begin{cases}e^{\mathrm{Re}(v)}t_{1}=a\\ e^{\mathrm{Re}(\alpha)\mathrm{Re}(v)-\mathrm{Im}(\alpha)\mathrm{Im}(v)}t_{2}=b \end{cases}\] or \[\begin{cases}\mathrm{Re}(v)=\log(a)-\log(t_{1})\\ \mathrm{Re}(\alpha)\mathrm{Re}(v)-\mathrm{Im}(\alpha)\mathrm{Im}(v)=\log(b)- \log(t_{2})\end{cases}\] Since \(\mathrm{Im}\alpha\neq 0\), we have \[\mathrm{Im}v=\frac{\mathrm{Re}(\alpha)(\log(a)-\log(t_{1}))+\log(t_{2})-\log( b)}{\mathrm{Im}(\alpha)},\] and hence the solution \(v\) exists and is unique. Claim 3.2 provides an identification of the quotient \((C(S_{1})\times C(S_{2}))/G_{\alpha}\) with \(S_{1}\times S_{2}\), given by an explicit formula for \(\pi:C(S_{1})\times C(S_{2})\longrightarrow S_{1}\times S_{2}\). Denote, as in Claim 3.2, \[v(t_{1},t_{2}):=-\log(t_{1})+\frac{\sqrt{-1}}{\mathrm{Im}(\alpha)}(-\mathrm{ Re}(\alpha)\log t_{1}+\log t_{2}). \tag{4}\] Now the map \(\pi:C(S_{1})\times C(S_{2})\longrightarrow S_{1}\times S_{2}\) can be described as: \[\pi((p_{1},t_{1}),(p_{2},t_{2}))=\left(\phi_{1}^{\rm Im(v(t_{1},t_{2}))\xi_{1}}( p_{1}),\ \phi_{1}^{\rm Im(\alpha v(t_{1},t_{2}))\xi_{2}}(p_{2})\right). 
\tag{5}\] Since the action of \(G_{\alpha}\) defines a holomorphic map \[G_{\alpha}\times C(S_{1})\times C(S_{2})\to C(S_{1})\times C(S_{2})\] We conclude that \[(C(S_{1})\times C(S_{2}))/G_{\alpha}\simeq S_{1}\times S_{2}\] admits a complex structure compatible with the smooth product structure on \(S_{1}\times S_{2}\) and making the projection map \(\pi:C(S_{1})\times C(S_{2})\longrightarrow S_{1}\times S_{2}\) a holomorphic submersion. **Step 2.** We now aim to better understand the complex structure induced by \(\pi\). More precisely, we show that on the transverse distributions of each Sasakian, it acts like the complex structure on the cone, while it takes each Reeb field to the span of the two Reeb fields. To keep notation simple, we will deliberately use the same notation \(\xi_{i}\) for the Reeb field(s) both on the product of the Kahler cones and on the product of the Sasakian manifolds. Let \(X\in T_{p_{1}}S_{1}\) and \(x\in\pi^{-1}(p_{1},p_{2})\subset C(S_{1})\times C(S_{2})\) for some \(p_{2}\in S_{2}\). We see \(X\) as tangent in \(x\) to \(C(S_{1})\times C(S_{2})\). \(X\) can be extended to a vector field \(\tilde{X}\) on \(C(S_{1})\times C(S_{2})\), such that \(\tilde{X}\) is tangent to \(S_{1}\) and moreover \(\tilde{X}\) commutes with \(\xi_{1}\) (and hence with all multiples of \(\xi_{1}\)) in a neighborhood of \(x\). We can obtain such an extension by considering a chart on \(S_{1}\) in which \(\xi_{1}\) is a standard coordinate vector field, extending the expression of \(X\) in this chart to a constant vector field and multiplying it with a bump function. An extension \(\tilde{X}\) of \(X\) with \([\tilde{X},\xi_{1}]=0\) guarantees that \(d_{x}\phi_{1}^{\xi_{1}}(\tilde{X}_{x})=\tilde{X}_{\phi_{1}^{\xi_{1}}(x)}\) Further, we have: \[d_{x}\pi(\tilde{X}) =\frac{d}{dt}|_{t=0}\left(\pi((\phi_{t}^{\tilde{X}}(p_{1}),t_{1} ),\quad(p_{2},t_{2}))\right)\] \[=\frac{d}{dt}|_{t=0}\left(\phi_{1}^{\rm Im(v(t_{1},t_{2}))\xi_{1 }}(\phi_{t}^{\tilde{X}}(p_{1})),\quad\phi_{1}^{\rm Im(\alpha v(t_{1},t_{2})) \xi_{2}}(p_{2})\right)\] \[=\left(d_{p_{1}}\left(p\mapsto\phi_{1}^{\rm Im(v(t_{1},t_{2})) \xi_{1}}(p)\right)(\tilde{X}_{p_{1}}),\quad 0\right)_{\pi(x)}\] \[=X\] Similarly, for \(X\in T_{p_{2}}S_{2}\) we have: \[d_{x}\pi(\tilde{X})=\frac{d}{dt}|_{t=0}\left(\phi_{1}^{\rm{Imv}(t_{1},t_{2}) \xi_{1}}(p_{1}),\quad\phi_{1}^{\rm{Im}(\alpha v(t_{1},t_{2}))\xi_{2}}(\phi_{t}^ {\tilde{X}}(p_{2}))\right)=X\] For the first Euler field: \[d_{x}\pi(R_{1}) =\frac{d}{dt}|_{t=0}\pi\left((p_{1},e^{t}t_{1}),\quad(p_{2},t_{2})\right)\] \[=\frac{d}{dt}|_{t=0}\left(\phi_{1}^{\rm{Im}(v(e^{t}t_{1},t_{2})) \xi_{1}}(p_{1}),\quad\phi_{1}^{\rm{Im}(\alpha v(e^{t}t_{1},t_{2}))\xi_{2}}(p_ {2})\right)\] \[=\frac{d}{dt}|_{t=0}\left(\phi_{1}^{\xi_{1}}(p_{1},e^{t_{1}},t_{2 }))(p_{1}),\quad\phi_{1}^{\xi_{2}}(\alpha v(e^{t}t_{1},t_{2}))(p_{2})\right)\] \[=\frac{d}{dt}|_{t=0}\left(\rm{Im}(v(e^{t}t_{1},t_{2}))\right)( \xi_{1})_{\pi(x)}+\frac{d}{dt}|_{t=0}\left(\rm{Im}(\alpha v(e^{t}t_{1},t_{2})) \right)(\xi_{2})_{\pi(x)}\] Denote from now \(a=\rm{Re}\alpha,b=\rm{Im}\alpha\). According to (4), \(v(e^{t}t_{1},t_{2})=v(t_{1},t_{2})-t\left(1+\frac{a}{b}\sqrt{-1}\right)\). 
Hence \[d_{x}\pi(R_{1})=-\frac{1}{b}\left(a\xi_{1}+(a^{2}+b^{2})\xi_{2}\right)_{\pi(x)}\] For the second Euler field, since, by (4), \(v(t_{1},e^{t}t_{2})=v(t_{1},t_{2})+\frac{t}{b}\sqrt{-1}\), we deduce as before: \[d_{x}\pi(R_{2}) =\frac{d}{dt}|_{t=0}\pi\left((p_{1},t_{1}),\quad(p_{2},e^{t}t_{2} )\right)\] \[=\frac{d}{dt}|_{t=0}\left(\phi_{1}^{\rm{Im}(v(t_{1},e^{t}t_{2})) \xi_{1}}(p_{1}),\quad\phi_{1}^{\rm{Im}(\alpha v(t_{1},e^{t}t_{2}))\xi_{2}}(p_ {2})\right)\] \[=\frac{1}{b}\left(\xi_{1}+a\xi_{2}\right)_{\pi(x)}\] In summary, we have: \[d_{x}\pi(X) =X,\quad x=((p_{1},1),(p_{2},1)),\quad X\in T_{p_{1}}S_{1}\sqcup T _{p_{2}}S_{2} \tag{6}\] \[d_{x}\pi(R_{1}) =-\frac{1}{b}\left(a\xi_{1}+(a^{2}+b^{2})\xi_{2}\right)_{\pi(x)},\quad d_{x}\pi(R_{2})=\frac{1}{b}\left(\xi_{1}+a\xi_{2}\right)_{\pi(x)} \tag{7}\] **Step 3. The above family of complex structures does not admit any compatible Kahler metric**. **Step 3.1.** Let \(\eta_{1}\) be the pullback of the contact form on \(S_{1}\) through \(S_{1}\times S_{2}\to S_{1}\). Then \(d\eta_{1}\) is a semipositive \((1,1)\)-form. Indeed, to see that \(d\eta_{1}\) is \((1,1)\), it's enough to check that \(d\eta_{1}(z\pi_{*}X,z\pi_{*}Y)=z\bar{z}d\eta_{1}(\pi_{*}X,\pi_{*}Y)\) for \(z\in\mathbb{C}\). By (6) and holomorphicity of \(\pi\): \[d\eta_{1}(z\pi_{*}X,z\pi_{*}Y)= \mathrm{Re}(z)^{2}d\eta_{1}(X,Y)+\mathrm{Im}(z)^{2}\mathrm{d} \eta_{1}(\pi_{*}\mathrm{JX},\pi_{*}\mathrm{JY})\] \[+\mathrm{Re}(z)\mathrm{Im}(z)\left(\mathrm{d}\eta_{1}(\mathrm{X},\pi_{*}\mathrm{JY})+\mathrm{d}\eta_{1}(\pi_{*}\mathrm{JX},\mathrm{Y})\right)\] Suppose \(X\) is orthogonal to \(\langle R,\xi\rangle\) on its respective Sasakian manifold. If \(Y\) is also orthogonal, we are done since \(\pi_{*}JY=JY\) and \(d\eta_{1}\) is transversally Kahler on the cone. Otherwise \(Y\) is a multiple of a Reeb vector \(\xi_{i}\), so \(\pi_{*}JY\in\langle\xi_{1},\xi_{2}\rangle\), so \(\pi_{*}JY\in\ker d\eta_{1}\) and since also \(Y\in\ker d\eta_{1}\), the wanted equality checks trivially. Finally, if \(X\) is a multiple of a Reeb vector, the wanted equality checks trivially because again \(\{X,\pi_{*}JX\}\subset\ker d\eta_{1}\). Now checking semipositivity is equivalent by the holomorphicity of \(\pi\) to checking that for \(X\in TC(S_{1})\times TC(S_{2})\) we have \(d\eta_{1}(\pi_{*}X,\pi_{*}JX)\geq 0\). If \(X\) is tangent to either \(S_{1}\) or \(S_{2}\) and is transverse to the Euler and Reeb fields, then \(JX\) stays outside the distribution generated by the Euler and Reeb fields, and so by (6) \(d\eta_{1}(\pi_{*}X,\pi_{*}JX)=d\eta_{1}(X,JX)\) and the latter is a nonegative quantity because \(d\eta_{1}\) is semipositive on the cone. If \(X\) is either \(\xi_{1}\) and \(\xi_{2}\) then \(d\eta_{1}(\pi_{*}X,\pi_{*}JX)=0\) by (7). **Step 3.2.** Suppose \(S_{1}\times S_{2}\) is Kahler with Kahler form \(\omega\). \[d(\eta_{1}\wedge\omega^{\dim_{\mathbb{C}}(S_{1}\times S_{2})-1})=(d\eta_{1}) \wedge\omega^{\dim_{\mathbb{C}}(S_{1}\times S_{2})-1}\] because \(d\omega=0\). So by Stokes' Theorem \[\int_{S_{1}\times S_{2}}(d\eta_{1})\wedge\omega^{\dim_{\mathbb{C}}(S_{1} \times S_{2})-1}=0 \tag{8}\] Because \(d\eta_{1}\) is semipositive, \(d\eta_{1}\wedge\omega^{\dim_{\mathbb{C}}(S_{1}\times S_{2})-1}\) is a semipositive volume form, which vanishes if and only if \(d\eta_{1}\) vanishes. But then \(d\eta_{1}\) vanishes by (8), which contradicts the fact that \(d\eta_{1}\) is positive on the distribution transverse to \(\ker d\eta_{1}\). 
**Step 4.** Let \(S_{1}\), \(S_{2}\) be Sasakian manifolds of respective dimensions \(2n_{i}+1\) with \(n_{i}>1\). By the Kunneth formula \(H^{1}(S_{1}\times S_{2})=H^{1}(S_{1})\oplus H^{1}(S_{2})\). In view of Theorem 2.8, we can represent forms in \(H^{1}(S_{i})\) with basic forms. Hence, in view of Theorem 2.7, we can represent \([\eta]\in H^{1}(S_{1}\times S_{2})\) as \(\eta^{1,0}+\eta^{0,1}\), where \(\eta^{1,0}\) is holomorphic and closed and \(\eta^{0,1}\) is antiholomorphic and closed. To see that this is the case, suppose that \(\alpha\) is a holomorphic representative of a basic class on one of the Sasakian manifolds, say \([\alpha]\in H^{*}_{\rm bas}(S_{1})\). The fact that \(\alpha\) is a basic holomorphic form implies that \(\pi_{1}^{*}\alpha\) is holomorphic, where \(\pi_{1}:C(S_{1})\to S_{1}\) is the projection. We need to check that this implies that \(\alpha\) is holomorphic as a form on \(S_{1}\times S_{2}\) with the complex structure induced by the projection \(\pi\) from \(C(S_{1})\times C(S_{2})\). But by (6) and (7), we obtain \(\pi^{*}\alpha=\pi_{1}^{*}\alpha\) up to a constant, and hence \(\alpha\) is holomorphic. **Step 5.** Assuming \(S_{1}\times S_{2}\) is LCK, we represent the Lee form \(\theta\) as \(\theta=\theta^{1,0}+\theta^{0,1}\) with \(\theta^{1,0}\) holomorphic and closed and \(\theta^{0,1}\) antiholomorphic and closed. Thus we get \(dd^{c}\theta=0\). Then \(dd^{c}(\omega^{n-1})=\omega^{n-1}\wedge\theta\wedge J\theta\), so \[\int_{M}\omega^{n-1}\wedge\theta\wedge J\theta=0\] Combined with the fact that \(\theta\wedge J\theta\) is semipositive \((1,1)\), the above equality shows that \(\omega^{n-1}\wedge\theta\wedge J\theta=0\). So \(\theta\wedge J\theta=0\). Hence \(\theta=0\) since \(\theta\) and \(J\theta\) are linearly independent. This shows that \(S_{1}\times S_{2}\) is GCK, but then it also admits a Kahler structure, which is a contradiction by Step 3. \(\blacksquare\) **Remark 3.3**.: The same proof as in Step 3 shows that \(S_{1}\times S_{2}\) does not admit balanced metrics i.e. metrics with Hermitian form \(\omega\) satisfying \(d\omega^{\dim_{\mathbb{C}}(S_{1}\times S_{2})-1}=0\), since in that case we also obtain that \((d\eta_{1})\wedge\omega^{\dim_{\mathbb{C}}(S_{1}\times S_{2})-1}\) is exact. **Remark 3.4**.: The argument developed in Steps 3 through 5 also shows that the CEM complex structure defined by (1) does not admit any compatible locally conformally Kahler metric. ## 4 Complex submanifolds of the product of Sasakian manifolds Let \(S_{1},S_{2}\) be compact Sasakian manifolds with \(\dim_{\mathbb{R}}S_{i}=2n_{i}+1\) and with contact forms \(\eta_{1},\eta_{2}\). Let \(S_{1}\times S_{2}\) be their product with the complex structure induced by the action of \(G_{\alpha}\) on the product of their cones as in the proof of Step 1 of Theorem 3.1. **Theorem 4.1**.: _Let \(Z\subset S_{1}\times S_{2}\) be a complex submanifold of \(\dim_{\mathbb{C}}Z=k\) where the complex structure on \(S_{1}\times S_{2}\) is induced by the Calabi-Eckmann action on the product of the cones. Then \(Z\) is tangent to \(\ker(d\eta_{1}+d\eta_{2})\)._ Proof.: Let \(\eta=\eta_{1}+\eta_{2}\). Then We have: \[d(\eta\wedge(d\eta)^{k-1})=d\eta\wedge(d\eta)^{k-1}=(d\eta)^{k}\] So by Stokes' theorem we have: \[\int_{Z}(d\eta)^{k}=0\] Since outside \(\ker(d\eta)\), \(d\eta\) is strictly positive, and \(Z\) is a complex submanifold, we must thus have \(TZ\subset\ker(d\eta)\). 
## 5 Comparison with the Calabi-Eckmann-Morimoto complex structures Consider again the principal \(G_{\alpha}=\{(v,\alpha v):v\in\mathbb{C}\}\)-bundle \[\pi:C(S_{1})\times C(S_{2})\to S_{1}\times S_{2}\] where \(\alpha\in\mathbb{C}\setminus\mathbb{R}\). The natural question arises whether \(J_{a,b}\) defined by (1) coincides with the complex structure induced by \(\pi\). **Theorem 5.1**.: _For every fixed \(\alpha\in\mathbb{C}\setminus\mathbb{R}\), the complex structure induced by \(G_{\alpha}\) does not in general coincide with the complex structure \(J_{a,b}\) for any \(a,b\in\mathbb{R}\), \(b\neq 0\)._ Proof.: For general Sasakian manifolds \(S_{1}\) and \(S_{2}\), by uniqueness of the complex structure making \(\pi:C(S_{1})\times C(S_{2})\to S_{1}\times S_{2}\) a holomorphic submersion, the complex structures on \(S_{1}\times S_{2}\) coincide if and only if for any \(x=(p_{1},t_{1},p_{2},t_{2})\) and any \(X_{i}\in T_{(p_{i},t_{i})}C(S_{i})\), \(i=\overline{1,2}\), we have \[J_{a,b}d_{x}\pi(X_{1}+X_{2})=d_{x}\pi(J(X_{1}+X_{2}))\] For \(X_{1}=\xi_{1},X_{2}=0\) we have by (6) \[J_{a,b}\pi_{*}\xi_{1}=J_{a,b}\xi_{1}=\frac{1}{b}\left(-a\xi_{1}+\xi_{2}\right)\] while by (7) \[\pi_{*}(J\xi_{1})=-\pi_{*}(R_{1})=\frac{1}{\mathrm{Im}(\alpha)}\left(\mathrm{Re}( \alpha)\xi_{1}+((\mathrm{Re}(\alpha))^{2}+(\mathrm{Im}(\alpha))^{2})\xi_{2})\right)\] Furthermore, \[J_{a,b}\pi_{*}\xi_{2}=\frac{1}{b}\left(-(a^{2}+b^{2})\xi_{1}+a\xi_{2}\right)\] and \[\pi_{*}(J\xi_{2})=-\pi_{*}(R_{2})=-\frac{1}{\mathrm{Im}(\alpha)}(\xi_{1}+ \mathrm{Re}(\alpha)\xi_{2}).\] So if \(J_{a,b}\) coincides with the structure induced by \(\pi\) we obtain the following system of equations: \[\begin{cases}\mathrm{Re}(\alpha)b=-a\mathrm{Im}(\alpha),\quad\mathrm{Im}( \alpha)=\mathrm{b}(\mathrm{Re}(\alpha))^{2}+(\mathrm{Im}(\alpha))^{2})\\ b=(a^{2}+b^{2})\mathrm{Im}(\alpha),\quad\mathrm{Re}(\alpha)\mathrm{b}=-a \mathrm{Im}(\alpha)\end{cases}\] This leads to the equation: \[(\mathrm{Im}(\alpha))^{4}+(2(\mathrm{Re}(\alpha))^{2}-1)(\mathrm{Im}(\alpha)) ^{2}+(\mathrm{Re}(\alpha))^{4}=0\] which implies that \(|\mathrm{Im}(\alpha)|\leq 1\) and that \((\mathrm{Re}(\alpha))^{2}\leq\frac{1}{4}\). Hence, for \(\alpha\) such that these conditions are not met, we cannot find \((a,b)\) such that \(J_{a,b}\) coincides with the complex structure induced by \(\pi\). However, on \(\langle\xi_{1},\xi_{2}\rangle\)\(J\) is \(-J_{\mathrm{Re}\alpha,\mathrm{Im}\alpha}^{T}\) where the superscript is matrix transpose. \(\blacksquare\) ## 6 The Dolbeault cohomology of the product of Sasakian manifolds Let \(S_{1},S_{2}\) be compact Sasakian manifolds with the action of \(G_{\alpha}\) as in Step 1 of Theorem 3.1, \(\alpha=a+b\sqrt{-1}\), \(a\in\mathbb{R},b\in\mathbb{R}\setminus\{0\}\). Denote from now \(M:=S_{1}\times S_{2}\). Consider \(\eta_{1},\eta_{2}\) the two contact forms on \(M\). Let \(\eta:=\eta_{1}+\eta_{2}\), \(\omega_{0}:=d\eta\) and \(\eta^{0,1},\eta^{1,0}\) be the \((0,1)\) and \((1,0)\) parts of \(\eta\), respectively. Since \(\omega_{0}\) is a \((1,1)\)-form and \(\omega_{0}=d\eta=\partial\eta^{0,1}+\partial\eta^{1,0}+\bar{\partial}\eta^{0, 1}+\bar{\partial}\eta^{1,0}\), we obtain that \(\partial\eta^{1,0}=0\) and \(\bar{\partial}\eta^{0,1}=0\). Endow \(M\) also with a Hermitian metric such that the two Reeb fields are Killing, as follows. Consider \(V:=\langle\xi_{1},\xi_{2}\rangle\) with the frame \(\{\xi_{1},\xi_{2}\}\) and \(J_{\alpha}\) the complex structure induced by \(\pi_{\alpha}\) as in Step 1 of Theorem 3.1. 
Recall also that \(J_{a,b}\) is defined as in (1) and the metric \(g_{a,b}\) defined as in (2) is Hermitian with respect to \(J_{a,b}\). On \(V\) we have \(J_{\alpha}|_{V}=-(J_{a,b}|_{V})^{T}\) for \(a=\mathrm{Re}(\alpha),b=\mathrm{Im}(\alpha)\). Hence \(J_{\alpha}|_{V}\) is the negative of the morphism induced on \(V^{*}\) by \(J_{a,b}|_{V}\), so \(J_{\alpha}|_{V}\) is Hermitian with respect to \((g_{a,b}|_{V})^{-1}\). Since on \(V^{\perp}\)\(J_{a,b}\) coincides with \(J_{\alpha}\), the metric \[g_{\alpha}:= g_{1}+g_{2}-ab^{-2}\left(\eta_{1}\otimes\eta_{2}+\eta_{2}\otimes \eta_{1}\right)\] \[+\left(b^{-2}(a^{2}+b^{2})-1\right)\eta_{1}\otimes\eta_{1}+(b^{- 2}-1)\eta_{2}\otimes\eta_{2}\] is Hermitian on \(M\) with respect to \(J_{\alpha}\), where \(\eta_{i}\) and \(g_{i}\) are the contact forms and Riemannian metrics respectively on \(S_{i}\), extended with \(0\) on the Sasakian manifold they are not initially defined on. Moreover, \(\xi_{1},\xi_{2}\) are Killing with respect to \(g_{\alpha}\) because each \(\xi_{i}\) is Killing with respect to \(g_{i}\) and \(\mathrm{Lie}_{\xi_{i}}\eta_{i}=0\) (because \(S_{i}\) is contact with characteristic field \(\xi_{i}\) and by Cartan's formula). By a theorem of Myers and Steenrod ([13]), \(\mathrm{Iso}_{g_{\alpha}}(M)\) is a Lie group, which is compact since both Sasakian manifolds are compact. Consider \(K\) to be the closure of the subgroup generated by \(\phi_{t}^{\xi_{1}}\) and \(\phi_{t}^{\xi_{2}}\) inside \(\mathrm{Iso}_{g_{\alpha}}(M)\). By the closed subgroup theorem, \(K\) is also a (compact) Lie group. Take \(\Lambda^{*}(M)^{\mathrm{inv}}\) to be all forms on \(M\) which are invariant under \(K\). A standard continuity argument shows that \((\Lambda^{*}(M))^{\mathrm{inv}}=\{\alpha\in\Lambda^{*}(M):\mathrm{Lie}_{\xi_ {1}}\alpha=\mathrm{Lie}_{\xi_{2}}\alpha=0\}\). Consider also \((\Lambda^{*}(M))_{\mathrm{bas}}\) to be all the basic forms with respect to the foliation \(\langle\xi_{1},\xi_{2}\rangle\); clearly \((\Lambda^{*}(M))_{\mathrm{bas}}\subset(\Lambda^{*}(M))^{\mathrm{inv}}\) (see Definition 2.4). Locally, basic forms come from the leaf space of the foliation. Put \(\Lambda_{B,\eta^{0,1}}^{p,q}:=(\Lambda^{p,q})_{\mathrm{bas}}\oplus\left(\eta^ {0,1}\wedge\Lambda_{\mathrm{bas}}^{p,q-1}\right)\). Since \(\bar{\partial}\eta^{0,1}=0\), for each \(p\geq 0\) the restriction of \(\bar{\partial}\) gives a complex \[\bar{\partial}:\Lambda_{B,\eta^{0,1}}^{p,*}\rightarrow\Lambda_{B,\eta^{0,1}}^{ p,*+1}.\] Now consider the operator \(L_{\omega_{0}}:\Lambda^{*}(M)\rightarrow\Lambda^{*}(M)\) to be wedge product with \(\omega_{0}\). **Remark 6.1**.: Because \(\bar{\partial}\omega_{0}=0\), for \(p\geq 0\) we have that \(L_{\omega_{0}}\) is a morphism of complexes \[L_{\omega_{0}}:(\Lambda^{p,*},\bar{\partial})\rightarrow(\Lambda^{p+1,*+1}, \bar{\partial})\] The restriction and corestriction of \(L_{\omega_{0}}\) to invariant forms, \[L_{\omega_{0}}:\Lambda^{*}(M)^{\text{inv}}\to\Lambda^{*}(M)^{\text{inv}}\] is well defined because \(\text{Lie}_{\xi_{1}}\omega_{0}=\text{Lie}_{\xi_{2}}\omega_{0}=0\). In fact, \(L_{\omega_{0}}\) is a well defined morphism \(L_{\omega_{0}}:\Lambda^{p,q}_{B,\eta^{0,1}}\to\Lambda^{p+1,q+1}_{B,\eta^{0,1}}\), which follows because \(\omega_{0}\) is a basic \((1,1)\)-form, so whenever \(\beta\in(\Lambda^{p,q})_{\text{bas}}\), then \(\omega_{0}\wedge\beta\in(\Lambda^{p+1,q+1})_{\text{bas}}\). 
Together with Remark 6.1, this shows that for each fixed \(p\geq 1\), \(L_{\omega_{0}}\) is a morphism of complexes \[L_{\omega_{0}}:(\Lambda^{p-1,*}_{B,\eta^{0,1}},\bar{\partial})\longrightarrow( \Lambda^{p,*+1}_{B,\eta^{0,1}},\bar{\partial})\] Note also that \(\bar{\partial}\) takes invariant form to invariant forms since if \(\beta\) is an invariant \((p,q)\)-form then \(0=d\text{Lie}_{\xi_{i}}\beta=\text{Lie}_{\xi_{i}}d\beta=\text{Lie}_{\xi_{i}} \partial\beta+\text{Lie}_{\xi_{i}}\bar{\partial}\beta\) and so \(\text{Lie}_{\xi_{i}}\bar{\partial}\beta=0\) because \(\text{Lie}_{\xi_{i}}\bar{\partial}\beta\in\Lambda^{p,q+1}\) and \(\text{Lie}_{\xi_{i}}\partial\beta\in\Lambda^{p+1,q}\). Recall the following definition: **Definition 6.2**.: Let \((C^{*},d_{C}),(D^{*},d_{D})\) be complexes and \(f:C^{*}\to D^{*}\) be a morphism of complexes. The **cone of the morphism \(f\)** is defined to be the complex \((C(f),d_{f})\) with \(C(f)_{i}:=C_{i+1}\oplus D_{i}\) and for \(c\in C_{i+1},d\in D_{i}\), \(d_{f}(c,d):=(d_{C}(c),f(c)-d_{D}(d))\). **Lemma 6.3**.: _For each fixed \(p\geq 0\), the complex \(((\Lambda^{p,*}(M))^{\text{inv}},\bar{\partial})\) is isomorphic to the cone of_ \[L_{\omega_{0}}:(\Lambda^{p-1,*}_{B,\eta^{0,1}},\bar{\partial})\longrightarrow( \Lambda^{p,*+1}_{B,\eta^{0,1}},\bar{\partial})\] _shifted by \(-1\) i.e. to \(C(L_{\omega_{0}})[-1]\)._ Proof.: Forms on the tangent space of the foliation \(\langle\xi_{1},\xi_{2}\rangle\) are spanned by \(\eta_{1},\eta_{2}\), and hence by \(\eta^{0,1},\eta^{1,0}\). Therefore \[(\Lambda^{p,q})^{\text{inv}} =(\Lambda^{p,q}_{\text{bas}})\oplus\left(\Lambda^{p-1,q}_{\text{bas }}\wedge\eta^{1,0}\right)\oplus\left(\Lambda^{p,q-1}_{\text{bas}}\wedge\eta^{ 0,1}\right)\oplus\left(\Lambda^{p-1,q-1}_{\text{bas}}\wedge\eta^{0,1}\wedge \eta^{1,0}\right)\] \[=\Lambda^{p,q}_{B,\eta^{0,1}}\oplus\left(\Lambda^{p-1,q}_{B,\eta ^{0,1}}\wedge\eta^{1,0}\right)\] The differential \(\bar{\partial}\) acts on \(\left(\Lambda^{p-1,q}_{B,\eta^{0,1}}\wedge\eta^{1,0}\right)\) as \(\bar{\partial}_{\text{bas}}+L_{\omega_{0}}\), where \[\bar{\partial}_{\text{bas}}:\Lambda^{p-1,q}_{B,\eta^{0,1}}\wedge\eta^{1,0}\to \Lambda^{p-1,q+1}_{B,\eta^{0,1}}\wedge\eta^{1,0}\] is \(\bar{\partial}\) applied to the \(\Lambda^{p-1,q}_{B,\eta^{0,1}}\) part, while \(L_{\omega_{0}}\) is multiplication of forms in \(\Lambda^{p-1,q}_{B,\eta^{0,1}}\) with \(\bar{\partial}\eta^{1,0}=\omega_{0}\). This suggests seeing the complex \((\Lambda^{p-1,*}_{B,\eta^{0,1}},\bar{\partial})\) as identified with \((\Lambda^{p-1,*}_{B,\eta^{0,1}}\wedge\eta^{1,0},\bar{\partial}_{\rm bas})\); this identification is immediately obtained by simply dropping \(\eta^{1,0}\). Seeing \(L_{\omega_{0}}\) after this identification as a morphism of complexes \[L_{\omega_{0}}:(\Lambda^{p-1,*}_{B,\eta^{0,1}}\wedge\eta^{1,0},\bar{\partial}_ {\rm bas})\longrightarrow(\Lambda^{p,*+1}_{B,\eta^{0,1}},\bar{\partial}),\] the cone of \(L_{\omega_{0}}\) is in degree \(q-1\): \[\left(C(L_{\omega_{0}})[-1]\right)_{q}=\left(C(L_{\omega_{0}})\right)_{q-1}= \left(\Lambda^{p-1,q}_{B,\eta^{0,1}}\wedge\eta^{1,0}\right)\oplus\Lambda^{p,q }_{B,\eta^{0,1}}\] Thus \(\left(C(L_{\omega_{0}})[-1]\right)_{q}=\left(\Lambda^{p,q}\right)^{\rm inv}\). 
At position \(q-1\) of the cone, the cone differential takes an \(\alpha\wedge\eta^{1,0}\in\left(\Lambda^{p-1,q}_{B,\eta^{0,1}}\wedge\eta^{1, 0}\right)\) and a \(\beta\in\Lambda^{p,q}_{B,\eta^{0,1}}\) to \[\left(\bar{\partial}_{\rm bas}(\alpha\wedge\eta^{1,0}),\ L_{\omega_{0}}( \alpha\wedge\eta^{1,0})-\bar{\partial}\beta\right)\] Now \[\bar{\partial}_{\rm bas}(\alpha\wedge\eta^{1,0})=\left(\bar{\partial}\alpha \right)\wedge\eta^{1,0}\] and by the identification, \(L_{\omega_{0}}(\alpha\wedge\eta^{1,0})=\omega_{0}\wedge\alpha\in\Lambda^{p,q+1} _{B,\eta^{0,1}}\). Therefore, the action of the differential of the cone is precisely the same as that of \(\bar{\partial}\) and the complex of invariant forms is identified with the \(-1\) shift of the cone of \(L_{\omega_{0}}\). \(\blacksquare\) Furthermore, whenever a compact group acts by holomorphic isometries on a Hermitian manifold, its action on Dolbeault cohomology is trivial: **Theorem 6.4**.: [10, Theorem 3.3] _Let \(G\) be a compact Lie group acting on a compact Hermitian manifold \(M\) by holomorphic isometries. Then the action of \(G\) on Dolbeault cohomology, given by \(g\cdot[\alpha]:=[g^{*}\alpha]\) for \(g\in G\) and \([\alpha]\in H^{p,q}_{\bar{\partial}}(M)\), is trivial._ Consider the unique bi-invariant top form \(\nu\) on the compact Lie group \(K\) (defined above) with \(\int_{K}\nu=1\). For any \(\alpha\in\Lambda^{*}(M)\) consider \(\overline{\alpha}:=\int_{K}(k^{*}\alpha)d\nu(k)\). Then \(\bar{\alpha}\) is an invariant ([21, Proposition 13.11]) smooth ([21, Proposition 13.13]) form of the same degree as \(\alpha\). By Theorem 6.4, taking \(\alpha\) to be \(\bar{\partial}\)-closed, we have for some forms \(\beta(k)\) \[\int_{K}(k^{*}\alpha)d\nu(k) =\int_{K}\left(\alpha+\bar{\partial}\beta(k)\right)d\nu(k)\] \[=\alpha+\int_{K}(\bar{\partial}\beta(k))d\nu(k)=\alpha+\bar{ \partial}\left(\int_{K}\beta(k)d\nu(k)\right)\] Hence, the cohomology groups \(H^{p,q}_{\bar{\partial}}(M)\) are the same as the cohomology groups of \((\Lambda^{p,*}(M)^{\mathrm{inv}},\bar{\partial})\), and hence, by Lemma 6.3, \[H^{p,q}_{\bar{\partial}}(M)=H^{q}\left(\left(C\left(L_{\omega_{0}}:\Lambda^{p- 1,*}_{B,\eta^{0,1}}\to\Lambda^{p,*+1}_{B,\eta^{0,1}}\right)\right)[-1]\right). \tag{9}\] Now we can prove the following theorem (which has an analogue in the Vaisman setting, [10, Theorem 4.12]). **Theorem 6.5**.: _Let \(M\) be the product of two compact Sasakian manifolds with complex structure given by (5). 
The Dolbeault cohomology groups of \(M\) are computed as:_ \[H^{p,q}_{\bar{\partial}}(M)=\begin{cases}\frac{H^{p,q}_{\underline{\mathrm{s} }\underline{\partial}}\oplus[\eta^{0,1}]\wedge H^{p,q-1}_{\underline{\mathrm{s }}\underline{\partial}}(M)}{\mathrm{im}(\mathrm{L}_{\omega_{0}})},&p+q\leq \dim_{\mathbb{C}}(M)\\ \ker(L_{\omega_{0}})|_{H^{p,q}_{\underline{\mathrm{s}}\underline{\partial}} \oplus[\eta^{0,1}]\wedge H^{p,q-1}_{\underline{\mathrm{s}}\underline{ \partial}}(M)},&p+q>\dim_{\mathbb{C}}(M)\end{cases}\] Proof.: The cone of the morphism \(L_{\omega_{0}}\) gives a short exact sequences of complexes: \[0\longrightarrow\Lambda^{p,*+1}_{B,\eta^{0,1}}\longrightarrow C(L_{\omega_{0 }})\longrightarrow\left(\Lambda^{p-1,*}_{B,\eta^{0,1}}\right)[1]\] which gives rise to a long exact sequence in cohomology with connecting map \(L_{\omega_{0}}\): \[\cdots\longrightarrow H^{i-1}_{\bar{\partial}}\left(\left( \Lambda^{p-1,*}_{B,\eta^{0,1}}\right)[1]\right) \xrightarrow{L_{\omega_{0}}}H^{i}_{\bar{\partial}}\left(\Lambda ^{p,*+1}_{B,\eta^{0,1}}\right)\longrightarrow H^{i}(C(L_{\omega_{0}}))\longrightarrow\] \[\longrightarrow H^{i}_{\bar{\partial}}\left(\left(\Lambda^{p-1,*}_{B, \eta^{0,1}}\right)[1]\right)\xrightarrow{L_{\omega_{0}}}\cdots\] Taking into account shifts, degrees and (9) we thus have: \[\cdots\longrightarrow H^{i}_{\bar{\partial}}\left(\Lambda^{p-1,* }_{B,\eta^{0,1}}\right) \xrightarrow{L_{\omega_{0}}}H^{i}_{\bar{\partial}}\left(\Lambda ^{p,*+1}_{B,\eta^{0,1}}\right)\longrightarrow H^{p,i+1}_{\bar{\partial}}(M)\longrightarrow\] \[\longrightarrow H^{i+1}_{\bar{\partial}}\left(\Lambda^{p-1,*}_{B, \eta^{0,1}}\right)\xrightarrow{L_{\omega_{0}}}\cdots\] Now since \(\bar{\partial}\eta^{0,1}=0\), \[H^{i}_{\bar{\partial}}(\Lambda^{p-1,*}_{B,\eta^{0,1}})=H^{p-1,i}_{\rm bas}(M) \oplus[\eta^{0,1}]\wedge H^{p-1,i-1}_{\rm bas}(M)\] By Theorem 2.7, basic cohomology behaves just like the cohomology of a Kahler manifold with Kahler form \(\omega_{0}\). Hence, since \(M\) is compact, by the Hodge isomorphism theorem and the fact that the Kahler form is harmonic, the operator \[H^{t}_{\bar{\partial}}\left(\Lambda^{s,*}_{B,\eta^{0,1}}\right)\xrightarrow{L _{\omega_{0}}}H^{t}_{\bar{\partial}}\left(\Lambda^{s+1,*+1}_{B,\eta^{0,1}}\right)\] is injective whenever \(s+t\leq\dim_{\mathbb{C}}M-1\); by Poincare duality, it is surjective whenever \(s+t>\dim_{\mathbb{C}}M-1\). Hence, for \(p+i\leq\dim_{\mathbb{C}}M\), we obtain the short exact sequence \[0\longrightarrow H^{i-1}_{\bar{\partial}}\left(\Lambda^{p-1,*}_{B,\eta^{0,1} }\right)\xrightarrow{L_{\omega_{0}}}H^{i}_{\bar{\partial}}\left(\Lambda^{p,* }_{B,\eta^{0,1}}\right)\longrightarrow H^{p,i}_{\bar{\partial}}(M)\longrightarrow 0 \tag{10}\] while for \(p+i>\dim_{\mathbb{C}}M+1\) we obtain the short exact sequence \[0\longrightarrow H^{p,i}_{\bar{\partial}}(M)\longrightarrow H^{i}_{\bar{ \partial}}\left(\Lambda^{p-1,*}_{B,\eta^{0,1}}\right)\xrightarrow{L_{\omega_{ 0}}}H^{i+1}_{\bar{\partial}}\left(\Lambda^{p,*}_{B,\eta^{0,1}}\right)\longrightarrow 0 \tag{11}\] Finally, when \(p+i=\dim_{\mathbb{C}}M+1\), by the Hard Lefschetz theorem \(H^{i-1}_{\bar{\partial}}\left(\Lambda^{p-1,*}_{B,\eta^{0,1}}\right) \xrightarrow{L_{\omega_{0}}}H^{i}_{\bar{\partial}}\left(\Lambda^{p,*}_{B,\eta ^{0,1}}\right)\) is an isomorphism, so in particular surjective. 
Since, as mentioned above, \(H^{i}_{\bar{\partial}}\left(\Lambda^{p-1,*}_{B,\eta^{0,1}}\right) \xrightarrow{L_{\omega_{0}}}H^{i+1}_{\bar{\partial}}\left(\Lambda^{p,*}_{B, \eta^{0,1}}\right)\) is also surjective, we have the short exact sequence (11) also for the case when \(p+i=\dim_{\mathbb{C}}M+1\).
2304.02728
Full Resolution Deconvolution of Complex Faraday Spectra
Polarized synchrotron emission from multiple Faraday depths can be separated by calculating the complex Fourier transform of the Stokes' parameters as a function of the wavelength squared, known as Faraday Synthesis. As commonly implemented, the transform introduces an additional term $\lambda_0^2$, which broadens the real and imaginary spectra, but not the amplitude spectrum. We use idealized tests to investigate whether additional information can be recovered with a clean process restoring beam set to the narrower width of the peak in the real ``full" resolution spectrum with $\lambda_0^2=0$. We find that the $\lambda_0^2$ choice makes no difference, except for the use of a smaller restoring beam. With this smaller beam, the accuracy and phase stability are unchanged for single Faraday components. However, using the smaller restoring beam for multiple Faraday components we find a) better discrimination of the components, b) significant reductions in blending of structures in tomography images, and c) reduction of spurious features in the Faraday spectra and tomography maps. We also discuss the limited accuracy of information on scales comparable to the width of the amplitude spectrum peak, and note a clean-bias, reducing the recovered amplitudes. We present examples using MeerKAT L-band data. We also revisit the maximum width in Faraday depth to which surveys are sensitive, and introduce the variable $W_{max}$, the width for which the power drops by a factor of 2. We find that most surveys cannot resolve continuous Faraday distributions unless the narrower full restoring beam is used.
Lawrence Rudnick, William D. Cotton
2023-04-05T20:12:52Z
http://arxiv.org/abs/2304.02728v2
# Full Resolution Deconvolution of Complex Faraday Spectra ###### Abstract Polarized synchrotron emission from multiple Faraday depths can be separated by calculating the complex Fourier transform of the Stokes' parameters as a function of the wavelength squared, known as Faraday Synthesis. As commonly implemented, the transform introduces an additional term \(\lambda_{0}^{2}\), which broadens the real and imaginary spectra, but not the amplitude spectrum. We use idealized tests to investigate whether additional information can be recovered with a clean process restoring beam set to the narrower width of the peak in the real "full" resolution spectrum with \(\lambda_{0}^{2}=0\). We find that the \(\lambda_{0}^{2}\) choice makes no difference, _except for the use of a smaller restoring beam_. With this smaller beam, the accuracy and phase stability are unchanged for single Faraday components. However, using the smaller restoring beam for multiple Faraday components we find a) better discrimination of the components, b) significant reductions in blending of structures in tomography images, and c) reduction of spurious features in the Faraday spectra and tomography maps. We also discuss the limited accuracy of information on scales comparable to the width of the amplitude spectrum peak, and note a _clean-bias_, reducing the recovered amplitudes. We present examples using MeerKAT L-band data. We also revisit the maximum width in Faraday depth to which surveys are sensitive, and introduce the variable \(W_{max}\), the width for which the power drops by a factor of 2. We find that most surveys cannot resolve continuous Faraday distributions unless the narrower _full_ restoring beam is used. keywords: Magnetic fields, techniques: polarimetric, galaxies: magnetic fields ## 1 Introduction The technique of Faraday Synthesis, introduced by Burn (1966) and developed into a formal tool by Brentjens & de Bruyn (2005) (hereinafter **BdB**), allows one to separate polarized emission coming from regions of differing Faraday depths that are combined in the observed Stokes parameters. Since almost all optically thin radio sources are depolarized, i.e., their fractional polarization decreases as the wavelength increases, this implies that they have a range of Faraday depths within the solid angle of an individual observing beam. These can result from variations in the Faraday depth through different individual lines of sight, leading to what is termed "beam" depolarization, or through the interleaving of Faraday and synchrotron emitting regions along the line of sight, leading to "internal" depolarization. In either case, the range of Faraday depths in the polarized emission can be a powerful diagnostic of multiple emitting regions and the associated magnetized thermal plasmas. The ability to effectively separate multiple Faraday depths has therefore been a subject of intense interest. It has led to the deployment of wideband receiving systems and surveys, e.g., ASKAP (Heywood et al., 2016), MeerKAT (Jonas & MeerKAT Team, 2016), LOFAR (van Haarlem et al., 2013), VLASS (Lacy et al., 2020), and uGMRT (Sureshkumar, 2014), which provide the requisite coverage in \(\lambda^{2}\) space. It has spawned multiple efforts to deconvolve the direct or "dirty" Faraday spectrum, to remove the sidelobes arising from the incomplete \(\lambda^{2}\) coverage (e.g., Heald, 2009; Frick et al., 2011; Andrecut et al., 2012; Noriritu et al., 2021). 
Also, there is a strong interest in detecting "complexity" in Faraday spectra, i.e., the presence of more than one Faraday component (Brown et al., 2019; Alger et al., 2021; Cooray et al., 2021; Pratley et al., 2021). A parallel set of efforts has used parametric methods, e.g. Q-U fitting, based on prior knowledge of the form of the Faraday spectrum (e.g., Farnsworth et al., 2011; O'Sullivan et al., 2012). Sun et al. (2015) compare the performance of a variety of different techniques for extracting information about complexity in the Faraday spectra. The simplest formulation of Faraday synthesis, as introduced by Burn (1966), used the kernel \(e^{2i\phi\lambda^{2}}\) in the transform, where \(\phi\) is the Faraday depth. **BdB** introduced a different kernel, \(e^{2i\phi(\lambda^{2}-\lambda_{0}^{2})}\) with \(\lambda_{0}^{2}=\langle\lambda^{2}\rangle\), where \(\lambda\) ranges over the observed wavelengths; the stated goal was to increase the stability of the recovered phases/polarization angles in the deconvolution process. This formulation is widely used today. This smooths out the variations in the complex Faraday beam, as intended, while leaving the amplitude beam unchanged. During the deconvolution process, the clean components are then restored using the width of the amplitude beam. The question being asked in this paper is whether information is lost through this process, and can be recovered using a restoring beam matched to the narrower width of the main real lobe of the original (Burn, 1966) complex spectrum. Throughout, the Faraday spectrum produced using \(\lambda_{0}^{2}=\langle\lambda^{2}\rangle\) will be called _nominal_, \(\mathcal{F}_{nom}\), while the spectrum produced with no \(\lambda_{0}^{2}\) (effectively, \(\lambda_{0}^{2}=0\)) will be called _full_, \(\mathcal{F}_{full}\).

## 2 Faraday synthesis - nominal and full resolution

Both the full and nominal resolution spectra and deconvolution were implemented in the Obit package (Cotton, 2008)1. This implementation allows usage of Q and U images unequally spaced in frequency, which preserves some of the frequency resolution while meeting other needs, such as more uniform coverage in \(\lambda^{2}\) space. The task MFImage directly transforms the input Q and U data, without normalizing by I. To accurately recover the Faraday spectrum for a single component, the spectral dependence must be removed, e.g., by using Q/I and U/I; for multiple components in a single beam, with potentially different spectra, this may not be possible. In this paper, we assume only flat spectra for our simulated signals.

Footnote 1: http://www.cv.nrao.edu/~bocton/Obit.html

The Faraday spectrum is approximated using the Fourier series

\[F_{k}(x,y)\ =\ K\sum_{j=1}^{n}W_{j}\ e^{-2i\phi_{k}(\lambda_{j}^{2}-\lambda_{0}^{2})}\left[Q_{j}(x,y)+iU_{j}(x,y)\right] \tag{1}\]

for Faraday depth \(\phi_{k}\), where \(W_{j}\) is the weight for frequency sub-band \(j\) of \(n\), \(\lambda_{j}\) is the wavelength of frequency sub-band \(j\), \(\lambda_{0}\) is the reference wavelength, \(i\) is \(\sqrt{-1}\) and \(Q_{j}\) and \(U_{j}\) are the Stokes Q and U sub-band images at frequency \(j\). The normalization factor \(K\) is \(1/\sum_{j=1}^{n}W_{j}\). \(W_{j}\) may also include a correction for spectral index2, \(\alpha\):

Footnote 2: The spectral index is defined as \(I_{\nu}\propto\nu^{\alpha}\).
\[W_{j}\ =\ w_{j}e^{-\alpha\log(\nu_{j}/\nu_{0})} \tag{2}\] where \(\nu_{j}\) is the frequency of channel \(j\), \(\nu_{0}\) is the reference frequency (corresponding to \(\lambda_{0}\)) and the weight for sub-band \(j\), \(w_{j}\), is derived from the off-source RMS in the \(Q_{j}\) and \(U_{j}\) images. \(w_{j}\) is zero for frequency bins totally blanked due to RFI filtering; in our simulations the noise is the same in all channels, so \(w_{j}=1\) for the non-blanked channels. The task RMSyn optionally allows a correction for a default spectral index for an entire image; this was not used in our flat-spectrum simulations. A deconvolution over the entire frequency band is done by OBIT task RMSyn; it works on a pixel-by-pixel basis using a complex Hogbom CLEAN (Hogbom, 1974) similar in implementation to Heald et al. (2009). The CLEAN proceeds using a user specified loop gain (default 0.1) up to a user specified maximum number of iterations and/or a maximum residual to collect a set of complex delta functions in bins of \(\phi_{k}\). The Faraday beam is calculated over twice the extent in \(\phi\) as the Faraday spectrum to allow deconvolution over its full range. The complex Faraday spectrum is convolved with a Gaussian restoring beam, the choice of which is described in the next section. The following simulations use the MeerKAT L-band frequency coverage, 68 channels of 1% (varying) bandwidth, from 890 to 1681 MHz, with channels removed where MeerKAT experienced RFI, so as to more closely approximate realistic data sets. The trimmed data set contained 49 channels. The resulting coverage in \(\lambda^{2}\) space is shown in Figure 1. ### Choice of Reference Wavelength and "Resolution" We start by looking at the Faraday spectra created for the default case \(\mathcal{F}_{full}\), effectively with \(\lambda_{0}^{2}=0\), and for the **BdB** implementation, with \(\lambda_{0}^{2}=\langle\lambda^{2}\rangle\). In practice, to avoid numerical problems, we set \(\lambda_{0}^{2}=10^{-6}\) for \(\mathcal{F}_{full}\), but will refer to this as \(\lambda_{0}^{2}=0\) hereinafter. For \(\mathcal{F}_{nom}\), we use \(\langle\lambda\rangle^{2}\) (0.065 \(m^{2}\)) instead of \(\langle\lambda^{2}\rangle\) (0.067 \(m^{2}\)); this very small difference does not affect any of the results. The Faraday beams for these two choices are shown in Figure 2. The amplitude beams for the two methods are identical, since only the phases of the complex spectrum have been shifted. For \(\mathcal{F}_{nom}\), the main lobe of the real beam is considerably broader, by design, since **BdB** intended to minimize the changes in phase as a function of Faraday depth, \(\phi\). Below, we examine whether this shift of reference wavelength accomplishes its intended purpose. The widths of the main lobes in the spectra determine both the accuracy of Faraday depth determinations as well as the identification of complex structure (i.e., emission at more than a single Faraday depth). However, we specifically avoid using the term "resolution" at this point, because it presumes that we have established how the performance depends on the widths of the amplitude and real beams. 
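The two beams and their main-lobe widths can also be measured numerically, directly from the \(\lambda^{2}\) coverage. The short sketch below (Python/numpy; not the Obit implementation) evaluates the complex Faraday beam of Eq. 1 for a unit signal, for both \(\lambda_{0}^{2}=0\) and \(\lambda_{0}^{2}=\langle\lambda^{2}\rangle\), and measures the FWHM of the amplitude and real main lobes. The channel list and RFI gaps used here are placeholders (assumptions), so the printed widths will only approximate the values quoted below for the actual MeerKAT coverage.

```python
# Minimal numerical sketch (not the Obit implementation): compute the complex
# Faraday beam R(phi) = K * sum_j w_j exp(-2i phi (lambda_j^2 - lambda_0^2))
# for the two choices of lambda_0^2 and measure the FWHM of its amplitude and
# real main lobes.  The channel list and RFI gaps below are stand-ins
# (assumptions), not the actual MeerKAT flagging.
import numpy as np

c = 299792458.0                                   # speed of light, m/s
freqs = np.linspace(890e6, 1681e6, 68)            # 68 channels across L-band
keep = np.ones(68, dtype=bool)
keep[20:30] = False                               # crude stand-in for RFI gaps,
keep[40:49] = False                               # leaving 49 usable channels
lam2 = (c / freqs[keep])**2                       # lambda^2 in m^2
w = np.ones_like(lam2)                            # equal weights (flat noise)

def faraday_beam(phi, lam2, w, lam0sq):
    """Complex Faraday beam sampled at depths phi (rad m^-2)."""
    K = 1.0 / w.sum()
    return K * np.sum(w * np.exp(-2j * phi[:, None] * (lam2 - lam0sq)), axis=1)

def fwhm(phi, profile):
    """Width of the central lobe: walk outward from the peak to half power."""
    i0 = int(np.argmax(profile))
    half = 0.5 * profile[i0]
    lo, hi = i0, i0
    while hi < len(phi) - 1 and profile[hi] > half:
        hi += 1
    while lo > 0 and profile[lo] > half:
        lo -= 1
    return phi[hi] - phi[lo]

phi = np.linspace(-200.0, 200.0, 4001)
for name, lam0sq in [("full    (lambda_0^2 = 0)", 0.0),
                     ("nominal (lambda_0^2 = <lambda^2>)", lam2.mean())]:
    R = faraday_beam(phi, lam2, w, lam0sq)
    print(f"{name:34s}  FWHM|R| = {fwhm(phi, np.abs(R)):5.1f}   "
          f"FWHM Re(R) = {fwhm(phi, R.real):5.1f}   rad m^-2")
```

With any realistic coverage, the amplitude widths agree for the two choices of \(\lambda_{0}^{2}\), while the real-lobe width is markedly narrower for \(\lambda_{0}^{2}=0\), which is the behaviour shown in Figure 2.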
Instead, we give empirically-based names to these widths, namely:

\[\Phi_{nom}\equiv\mathrm{FWHM\ of\ the\ real\ peak\ for}\ \lambda_{0}^{2}=\langle\lambda\rangle^{2},\ \mathrm{and} \tag{3}\]

\[\Phi_{full}\equiv\mathrm{FWHM\ of\ the\ real\ peak\ for}\ \lambda_{0}^{2}=0. \tag{4}\]

The Faraday amplitude spectrum is calculated from its convolved real and imaginary parts and is identical for \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\). For \(\mathcal{F}_{nom}\), the widths of the real beam and the amplitude beam are almost the same, by construction. For \(\mathcal{F}_{full}\), the real beam has a much narrower main peak, and using this narrower width for the restoring beam forms the basis for the experiments in this paper. See Figure 2.

The exact values of \(\Phi\) depend on the details of coverage in \(\lambda^{2}\) space, gaps in coverage, any weighting, etc. Approximate expressions are very useful, however, and that for \(\mathcal{F}_{nom}\) is given by Dickey et al. (2019) as:

\[\Phi_{nom}\approx\frac{3.8}{\lambda_{max}^{2}-\lambda_{min}^{2}}, \tag{5}\]

since for \(\mathcal{F}_{nom}\), where the widths of the real and amplitude peaks are almost the same (see Figure 2), \(\Phi_{nom}\approx\delta\phi\), the resolution given by Dickey et al. (2019). To approximate the value of \(\Phi_{full}\), we note that using the integral version of Eq. 1, zero values in \(\mathbb{R}(\mathcal{F}_{full})\) occur when

\[2\phi\lambda_{max}^{2}=2\phi\lambda_{min}^{2},\quad\mathrm{and} \tag{6}\]

\[2\phi\lambda_{max}^{2}=\pi-2\phi\lambda_{min}^{2}. \tag{7}\]

The first condition (\(\lambda_{max}=\lambda_{min}\)) is not meaningful, and the second condition is satisfied for

\[\phi=\frac{\pi}{2(\lambda_{max}^{2}+\lambda_{min}^{2})}. \tag{8}\]

The FWHM of one half-cycle of a sine wave occurs at approximately the value of the first zero crossing. Fitting a Gaussian to the main lobe of the real beam yields a slightly different value, which we adopt here, of

\[\Phi_{full}\approx\frac{2}{\lambda_{max}^{2}+\lambda_{min}^{2}}. \tag{9}\]

Figure 1: Wavelength\({}^{2}\) coverage for the simulations presented in this paper. It approximates that available for actual data from MeerKAT L-band (e.g., Knowles et al., 2021). Lines denote the central wavelengths for each channel and the coverage is continuous except in the large gaps.

We fit Gaussians to the central peak in the real spectra (Figure 2). We find \(\Phi_{nom}=45\) rad m\({}^{-2}\) and \(\Phi_{full}=16\) rad m\({}^{-2}\), similar to the values calculated from Eqs. 5 and 9 of 42 and 14 rad m\({}^{-2}\), respectively.

Table 1 summarizes the Faraday spectrum parameters for a variety of telescopes/receivers/surveys that are used for polarization studies. One key parameter is \(\rho=\frac{\lambda_{max}}{\lambda_{min}}\), which can affect the impact of using \(\Phi_{full}\) and the ability to resolve complex structures. We can write

\[\frac{\Phi_{nom}}{\Phi_{full}}\ =\ 1.9\ \frac{\rho^{2}+1}{\rho^{2}-1}. \tag{10}\]

For very wide bands, where \(\rho\gg 1\), the reduction in width using \(\Phi_{full}\) is \(\approx\)1.9. For narrow band observations, where \(\rho\) approaches 1, \(\Phi_{full}\) approaches a constant value \(\lambda_{max}^{-2}\approx\lambda_{min}^{-2}\) while \(\Phi_{nom}\) becomes very large. The relative performance of \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\) shown in this paper is for the case of \(\rho\sim 2\), similar to other wideband surveys; whether the results are applicable to much narrower band observations, such as for Apertif and the WSRT 92cm studies of **BdB**, would need further study.
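The approximate expressions of Eqs. 5, 9 and 10 are simple to evaluate for any band; a minimal helper (a sketch only, with illustrative band edges taken from Table 1) is given below. Values computed this way agree with the approximate entries in Table 1; the MeerKAT row of the table uses measured widths instead.

```python
# Small helper (assumption: the closed-form estimates of Eqs. 5, 9 and 10 only;
# real coverage with gaps and weighting will differ).  Band edges are in MHz.
c = 299792458.0  # m/s

def faraday_widths(f_lo_mhz, f_hi_mhz):
    """Return (Phi_nom, Phi_full, ratio) in rad m^-2 for a band."""
    lam2_max = (c / (f_lo_mhz * 1e6))**2   # longest wavelength = lowest frequency
    lam2_min = (c / (f_hi_mhz * 1e6))**2
    phi_nom  = 3.8 / (lam2_max - lam2_min)           # Eq. 5
    phi_full = 2.0 / (lam2_max + lam2_min)           # Eq. 9
    return phi_nom, phi_full, phi_nom / phi_full     # ratio, cf. Eq. 10

for survey, band in [("MeerKAT L-band", (900, 1600)),
                     ("POSSUM Band 1",  (800, 1088)),
                     ("VLASS",          (2000, 4000)),
                     ("LOFAR (HBA)",    (120, 240))]:
    pn, pf, r = faraday_widths(*band)
    print(f"{survey:15s}  Phi_nom = {pn:6.1f}  Phi_full = {pf:6.2f}  ratio = {r:4.1f}")
```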
Of particular interest is whether a survey can have sensitivity to a continuous distribution of Faraday depths - quantified here by a new parameter, _maximum-width_, \(W_{max}\approx 0.67\,\lambda_{min}^{-2}(1+\rho^{-2})\)3, and simultaneously have a sufficiently narrow Faraday beam to resolve the structure. As we will derive in the Appendix, this simultaneous condition is marginally met in most cases when \(\Phi_{full}\) is used, but for \(\Phi_{nom}\), this requirement is satisfied only for \(\rho\ >\ 2.4\); this is true only for SKA1-Mid and SKA1-Low in this survey compilation. Thus, none of the other surveys will be able to resolve continuous Faraday structures if \(\Phi_{nom}\) is used.

Footnote 3: Previously, the quantity _max-scale_, as defined by **BdB**, was used to address this issue. In the Appendix, we show that _max-scale_ \(=\pi/\lambda_{min}^{2}\) significantly overestimates the sensitivity to broad Faraday depth distributions.

Before comparing results with these two methods, and their two different restoring beams, we compare their outputs with a single fixed restoring beam. If we start with cubes of \(Q(\lambda^{2}),U(\lambda^{2})\), the amplitude cubes \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\) produced from them will be identical, prior to deconvolution. The phases, and thus the real and imaginary spectra, will differ, since \(\mathcal{F}_{full}\) phases are at \(\lambda_{0}=0\), and \(\mathcal{F}_{nom}\) phases are at \(\lambda_{0}=\langle\lambda\rangle\). We performed a variety of different experiments with simulated Q and U distributions similar to those in the following sections where they are described in more detail; the experiments included pure signals and ones with added random noise. We found that, after deconvolution, the results were still identical for \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\), _as long as the same restoring beam was used_. The results of one such test are shown in Figure 3.

Figure 3: Faraday spectra, one in each row, for a series of simulated signals. Each simulation includes two input delta-function Faraday components at various separations, indicated by the black lines, with the addition of random noise in each \(\lambda^{2}\) channel. The same noise is used for both \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\). Results are shown for both, _but using the same restoring beam corresponding to \(\Phi_{full}=16\) rad m\({}^{-2}\)_. Top frames show the amplitude and phase spectra for \(\mathcal{F}_{nom}\), while bottom frames show the same for \(\mathcal{F}_{full}\). By default, for \(\mathcal{F}_{full}\), the phases are those at 0 wavelength, while for \(\mathcal{F}_{nom}\), the phases are at \(\lambda_{0}\). Note that even for \(\mathcal{F}_{full}\), there are phase gradients across the spectra of each component that arise during the clean process. This will become relevant in other experiments described below.

Figure 2: The complex Faraday spectra for \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\). The amplitudes, real and imaginary spectra are shown in black, red, and blue, respectively. “X” symbols denote the \(\mathcal{F}_{nom}\) results, while open circles denote the \(\mathcal{F}_{full}\) results. These same symbols will be used where needed in all subsequent figures. Note that the amplitude spectrum is identical for the two methods; only the real and imaginary parts differ.
The top frames show the Faraday amplitude and phase spectra for \(\mathcal{F}_{nom}\), with a different experiment in each row, while the bottom frames show the corresponding results for \(\mathcal{F}_{full}\). The amplitudes are identical, within rounding errors, as are the phases, after de-rotation for \(\mathcal{F}_{nom}\) by \(2\phi\lambda_{0}^{2}\). _Thus, the comparison between \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\), as discussed in the rest of the paper, is equivalent to a comparison of only the different restoring beams which are used, and not the choice of \(\lambda_{0}\)._

## 3 Recovery of single Faraday components

_Key Findings: \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\) synthesis/beams produce nearly identical results in the recovery of the Faraday depth, polarization angle and amplitude of a single \(\delta\) component. Despite the smaller real beam, there is no increased accuracy for \(\mathcal{F}_{full}\). At the same time, the benefits of the suggested phase stability for \(\mathcal{F}_{nom}\) do not result in increased accuracy for the polarization angles._

Hereinafter, all results from \(\mathcal{F}_{full}\) (\(\mathcal{F}_{nom}\)) use \(\Phi_{full}\) (\(\Phi_{nom}\)) for their respective restoring beams. Our first test was to compare the ability of \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\) resolution Faraday spectra to recover the true parameters of a polarized signal with a single Faraday depth, in the presence of noise. To that end we simulated signals with Faraday depths ranging from 60 rad m\({}^{-2}\) to 160 rad m\({}^{-2}\). For each depth, we added noise to Q and U at each sampled frequency, for 101 different realizations of the noise (see Figure 4). The signal:noise per frequency channel was very low, 1.5 in each realization. With 49 frequency channels, the expected signal:noise in the Faraday spectrum was 10.5.

The cleaned, restored Faraday spectra for a single realization at Faraday depth \(\phi=90\) rad m\({}^{-2}\) are shown in Figure 5. The observed signal:noise values (peak over mean off-peak) were 9.7 for \(\mathcal{F}_{nom}\), almost exactly as expected, and 17.6 for \(\mathcal{F}_{full}\). The \(\mathcal{F}_{nom}\) amplitude spectrum also appears to be a (not quite perfect) convolution of the \(\mathcal{F}_{full}\) amplitude spectrum. Some differences occur because the convolution by the broader beam happens in the complex space, not in the amplitude spectrum. Below, we will examine how this _apparent_ higher signal:noise, and especially the narrower FWHM (\(\Phi\)), translates into the uncertainty \(\sigma_{\phi}\), as well as the amplitude and phase of the recovered signals. The expected uncertainty in \(\phi\) is given by

\[\sigma_{\phi}\ =\ \Phi/(2\times SNR) \tag{11}\]

where \(\Phi\) is the full width at half maximum of the actual resolution and \(SNR\) the actual signal to noise ratio.

Table 1: Faraday spectrum parameters

| Survey | Freq. (MHz) | Wavelength\({}^{2}\) (cm\({}^{2}\)) | Nom. Res. (rad m\({}^{-2}\)) | Full Res. (rad m\({}^{-2}\)) | Ratio (Nom/Full) | \(W_{max}\) (rad m\({}^{-2}\)) | \(\lambda_{max}/\lambda_{min}\) |
|---|---|---|---|---|---|---|---|
| MeerKAT L-band | 900 - 1600 | 318 - 1135 | 45* | 16* | 2.8* | 25 | 1.9 |
| POSSUM Band 1 | 800 - 1088 | 760 - 1406 | 59 | 9 | 6.4 | 14 | 1.4 |
| POSSUM Band 2 | 1152 - 1440 | 434 - 678 | 156 | 18 | 8.7 | 25 | 1.3 |
| VLASS | 2000 - 4000 | 56 - 225 | 225 | 71 | 3.2 | 148 | 2.0 |
| LOFAR (HBA) | 120 - 240 | 15625 - 62500 | 0.8 | 0.3 | 3.2 | 0.5 | 2.0 |
| uGMRT Band 3 | 250 - 500 | 3600 - 14400 | 3.5 | 1.1 | 3.2 | 2 | 2.0 |
| uGMRT Band 4 | 550 - 850 | 1146 - 2975 | 22.0 | 4.7 | 4.6 | 8 | 1.5 |
| Apertif | 1130 - 1430 | 440 - 705 | 144 | 17.5 | 8.2 | 25 | 1.3 |
| WSRT (92cm) | 319 - 365 | 6560 - 9025 | 15.2 | 1.3 | 11 | 2 | 1.18 |
| SKA1 Low | 50 - 350 | 7347 - 360000 | 0.11 | 0.05 | 2.0 | 0.9 | 7.0 |
| SKA1 Mid 1 | 350 - 1050 | 816 - 7347 | 5.8 | 2.5 | 2.4 | 9 | 3.0 |
| SKA1 Mid 2 | 950 - 1760 | 291 - 997 | 53.8 | 15.5 | 3.5 | 30 | 1.9 |
| SKA1 Mid 3 | 1650 - 3050 | 97 - 331 | 163 | 47 | 3.5 | 89 | 1.8 |

Note: * indicates measured values for the MeerKAT coverage discussed here. All other Faraday spectrum parameters use the approximate calculations described in the text.

Figure 4: Q, U data in black and red, respectively, used for the single Faraday depth experiment at depth \(\phi\)=90 rad m\({}^{-2}\). Solid curves show the input model, and each point indicates the average over 101 realizations. Vertical bars indicate the rms scatter in Q and U among the 101 realizations.

We compare the results from the two methods over a large range of input depths. Figure 6 shows the observed Faraday depth as a function of input depth, averaged over the 101 realizations at each depth. The results show that the peak locations of \(\phi\) are practically identical for the two methods, as was expected, but had not been previously demonstrated. Looking now more closely at the scatter in the recovered depths among the 101 realizations at each depth, Eq. 11 predicts \(\sigma_{\phi}\)=2.3 (0.45) rad m\({}^{-2}\) for \(\mathcal{F}_{nom}\) (\(\mathcal{F}_{full}\)). The observed scatter was 2.1 (2.0) rad m\({}^{-2}\) for \(\mathcal{F}_{nom}\) (\(\mathcal{F}_{full}\)), on average. This is the first important result: the two methods generate the **same** error in measuring the Faraday depth in the presence of noise. The fact that \(\Phi_{full}<\Phi_{nom}\) does _not_ improve the accuracy \(\sigma_{\phi}\).

The recovered amplitudes are also well-correlated between the two methods, although their average values differ, as shown in Figure 6. On average, the \(\mathcal{F}_{nom}\) amplitudes are \(\approx 4\%\) low, while the \(\mathcal{F}_{full}\) amplitudes are \(\approx 10\%\) low. By looking at the amplitudes in the dirty spectra, we have verified that this is a _clean bias_, as described by Condon et al. (1998), where spurious clean components on the sidelobes, whether positive or negative, reduce the amplitude of the peak. This bias is on the order of the rms scatter, here in the Faraday spectrum, and depends on the amplitude of the sidelobes and the depth of cleaning.
This bias will require correction in any catalogs created using cleaned Faraday spectra. Since the amplitude of the correction depends on the details of the data and processing, simulations will be required in each project.

Finally, we turned to the recovered values for the polarization angle \(\chi_{0}\). Figure 6 plots the average values of the error in \(\chi_{0}\), \(\delta\chi_{0}\), and the rms scatter for the 101 realizations at each Faraday depth (\(\delta\chi_{0}=\chi_{0}\) since the input \(\chi_{0}=0\)). The key question is whether the rms scatter in \(\chi_{0}\) among the 101 realizations at each depth is smaller for \(\mathcal{F}_{nom}\) as a result of the improved phase stability suggested by **BdB**. For \(\mathcal{F}_{full}\), the scatter is 8.4 degrees. For \(\mathcal{F}_{nom}\), we calculated the rms scatter in two different ways. First, we rotated \(\chi_{0}\) back to zero wavelength assuming the _input_ Faraday depth, as was done for the averaged points. This results in an rms scatter of 3.3 degrees; this is much less than for \(\mathcal{F}_{full}\), and is the phase stability claimed by **BdB**. However, in an actual experiment, the true Faraday depth would not be known, so we did a second rms calculation based on correcting the observed values of \(\chi_{0}\) to zero wavelength using the _observed_ Faraday depth. This results in a scatter of 7.4 degrees, close to the rms scatter for \(\mathcal{F}_{full}\). We thus conclude that \(\mathcal{F}_{nom}\), despite its superior phase stability at a wavelength of \(\lambda_{0}\), offers little or no advantage in the accuracy with which \(\chi_{0}\) can be recovered at \(\lambda=0\). As mentioned earlier, this conclusion for the MeerKAT bandpass would need to be verified for narrowband surveys that have a much larger ratio of \(\Phi_{nom}/\Phi_{full}\) (see Table 1).

## 4 Recovery of complex Faraday structure

In this section, we examine the ability of nominal and full resolution Faraday synthesis to detect the presence of complex Faraday structure, i.e., where more than one isolated Faraday component is present. The most important regimes are those with Faraday structure on the order of the nominal resolution. We examine two limiting, idealized noise-free cases: a continuous distribution of polarized components with constant amplitude, extending over a finite width in Faraday depth, i.e., a "tophat" distribution, and the more restricted example of two Faraday components separated in depth by \(\lesssim 1\) to \(\approx 5\times\) the nominal resolution.

### Continuous distributions

_Key findings: For a constant amplitude distribution of Faraday depths, at small widths the Faraday spectrum appears as a broadened Gaussian, and at large widths, as two Faraday peaks at the edges. \(\mathcal{F}_{full}\) is \(\approx 2\times\) more powerful than \(\mathcal{F}_{nom}\) in its ability to detect the input Faraday complexity for widths up to at least \(\Phi_{full}\). For \(\mathcal{F}_{full}\) (but not for \(\mathcal{F}_{nom}\)), suggestions of the shape of the input distribution are apparent in the spectra for widths from \(\approx 0.6-1.5\times\Phi_{full}\)._

We approximate a continuous Faraday distribution with constant amplitude in Faraday depth (a "tophat") by summing in Q and U a series of individual Faraday delta functions separated by 0.25 rad m\({}^{-2}\). The amplitudes of the components are normalized so that the total input signal flux is constant, independent of the tophat width.
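For concreteness, such a tophat test signal can be constructed as in the sketch below; this is a minimal illustration (flat spectrum, unit total flux, an assumed 49-channel \(\lambda^{2}\) sampling), not the code used for the simulations in this paper.

```python
# Minimal sketch (assumptions: flat spectrum, unit total polarized flux, an
# illustrative 49-channel lambda^2 sampling) of a tophat test signal: a
# constant-amplitude Faraday distribution approximated by summing many
# delta-function components in Q + iU, optionally with a linear ramp of
# polarization angle across the distribution.
import numpy as np

c = 299792458.0
lam2 = (c / np.linspace(890e6, 1681e6, 49))**2     # lambda^2 per channel, m^2

def tophat_qu(lam2, centre=60.0, width=20.0, chi_ramp_deg=0.0,
              total_flux=1.0, step=0.25):
    """Q(lambda^2), U(lambda^2) for a tophat Faraday distribution.

    centre, width : centre and full width of the distribution (rad m^-2)
    chi_ramp_deg  : change of polarization angle (deg) across the distribution
    step          : spacing of the summed delta-function components (rad m^-2)
    """
    depths = np.arange(centre - width / 2.0, centre + width / 2.0 + step / 2.0, step)
    amp = total_flux / len(depths)                  # keeps total input flux constant
    chis = np.deg2rad(np.linspace(0.0, chi_ramp_deg, len(depths)))
    P = np.zeros_like(lam2, dtype=complex)
    for phi, chi in zip(depths, chis):
        P += amp * np.exp(2j * (chi + phi * lam2))  # one delta-function component
    return P.real, P.imag

Q, U = tophat_qu(lam2, centre=60.0, width=20.0, chi_ramp_deg=45.0)
# Q and U can then be transformed with Eq. (1) and cleaned/restored as above.
```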
We consider cases both where the polarization angle is constant across all components, and also where a gradient in angle is incorporated. Figure 7 shows the resulting Faraday spectra for several different widths, centered on a Faraday depth of 60 rad m\({}^{-2}\). The immediate realization is that there is _no_ width at which a clear tophat shape is seen for \(\mathcal{F}_{nom}\). When the width is too small, the Faraday spectrum appears as a broadened Gaussian. When the width is too large, the tophat structure is lost, and is replaced by a pair of narrow components at the edges of the distribution. This behavior was identified by **BdB**, who noted (in their Eq. 64) that in order to resolve complex structure, the Faraday resolution must be less than the maximum width where the signal can be recovered. This is discussed further, below.

A more comprehensive display of the responses to the tophat continuous distribution is shown in Figure 8. At each tophat width, we averaged 100 different tophat distributions with different gradients in polarization angle, ranging from a change in position angle across the distribution from 0 to 90 degrees. The behavior seen in the 1D plots of Figure 7 can also be seen here; a broadened Gaussian separating into two branches as the width increases. For \(\mathcal{F}_{full}\) only, the range of widths from \(\approx\)10-25 rad m\({}^{-2}\) (\(0.6-1.5\times\Phi_{full}\)) shows suggestions of the input tophat distribution.

Figure 5: \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\) deconvolved, restored Faraday spectra for one realization at depth \(\phi\)=90 rad m\({}^{-2}\). The peaks have been normalized to unity. Here and throughout, open circles (\(\circ\)) are used for \(\mathcal{F}_{full}\) and X’s for \(\mathcal{F}_{nom}\).

We now look quantitatively at the observed loss of power as a function of the width of the Faraday distribution. We start by looking at the peak amplitude in the Faraday spectrum, and find that it falls by a factor of 2, for both \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\), at \(\approx 20\) rad m\({}^{-2}\) (Figure 9). This is much less than the _max-scale_ of 109 rad m\({}^{-2}\) predicted by **BdB**. We can even look at the average power over the entire spectrum, which would be difficult to do in practice, and find that the power drops by 2 at \(\approx 40\) rad m\({}^{-2}\). Our new estimate of the appropriate width to half-power, \(W_{max}\), is 25 rad m\({}^{-2}\), comparable to what is observed.

We now address the critical question about whether \(\mathcal{F}_{full}\) is superior to \(\mathcal{F}_{nom}\) in identifying Faraday complexity, i.e., the presence of more than a single \(\delta\) function component. We examine this by simply finding the peak in each spectrum and then subtracting it using the restoring beam of width \(\Phi_{nom}\) or \(\Phi_{full}\), as appropriate. This is equivalent to not restoring any clean components at and adjacent to the peak in the spectrum.4 One example of this process, for an input width just slightly above \(\Phi_{full}\), is shown in Figure 10. We then measure the total residual signal in the spectrum as a percentage of the total signal before subtraction. As can be seen in Figure 10, the percentage residual is significantly smaller for \(\mathcal{F}_{nom}\) than for \(\mathcal{F}_{full}\), as expected. This is unavoidable because the peak amplitude in \(\mathcal{F}_{nom}\) represents an integration over a larger range of Faraday depths than for \(\mathcal{F}_{full}\).
In this particular example using the MeerKAT L-band, with \(\Phi_{nom}/\Phi_{full}\approx 3\), this leads to a factor of \(\approx\)2 improvement in fractional residual power, i.e., the ability to detect complexity; this improvement applies for input widths \(\leq\Phi_{full}\), as shown in Figure 11.

Footnote 4: An alternative process to search for complexity could use a loop gain of 1 in cleaning, and subtract out only one component; we did not explore that option here.

Figure 8: Faraday spectra along the horizontal axis at each width as indicated along the vertical axis. The cyan boxes show selected _input_ tophat widths. Top: \(\mathcal{F}_{nom}\); Bottom: \(\mathcal{F}_{full}\). The spectra for each width have been averaged over 100 different realizations, each with a ramp in polarization angle across the tophat, with total ranges extending from 0 to 90 degrees. At widths \(\gtrsim 25\) rad m\({}^{-2}\), the spectra are dominated by peaks at the edges of the input tophat, as opposed to its actual continuous distribution.

Figure 6: Left: Observed Faraday depths, as a function of input Faraday depth, each averaged over 101 noise realizations. Center: Average amplitude recovered at each Faraday depth, comparing results from \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\). Right: Average polarization angle \(\chi_{0}\) recovered at each Faraday depth, comparing results from \(\mathcal{F}_{nom}\) correcting back to \(\lambda=0\) based on the _input_ depth and \(\mathcal{F}_{full}\). The rms scatter in \(\chi_{0}\) among the 101 realizations at each Faraday depth is shown as the black horizontal line for \(\mathcal{F}_{full}\) and vertical lines showing two different calculations of \(\chi_{0}\) for \(\mathcal{F}_{nom}\), as described in the text.

Figure 7: Faraday spectra of noiseless simulations of tophat functions of various widths in Faraday depth, with a constant polarization angle. Left: \(\mathcal{F}_{full}\); Right: \(\mathcal{F}_{nom}\). At 9 rad m\({}^{-2}\) there is a suggestion of the tophat shape for \(\mathcal{F}_{full}\), but not as clearly for \(\mathcal{F}_{nom}\).

#### 4.1.1 Recovery of extended distributions

For continuous distributions of Faraday depth, the interference between components at different depths causes both the input power and the power recoverable in the Faraday spectrum to fall as a function of width. The width at which the power falls by a factor of two is defined here as \(W_{max}\), and its derivation is presented in the Appendix. To simultaneously have sufficient resolution to determine that the distribution has a finite width, and to have sufficient power to detect it, requires that \(\Phi<W_{max}\). Using Eqs. 5, A3 and, again, \(\rho=\frac{\lambda_{max}}{\lambda_{min}}\), this requires for \(\mathcal{F}_{nom}\):

\[0.18\ \frac{\rho^{4}-1}{\rho^{2}}>1, \tag{12}\]

or

\[\rho>6. \tag{13}\]

The equivalent conditions for \(\mathcal{F}_{full}\), with Eq. 9, are:

\[0.335\ \frac{(\rho^{2}+1)^{2}}{\rho^{2}}>1, \tag{14}\]

which is satisfied for all values of \(\rho\). Thus, using \(\mathcal{F}_{full}\), continuous extended distributions in Faraday depth can always be at least marginally resolved, e.g., as seen in Fig. 7. However, among the surveys listed in Table 1, using the \(\mathcal{F}_{nom}\) resolution, only SKA1-Low will be able to resolve detectable broad structures.

The details of the shapes recovered with \(\mathcal{F}_{full}\) depend on what variations are present in the polarization angle across the Faraday distribution.
In the simplest physical case of a spatially unresolved foreground patchy Faraday screen in front of a uniform polarization angle source, the polarization angles would be constant. However, there can be arbitrary changes in polarization angle as a function of depth for mixtures of thermal and synchrotron emitting material. ### Two Faraday Components _Key Findings: Two Faraday components can often be distinguished at separations less than \(\Phi_{nom}\), with improved detectability using \(\mathcal{F}_{full}\). In this regime, the separations, amplitudes and polarization angles are, however, not accurately recovered. At some separations, spurious emission at the mean depth reappears, but is much weaker in \(\mathcal{F}_{full}\) than in \(\mathcal{F}_{nom}\)._ We now turn to the second simple case, two separated Faraday components with equal amplitudes. We vary both the separation in depth between the components and the difference in their polarization angles. Again, since the science often requires us to maximize the amount of Faraday structure we can see, we test separations that are both smaller and larger than \(\Phi_{nom}\). A global view of the results in shown in Figure 12. At the bottom, where the depth separations are 0, the spectrum peaks at the mean Faraday depth of 60 rad m\({}^{-2}\), as expected. At large separations, at the top of the figure, the two individual components are easily visible, with respective depths that track the input depths, as expected. The behavior in between these two extremes is quite complex. We first look at the observed power at the mean depth, which should start at the sum of the amplitudes of the two components and then fall to Figure 11: Percentage residual signal as a function of input tophat width, integrated over Faraday spectrum after removal of single Faraday component, as described in the text, for both \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\). The vertical bar indicates a factor of 2. Figure 12: Faraday spectra along the horizontal axis, with increasing separations between two components with the same polarization angle at each higher row, this is similar to the display in Figure 8. The left column shows the results from \(\mathcal{F}_{nom}\), and the right column from \(\mathcal{F}_{full}\). Note the spurious power that sometimes appears at the mean depth.Such spurious components are the result of the interference that occurs in the presence of phase variations with Faraday depth, such as noted in Figure 3. Figure 10: Faraday spectra for tophat input distribution showing original and residual spectra after subtracting out a single Faraday component with the observed peak amplitude. Left: \(\mathcal{F}_{full}\); Right: \(\mathcal{F}_{nom}\). Figure 9: Recovered signal strength as a function of tophat width for \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\), as described in the text. zero as they are well separated. If the combined Faraday spectrum were simply the sum of the two input _amplitude_ spectra (which it's not), then this falloff would follow a Gaussian shape. Instead, since it is the complex spectra that are combined, there is interference between the two components which depends on their relative phase. This power at the mean depth was studied by Kumazaki et al. (2014), who called it a "false signal"; they showed that its strength was a function of the separation, relative phase and relative amplitudes of the two components, similar to the findings from these studies. 
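The interference responsible for this "false signal" is easy to reproduce numerically. The sketch below is an illustration only - it evaluates the _dirty_ spectrum of Eq. 1 with \(\lambda_{0}^{2}=0\) rather than a cleaned, restored spectrum, and uses an assumed channel set - and shows how the power recovered at the mean depth of two equal components depends on their separation and relative polarization angle.

```python
# Illustration only (assumed 49-channel sampling; this evaluates the *dirty*
# spectrum of Eq. 1 with lambda_0^2 = 0, not a cleaned/restored spectrum): the
# amplitude recovered at the mean depth of two equal components depends on
# their separation and relative polarization angle, because the components are
# summed as complex vectors.
import numpy as np

c = 299792458.0
lam2 = (c / np.linspace(890e6, 1681e6, 49))**2
K = 1.0 / len(lam2)

def dirty_amp_at(phi_eval, components, lam2):
    """|F(phi_eval)| for a list of (depth, amplitude, chi0) delta components."""
    P = np.zeros_like(lam2, dtype=complex)
    for depth, amp, chi0 in components:
        P += amp * np.exp(2j * (chi0 + depth * lam2))
    return abs(K * np.sum(P * np.exp(-2j * phi_eval * lam2)))

mean_depth = 60.0
for sep in (10, 20, 40, 60, 80):
    amps = [dirty_amp_at(mean_depth,
                         [(mean_depth - sep / 2.0, 1.0, 0.0),
                          (mean_depth + sep / 2.0, 1.0, np.deg2rad(dchi))], lam2)
            for dchi in range(0, 181, 15)]
    print(f"separation {sep:3d} rad m^-2 : |F(mean depth)| = "
          f"{min(amps):.2f} - {max(amps):.2f} over relative angles 0-180 deg")
```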
The additional information shown here is that the spurious power at the mean depth also depends on the restoring beam. After averaging over all relative phases between the two components, we show the observed spurious power at the mean Faraday depth in Figure 13. Our results are similar to those of Kumazaki et al. (2014), although we average all the power in the restoring beam centered at the mean depth, while they select only limited clean components. There is a region around \(\sim 40\) rad m\({}^{-2}\) where the two components interfere to produce power at the mean depth; this spurious power is much stronger in \(\mathcal{F}_{nom}\) than in \(\mathcal{F}_{full}\), and also extends over a much larger range in depth separation. Such interference arises in the presence of phase gradients as a function of Faraday depth, which can even arise in the cleaning process, such as noted for Figure 3.

In addition to the presence of spurious signals, there are also deviations in the observed parameters from the input parameters for separations up to scales of \(\approx\Phi_{nom}\). This can be seen most clearly in the non-monotonically increasing separations of the two components of the \(\mathcal{F}_{full}\) spectra of Figure 12; similar behavior is present in \(\mathcal{F}_{nom}\) although it is less obvious. These deviations from the input separations are also shown quantitatively in Figure 14, where the problems at separations less than \(\Phi_{nom}\) are clear. Similarly, the rms deviations in observed amplitude (and position angle) are much higher at small separations - \(\sim 33\%\) (and \(20^{\circ}\)) for separations \(<\Phi_{nom}\), dropping to \(\sim 10\%\) (and \(3^{\circ}\)) for separations \(>\Phi_{nom}\). The results are similar for both \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\).

The bottom line from all this is that two components with separations less than \(\Phi_{nom}\) are detectable some of the time, depending on their relative polarization angles, but their observed parameters are not trustworthy. Above \(\Phi_{nom}\), \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\) perform equally well. At separations comparable to \(\Phi_{nom}\), \(\mathcal{F}_{nom}\) shows considerably stronger spurious signals.

Figure 14: Deviation of the observed separation in Faraday depths of the peaks in the spectrum from the input value, as a function of the input separation. The black and red lines show the mean, averaged over all relative polarization angles between the two components, and the error bars indicate the rms scatter.

#### 4.2.1 Detectability of multiple Faraday components

We can again ask the simpler question most relevant to the analysis that would be performed in surveys, i.e. what is the detectability of Faraday structure as the separation between two components increases? We again subtract out a single component from the peak in the clean spectrum and measure the percentage of power remaining from \(-\Phi_{full}\) to \(+\Phi_{full}\) (i.e., two beam-widths), and the same for \(\mathcal{F}_{nom}\), using \(-\Phi_{nom}\) to \(+\Phi_{nom}\). The results are shown in Figure 15. At very small separations, the two components overlap and the expected residual is 0%, as observed for both restoring beams. At sufficiently large separations, we expect 50% of the power to remain after subtraction of a single component, as observed. Between these two extremes, \(\mathcal{F}_{full}\) shows a higher percentage of residual
Figure 13: “Spurious” power for \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\), observed at the mean Faraday depth as a function of the separation in depth between two components with amplitudes=50; the horizontal line shows the true signal strength that should be observed away from the central peak. Figure 15: Residual power remaining in the clean spectra, as a function of component separation, after removal of a single component, expressed as percentage of the original power. power, up to a factor of \(\sim\)2, thereby increasing the detectable range for additional Faraday structure. This result is the natural, and perhaps obvious, consequence of using a smaller restoring beam; nonetheless, this quantifies the advantages to that approach for separations from \(\approx 0.5-1.5\times\Phi_{nom}\). ## 5 Faraday mapping In the simplest case, only single Faraday components are present at each position in an image, and the spatial variations in Faraday depth \(\Delta\phi\) are small with respect to the resolution \(\Phi\) over scales comparable to the angular resolution (the "beam"). In this case, all of the information is present in maps of the peak Faraday depth in the spectrum for each pixel. Even in this idealized case, however, a series of two-dimensional maps at each Faraday depth (tomography images), are often useful to detect the spatial patterns. Two-dimensional images in Faraday depth vs. position space (\(\phi\) vs. \(x\)), where the orthogonal position has been fixed, provide another useful diagnostic. Both of these techniques are exploited, e.g, in the recovery of the 3D structure of 3C40B using MeerKAT observations (Rudnick et al., 2022). These mapping techniques become essential when the variations \(\Delta\phi\) are comparable to \(\Phi\) over scales of the beam; in this case, there is no longer a single Faraday depth at each position, and the Faraday spectra will be subject to the distortions examined earlier. In this section, we use simulations to compare how Faraday tomography mapping and depth vs. position mapping appear in \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\), and look at real Faraday structure data from MeerKAT. ### Simple Faraday Depth gradients _Key findings: Faraday tomography maps are_ **spatially** _broadened by the effects of finite Faraday resolution in the case of spatial gradients in Faraday depth._ We start with a simple cartoon to illustrate how, in the presence of Faraday depth variations, broadening of the Faraday spectrum leads to broadening in the plane of the sky. Figure 16 presents the amplitude of \(\mathcal{F}(\phi)\) as a function of a single position coordinate. The Faraday spectrum is a simple delta function, whose depth changes linearly with position. The small bar in the upper right shows the "beam" in this depth vs. position space. One can examine the spatial distribution at a given Faraday depth by taking a 1-D slice at that depth as a function of position (the horizontal magenta line in Fig. 16 - this is the equivalent of a tomography plane in 2-D). We then measure the spatial width between the half-power points (the vertical cyan lines). On the left, the width of the feature at \(\phi=0\) rad m\({}^{-2}\) is \(5^{\prime\prime}\), as set by the spatial beam size. On the right, the Faraday spectrum has been broadened to 10 rad m\({}^{-2}\). 
Despite the spatial beam still remaining at \(5^{\prime\prime}\), as can be seen in the beam in the upper right of the frame, the _observed_ width at \(\phi=0\) rad m\({}^{-2}\) is now \(\approx\)20\({}^{\prime\prime}\). This spatial broadening occurs because the tomography cut (plane) at a depth of 0 rad m\({}^{-2}\) is actually measuring the emissions from \(\pm\) 5 rad m\({}^{-2}\), which come from different positions. This broadening therefore blends and masks features in tomography images, compromising the hard-won spatial resolution set by the telescope.

Since the Faraday beam broadening is convolved with the original spatial beam, the effective size of the spatial beam in a tomography image, designated here as \(\theta_{t}\), can be approximated as

\[\theta_{t}\approx\sqrt{\theta_{0}^{2}+\left(\frac{dx}{d\phi}\Phi\right)^{2}} \tag{15}\]

where \(\theta_{0}\) is the spatial beam size, \(x\) is the position coordinate, \(\frac{dx}{d\phi}\) is the local spatial gradient in Faraday depth and \(\Phi\) is the Faraday restoring beamwidth. In the above case, \(\frac{dx}{d\phi}\) is \(2^{\prime\prime}\) per rad m\({}^{-2}\) and \(\Phi=10\) rad m\({}^{-2}\), so \(\theta_{t}\approx 20.6^{\prime\prime}\), as observed. If the Faraday depth variations are not a simple gradient, then the spatial effects due to Faraday depth variations may not be easily recognizable, as we will see with actual data, below.

### Faraday Variation Grid

_Key findings: In Faraday tomography mapping, spurious spatial structures appear when multiple Faraday depths are present within the spatial beam. Use of the narrower restoring beam \(\Phi_{full}\) allows a larger range of Faraday widths to be free of such spurious structures._

The above cartoon, Figure 16, illustrates the overall broadening effect, but, for simplicity, assumes that the broadening took place in the Faraday _amplitude_ spectrum. More accurately, the broadening occurs in the complex Faraday space, and so the resulting patterns are more complicated. As a simple illustration of what happens in the map plane when there is mixing of different Faraday components, we create cubes in (x, y, frequency) space, where every pixel has a single Faraday depth. The top panel in Figure 17 shows the Faraday depth at each position. Along each row, the Faraday depth changes sinusoidally between the values of -20 and +40 rad m\({}^{-2}\). The spatial wavelength increases from the top row to the bottom row so that there are \(\sim 8\) full cycles on the top row, and zero cycles on the bottom row. We then smoothed the values of Q and U along the X-axis by a Gaussian with 3 pixel FWHM. Each smoothed pixel thus has contributions from a range of Faraday depths. Along the bottom row all of the pixels have a depth of 20 rad m\({}^{-2}\), so the smoothed Q and U have only that single Faraday depth. Along the top row, the range of Faraday depths in each pixel varies from 6 rad m\({}^{-2}\) to 36 rad m\({}^{-2}\). The maximum Faraday width in each row is given along the Y-axis.

This simulated Faraday grid is equivalent to having \(\sim\)3 dominant Faraday components in each pixel. Its effects differ in detail from the idealized continuous distributions and two-component cases discussed above. Nonetheless, it gives a simple overview of how \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\) behave in Faraday tomography images with variations in the amount of Faraday structure.
Figure 16: Cartoon showing \(\delta\)-function Faraday spectra with depth as a function of position on the sky. In both panels, the spatial resolution \(\theta=5^{\prime\prime}\). Left: Faraday resolution \(\Phi=1\) rad m\({}^{-2}\); Right: \(\Phi=10\) rad m\({}^{-2}\). The vertical lines show the observed spatial extent of the emission in the \(\phi=0\) Faraday tomography (1-D) image. The 2D beam is shown in the upper right portion of each panel.

The middle and bottom panels show a single tomography image plane at the Faraday depth of 20 rad m\({}^{-2}\), for \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\), respectively, cleaned and restored as with our earlier experiments. The black lines with white borders show the locations of the input Faraday depth of 20 rad m\({}^{-2}\), for the middle cycle of the sine waves. In an ideal world, the tomography images at 20 rad m\({}^{-2}\) would simply trace the black/white lines; they do not.

The first finding is that the observed bands in the tomography images are much broader than the 3-pixel smoothing beam, the same width as the black/white line. That broadening, present in all rows, is because the 20 rad m\({}^{-2}\) image actually samples emission from \(\sim\)12 - 28 rad m\({}^{-2}\) (\(\sim\)2 - 42 rad m\({}^{-2}\)) for \(\mathcal{F}_{full}\) (\(\mathcal{F}_{nom}\)), respectively. In the row where the Faraday mixing width is \(\sim\)10 rad m\({}^{-2}\), the broadening causes the band in the \(\mathcal{F}_{nom}\) (\(\mathcal{F}_{full}\)) image to be 16 (8) pixels wide, respectively, instead of the spatial beamwidth of 3 pixels. This is another example of the broadening described by Equation 15.

Another consequence of the broadening is that the ratio between the brightest and the faintest features is reduced in the tomography image. This is because the broadening is a function of position on the image, since the local Faraday depth gradients vary. At a Faraday mixing width of 10 rad m\({}^{-2}\), the ratio of brightest/faintest is 5.2 (575) for \(\mathcal{F}_{nom}\) (\(\mathcal{F}_{full}\)), respectively. For mapping purposes, \(\mathcal{F}_{full}\) is clearly superior.

In some ways, these broadening effects are similar to what happens in total intensity images. Based on our sampling of the intensity restoring beam, adjacent pixels are not independent, but represent the emission at the exact location of that pixel as well as emission occurring at adjacent pixels. This is universally understood in the radio astronomy community, so there is no confusion. However, in tomography images, what appears in any pixel in a tomography plane also reflects the emission in the planes between \(\pm\Phi/2\), where \(\Phi\) is the Faraday beam. If one were examining all the planes together, as in a cube or a movie display, the origins of the broad structures would be apparent. In a single tomography plane, there is no way to distinguish between intrinsically broad spatial structures and observed spatial extent due to the Faraday broadening.

The situation with this Faraday/spatial broadening is actually made more complicated, however, by the fact that the blending takes place in the vector space of Q and U, so that both constructive and destructive interference can appear. This results in spurious structures in the Faraday spectra, as described in the previous two sections, mapping into spurious structures in the Faraday tomography image planes.
The spurious structures become apparent by noting that the observed bands in the 20 rad m\({}^{-2}\) tomography images track the 20 rad m\({}^{-2}\) input locations, but only when the mixing widths are sufficiently small. As the widths increase, increasing amounts of power are found between the locations of the 20 rad m\({}^{-2}\) inputs, until finally all the power is at the spurious locations of the 0 and 40 rad m\({}^{-2}\) inputs. Equal or greater power at the spurious locations is found for widths above 20 rad m\({}^{-2}\) (30 rad m\({}^{-2}\)) for \(\mathcal{F}_{nom}\) (\(\mathcal{F}_{full}\)), respectively. Thus, there is a considerably larger range of Faraday widths where \(\mathcal{F}_{full}\) is free of spurious structures.

Thus, in real maps, when significant Faraday variations occur within a spatial beam, spurious structures can appear in the tomography images. Since Faraday beam depolarization is almost ubiquitous, the potential for spurious structures in tomography images is very high. These spurious structures do not correspond to any real structure at the respective Faraday depth. In addition, as seen earlier, the breadth of the Faraday beam will also cause structure to appear at Faraday depths where it is not present. While this is true for \(\mathcal{F}_{full}\) as well, it is several times worse for \(\mathcal{F}_{nom}\).

### MeerKAT polarization mapping

_Key findings: In the presence of complicated Faraday structure in both depth and the plane of the sky, images of the peak depth of the Faraday spectrum are almost identical using \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\). However, the broader \(\mathcal{F}_{nom}\) beam blurs out detailed Faraday variations that are seen clearly in \(\mathcal{F}_{full}\). In addition, \(\mathcal{F}_{nom}\) produces spatial structures in Faraday tomography planes where \(\mathcal{F}_{full}\) shows there is no emission._

MeerKAT observations of the cluster of galaxies J0627.2-5428 (Abell 3395) were reported by Knowles et al. (2021). More detailed analysis of this field, and comparison with X-ray data from eROSITA, were presented by Reiprich et al. (2021) and Brüggen et al. (2021). The observations consisted of approximately 9 hours duration, including calibration, at L band (856-1712 MHz). Calibration was described in Knowles et al. (2021). The data were imaged in Obit/MFImage with 0.3% fractional bandwidth (123 spectral channels) using joint Q/U deconvolution. The beam size was \(6.8^{\prime\prime}\times 6.37^{\prime\prime}\) at an angle of 88\({}^{\circ}\), and the pixel size was 1.194\({}^{\prime\prime}\). A Faraday spectrum cube was generated using RMSyn with 2 rad m\({}^{-2}\) sampling between -600 and +600 rad m\({}^{-2}\) and deconvolved/restored as described above for both \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\). Following the standard analysis, we created 2D images by finding the depth and amplitude of the peak in the Faraday amplitude spectrum for each pixel. The rms scatter in the peak amplitude image was \(\sim\)2.5\(\mu\)Jy for both \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\). Figure 18 shows the Faraday depths at the peak in the Faraday spectrum (\(\phi_{peak}\)) for each pixel, wherever the signal:noise was \(>14\).

Figure 17: Top: Faraday depth in plane of sky. Middle: Single tomography plane at depth of 20 rad m\({}^{-2}\) from \(\mathcal{F}_{full}\), intensity in heat, each row normalized to the same average. Bottom: Same for \(\mathcal{F}_{nom}\).
The X and Y coordinates are positions in the sky; the magnitude of Faraday depth variations changes with Y, as described in the text. The black lines with white borders shows the location of input depths of 20 rad m\({}^{-2}\), for the first two sinusoidal patterns The results for \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\) were virtually identical, as seen in this figure and in Table 2; the small differences come largely from the handful of isolated pixels due to enhanced noise around the bright sources. Sources S1 and S3 show much larger scatters in \(\phi_{peak}\) than in S2. This is consistent with S1 and S3 being embedded or behind the broad band of X-ray emission that connects them in projection; there is, however, no independent confirmation available. On larger scales, the mean RM in this region 5 is influenced by cluster sources themselves, so the Galactic foreground is not well constrained. Footnote 5: From the CIRADA RM cutout server, [http://cutouts.cirada.ca/rmcutoutout/](http://cutouts.cirada.ca/rmcutoutout/) In Figure 19 we show one example of how a broader beam can create "spurious" structures in Faraday tomography images, similar to some of those seen in the simulations discussed earlier. Structure at the ellipse appears in the \(\mathcal{F}_{nom}\) 42 rad m\({}^{-2}\) tomography image, but not in the \(\mathcal{F}_{full}\) image. The spectrum at this location, in the top right of the figure, shows why. At this location, there is a bright peak in the Faraday spectrum at \(\phi\)=21 rad m\({}^{-2}\). At \(\phi\)=42 rad m\({}^{-2}\) there is still power in \(\mathcal{F}_{nom}\) from the \(\phi\)=21 rad m\({}^{-2}\) peak, so the tomography map shows a bright patch there. However, in \(\mathcal{F}_{full}\), the emission has dropped to near 0 by \(\phi=\)42 rad m\({}^{-2}\), so there is no feature at the ellipse. If one were viewing the full Faraday cube, it would be obvious that this 42 rad m\({}^{-2}\) emission comes from a different depth. One could avoid this problem by only sampling the Faraday tomography images 3\(\times\) more sparsely. However, and this is the key issue, this comes at the expense of losing information about the spatial/spectral Faraday structure that is present in the cube. The use of smaller restoring beams, as in \(\mathcal{F}_{full}\), does not remove the problem of the smearing of structures from one depth map to another, it simply reduces the range of depths over which this is a problem. In all cases, claims of emission at a specific Faraday depth require examination of the full cube. An example of lost information using \(\mathcal{F}_{nom}\) can be seen in the southern lobe of 3C40B, which is analysed in Rudnick et al. (2022), using similar procedures to Abell 3395, as discussed above.. There, using \begin{table} \begin{tabular}{c c c c c} \hline \hline Source & \multicolumn{2}{c}{\(\mathcal{F}_{nom}\)} & \multicolumn{2}{c}{\(\mathcal{F}_{full}\)} \\ \hline & \((\phi_{pk})\) & \(\sigma_{\phi}\) & \((\phi_{pk})\) & \(\sigma_{\phi}\) \\ & rad m\({}^{-2}\) & rad m\({}^{-2}\) & rad m\({}^{-2}\) & rad m\({}^{-2}\) \\ \hline S1 & 25 & 84 & 29 & 88 \\ S2 & 56 & 9 & 50 & 4 \\ S3 & 43 & 57 & 39 & 55 \\ \hline \end{tabular} \end{table} Table 2: Peak Faraday depths in Abell 3395 Figure 19: North end of S3. Top left: Peak Faraday depth; the depth at the ellipse is 21 rad m\({}^{-2}\); Top right: \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\) spectra at the position of the ellipse. Vertical line indicates depth of 42 rad m\({}^{-2}\). 
Bottom left: \(\mathcal{F}_{full}\) tomography plane at 42 rad m\({}^{-2}\). No emission is seen in the ellipse, as expected from spectrum. Bottom right: \(\mathcal{F}_{nom}\) tomography plane at 42 rad m\({}^{-2}\). Emission seen at ellipse from emission peaking at 21 rad m\({}^{-2}\). The peak visible in the \(\mathcal{F}_{full}\) spectrum at -15 rad m\({}^{-2}\) comes from a strong component at that depth just to the east of the ellipse and not visible here; a small amount extends into the ellipse for \(\mathcal{F}_{full}\), although not for \(\mathcal{F}_{nom}\), reflecting the slightly different interference in the two cases. Figure 18: Faraday depth at the peak amplitude in the northern section of Abell 3395, showing all pixels with brightness \(>35\)\(\mu\)Jy/beam. The results are for \(\mathcal{F}_{nom}\) (top) and \(\mathcal{F}_{full}\) (bottom). The underlying heat image shows the X-ray brightness from eROSITA (Reiprich et al., 2021; Brüggen et al., 2021), with this version courtesy of Angie Veronica. \(\mathcal{F}_{full}\), movies of the Faraday cube show that the lobe is comprised of long thin coherent structures at different Faraday depths, which likely indicate different distances along the line of sight, allowing us to interpret the 3D structure. Several views of this lobe are shown in Figure 20. On the top left is the image of the peak depth at each pixel; this is the standard way depth (RM) maps are shown, and it obscures all of the detail in the lobe. Such a display is most useful when the variations in Faraday depth are dominated by a foreground screen, and the details of the lobe are irrelevant. On the top right is the same image of peak depth in color, where the brightness shows the intensity of the amplitude at the peak depth (polarized intensity). The various structures are immediately visible, because in this case, the depth is connected to the lobe structures themselves, as can be clearly seen in the movies in Rudnick et al. (2022). It is important to realize that this type of display could be misleading if, in fact, the depth variations were due to a foreground screen. Examination of the full Faraday cubes is essential to separate foreground from local effects. Three \(\mathcal{F}_{full}\) tomography planes, from the low Faraday depth part of the distribution are shown in the bottom left. Most of the same structures in \(\mathcal{F}_{full}\) are also visible in a smoothed version of the Faraday cube, in the bottom right, showing what would be observed at \(\mathcal{F}_{nom}\) resolution. The color scale is the same for \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\). However, the colors are heavily muted in \(\mathcal{F}_{nom}\) because each tomography plane is sampling a broader range in Faraday depth. In addition, features in the NW part of the lobe are visible in \(\mathcal{F}_{nom}\), although \(\mathcal{F}_{full}\) shows that they are not actually present at the depths displayed. As before, there is no difference in the information available from \(\mathcal{F}_{nom}\) and \(\mathcal{F}_{full}\), if one examines the full cubes. only that the smaller restoring beam in \(\mathcal{F}_{full}\) allows things to be seen more clearly. The three tomography planes in the bottom left panel are a subset of those used to make the \(\mathcal{F}_{full}\) movie of the full 3D structure of this lobe in Rudnick et al. (2022). The 3D structure is heavily blurred with \(\mathcal{F}_{nom}\) resolution, as illustrated in a single Declination vs. 
Faraday depth plane, at fixed R.A., in Figure 21. Note that since there is a single dominant Faraday depth at each position, the uncertainty in depth is identical for \(\mathcal{F}_{full}\) and \(\mathcal{F}_{nom}\). However, the changes in depth with Declination are much clearer in \(\mathcal{F}_{full}\), simply because of the smaller restoring beam. This highlights a problem with how we display 2D information, whether here in depth vs. position space, or in a regular position-position image. Small shifts in the centroid position may be quite significant when the signal:noise is high, but this will be obscured by using the same restoring beam as when the accuracy is lower. This was the motivation behind the "maximum-entropy" technique introduced by Wernecke & D'Addario (1977), although it is currently not being used for interferometry images. It also motivates "adaptive smoothing," commonly used in X-ray imaging, as introduced by Böhringer et al. (1994).

Figure 21: Polarized intensity from 3C40B’s southern lobe, showing the Faraday depth distribution at each Declination, at the fixed Right Ascension of 01h25m47s. Left: \(\mathcal{F}_{full}\) resolution. Right: smoothed to \(\mathcal{F}_{nom}\) resolution.

Figure 20: Faraday structure in the southern lobe of 3C40B. Top left: Peak Faraday depth. Top right: Peak depth color coded, as in top left panel, brightness corresponding to amplitude at peak depth. Bottom left: Partial structure as seen in Faraday tomography images at 2 (10, 18) rad m\({}^{-2}\) in red (green, blue), for \(\mathcal{F}_{full}\). Bottom right: same as left, but smoothed to \(\mathcal{F}_{nom}\) resolution.

## 6 Discussion

Faraday depth variations in extended sources carry information about the magnetized thermal plasmas in foreground screens, and in the medium local to the synchrotron source, including regions where the thermal and relativistic plasmas are mixed on macroscopic scales. There are various "figures of merit" for such studies, including the basic interferometer properties of sensitivity and angular resolution. For the Faraday emission itself, there are two additional parameters of importance: the resolution in Faraday depth (\(\Phi\)) and the maximum detectable breadth in Faraday depth, \(W_{max}\), both of which are set by the coverage in wavelength.6

Footnote 6: The third parameter of interest, the largest detectable Faraday depth, depends on the bandwidth of individual channels, and is often configurable in the backend receiver systems.

As we have shown in this paper, the commonly used Faraday synthesis procedures do not exploit the full information available in the complex Faraday spectrum; our goal was to explore what additional measurements are available. For both the commonly used Faraday synthesis procedure, using \(\lambda_{0}^{2}=\langle\lambda\rangle^{2}\approx\langle\lambda^{2}\rangle\), and our "full" synthesis, using \(\lambda_{0}^{2}=0\), the clean components are identical; the essential difference between the two is the use of a narrower restoring beam for \(\mathcal{F}_{full}\), corresponding to the width of the peak in the real component of the spectrum. To focus on the role of the restoring beams, we summarize what we've learned in terms of the two different beam widths, \(\Phi_{nom}\) and \(\Phi_{full}\). In the case of the MeerKAT L-band system, \(\Phi_{nom}\approx 3\times\Phi_{full}\), with the corresponding values for other surveys summarized in Table 1. There are a number of important lessons learned from these experiments.
The most relevant figures for each are given in brackets.

- In the idealized case of a single Faraday depth in each pixel, both \(\Phi_{full}\) (\(\approx\frac{2}{\lambda_{max}^{2}+\lambda_{min}^{2}}\)) and \(\Phi_{nom}\) (\(\approx\frac{2.8}{\lambda_{max}^{2}-\lambda_{min}^{2}}\)) give the same results in terms of derived values and their accuracy [Figure 6].
- There is a bias in the recovered amplitudes due to cleaning, which is comparable to the rms in the Faraday spectra. This needs to be simulated and corrections applied in each individual use of Faraday clean [Figure 6].
- In mapping applications, the use of \(\Phi_{full}\) reduces spatial smearing in tomography images [Figures 16, 21] and provides distinct advantages for significant regions of parameter space, _viz.,_ the structure on scales between \(\Phi_{full}\) and \(\Phi_{nom}\), in a) tracing of spatial patterns related to Faraday depth [Figure 20], b) the detection of Faraday complexity [Figures 10, 15], and c) the isolation of structures to their proper Faraday tomography image [Figure 18].
- "Spurious features", i.e., peak emission in the Faraday spectrum where no true power is present, can arise from interference between components in the complex Fourier space [Figure 12]; the problem is significantly worse for \(\Phi_{nom}\) than it is for \(\Phi_{full}\) [Figure 13].
- Although Faraday complexity can be detected on scales \(<\Phi_{nom}\), the detectability is a function of the phases of the underlying components, and the details of the recovered structures are not accurate in this range [Figure 14].
- We introduce the quantity \(W_{max}=0.67(\lambda_{max}^{-2}+\lambda_{min}^{-2})\) rad m\({}^{-2}\), which represents the extent of a continuous distribution in Faraday depth beyond which the power in the Faraday spectrum drops by over a factor of 2 [Figure A2]. Based on their respective values of \(\frac{\lambda_{max}}{\lambda_{min}}\), most current surveys will not be able simultaneously to resolve continuous spectra and to have the sensitivity to detect them [Table 1].

Mapping applications at \(\Phi_{full}\) resolution, instead of \(\Phi_{nom}\), offer perhaps the greatest potential. In the case where all variations in Faraday depth are due to patchy foreground screens, all of the useful information is found in the images of peak amplitude Faraday depth, supplemented by fractional polarization or depolarization information. However, as has been shown by de Gasperin et al. (2022) (their Fig. 16), there are coherent patterns in Faraday depth linked directly to total intensity structures in the northern relic of Abell 3667. These imply a Faraday medium _local_ to the synchrotron source, and thus enable the study of the 3D structures and the relationship between the thermal and relativistic plasmas. Even more dramatic examples are shown by Rudnick et al. (2022), where the 3D structures of radio filaments and lobes are shown through tomography maps and movies. As we showed in Figures 20 and 21, the improved _spatial_ resolution of \(\Phi_{full}\), in the presence of Faraday variations, is critical to understanding the underlying structures. Conversely, the standard way that Faraday results are shown in the literature is through maps of the Faraday depth at the peak amplitude. _Even in the case of simple Faraday spectra_, with one component along each line of sight, the Faraday structures are not easily visible (see upper left panel of Fig. 20 and the left panel of Fig.
16 in de Gasperin et al. (2022)), unless a narrower restoring beam is used. Just as investigations are commonly enriched by the highest possible spatial resolution data, Faraday studies will become more powerful with better resolution in Faraday depth space. Beyond this general argument, we can ask whether actual sources will have Faraday complexity in the newly accessible regimes. In Figure 11, we showed that the residual power, indicating complexity, was approximately 2\(\times\) higher in \(\Phi_{full}\) for continuous distributions with Faraday widths in the range 6 - 27 rad m\({}^{-2}\). In comparison, the values of \(\sigma_{RM}\), derived by Osinga et al. (2022) based on the depolarization of 819 sources, fell into this range 70% of the time, with a median value of 10 rad m\({}^{-2}\).

In addition to the \(\Phi_{full}\) reduction of spatial broadening from Faraday structure and its ability to identify Faraday complexity, it also provides a significant improvement in the elimination of "spurious" features. Such features represent the appearance of power in the Faraday spectrum at depths where no true power is present. Instances of "spurious" power can be seen in Figure 12, the simulation with two Faraday components. Looking at the second row, e.g., we see that near zero separation (the bottom of the panel), the spectral power peaks at the middle Faraday depth, as expected. As the separation increases, the twin peaks become dominant, again as expected. However, as the separation increases further, there again appears power at the central depth, where none actually exists. The separation at which this spurious power appears is a function of the relative phase of the two components. Figure 13 further shows that this spurious power is considerably stronger for \(\Phi_{nom}\), as opposed to \(\Phi_{full}\). Figure 17 shows spurious features where there is strong mixing within each beam; substantial power is seen at the depth of 20 rad m\({}^{-2}\) where none is actually present (between the black lines). The existence of these spurious features can/will confuse our interpretation of both spectra and tomography maps, and in some cases, even maps of the peak amplitude Faraday depth.

We also found that our sensitivity to continuous distributions of Faraday depth fell off at much smaller widths than expected, and we introduce a new variable, \(W_{max}\), to characterize the width at which the recovered power falls to half of the value it would have for the same input power, but a narrower width. If the underlying emission has variations in polarization angle, or if the Faraday and synchrotron media are intermixed, then the maximum width will be reduced still further. With a ratio of \(\frac{\lambda_{max}}{\lambda_{min}}\sim 2\), we were unable to detect an input "tophat"-like structure with \(\Phi_{nom}\), and only marginally with \(\Phi_{full}\). In our survey compilation (Table 1), only SKA-low offers the potential to properly resolve such structures.

All of these findings suggest a cautionary approach to our interpretation of Faraday data. Where quantitative results are necessary, it will likely be necessary to perform "forward-modelling," i.e., to assume a series of underlying models, propagate them through the observing and analysis setup, and see which of the results are consistent with the observations. Complex observed Faraday spectra provide exceptional challenges in this regard.
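As a concrete illustration of such forward modelling, the following is a minimal numerical sketch of our own (not code from this paper). It assumes a MeerKAT-like L-band frequency grid and an arbitrary two-component Faraday-thin model, propagates the model through the \(\lambda^{2}\) sampling, and evaluates the dirty Faraday spectrum for the two choices of \(\lambda_{0}^{2}\) discussed above; the figure-of-merit coefficients are those quoted in this section, and the band edges, channel count, and component parameters are illustrative assumptions only.

```python
import numpy as np

# Assumed observing setup: a MeerKAT-like L band (band edges and channel
# count are illustrative assumptions, not the exact setup used here).
c = 299_792_458.0                              # m/s
freq = np.linspace(900e6, 1670e6, 512)         # Hz
lam2 = (c / freq) ** 2                         # lambda^2 [m^2]
l2min, l2max = lam2.min(), lam2.max()

# Figures of merit, using the coefficients quoted in this section.
phi_nom = 2.8 / (l2max - l2min)                # nominal Faraday resolution [rad m^-2]
phi_full = 2.0 / (l2max + l2min)               # "full" resolution (lambda_0^2 = 0)
w_max = 0.67 * (1.0 / l2min + 1.0 / l2max)     # maximum detectable Faraday breadth
print(f"Phi_nom ~ {phi_nom:.1f}, Phi_full ~ {phi_full:.1f}, W_max ~ {w_max:.1f} rad m^-2")

# Forward model: two Faraday-thin components with arbitrary depths,
# amplitudes, and intrinsic polarization angles.
depths = np.array([0.0, 21.0])                 # rad m^-2
amps = np.array([1.0, 0.6])
chi0 = np.array([0.0, np.pi / 4])              # intrinsic angles [rad]
P = (amps * np.exp(2j * (chi0 + np.outer(lam2, depths)))).sum(axis=1)

# Dirty Faraday spectrum F(phi) for the two choices of lambda_0^2.
phi_axis = np.arange(-100.0, 100.0, 0.5)
def faraday_spectrum(P_lam2, lam2_0):
    kernel = np.exp(-2j * np.outer(phi_axis, lam2 - lam2_0))
    return kernel @ P_lam2 / lam2.size

F_full = faraday_spectrum(P, 0.0)              # lambda_0^2 = 0 ("full")
F_nom = faraday_spectrum(P, lam2.mean())       # lambda_0^2 = <lambda^2> (conventional)

# |F| is identical for the two choices; what differs is the real/imaginary
# structure, and hence the restoring beam one would adopt after cleaning.
assert np.allclose(np.abs(F_full), np.abs(F_nom))
print(f"peak of |F| at {phi_axis[np.argmax(np.abs(F_full))]:.1f} rad m^-2")
```

Scanning over candidate models and comparing the resulting spectra to the observed ones is the forward-modelling loop described above.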
In source S1 in Abell 3395 shown above, some of the locations had spectra with multiple peaks, while others had simple single-peaked spectra. Whether the multiple peaks were due to distinct components within a single spatial beam, broad Faraday distributions beyond \(W_{max}\), or were spurious interference features would require extensive additional data at different wavelengths.

There are also a number of other innovative techniques being developed for better deriving information from wideband observations. Note that early attempts, such as fitting the Q,U spectra directly (Farnsworth et al., 2011; O'Sullivan et al., 2012), do allow use of the full resolution, but at the expense of requiring prior knowledge of the form of the Faraday spectrum (e.g., the number of Faraday thin components). Recently, Pratley et al. (2021) have introduced a nonparametric method for Q,U fitting that circumvents this difficulty and can reconstruct more complex spectra. Ndiritu et al. (2021) use Gaussian Process Modeling, which reduces sidelobe problems from gaps in wavelength coverage and utilizes the full-resolution complex spectral information. They demonstrated equivalent performance, e.g., to the Q,U fitting methods, but without needing the prior knowledge. Cooray et al. (2021) use an iterative reconstruction algorithm which preserves the full resolution available. With their simulated band spanning 300 MHz - 3000 MHz (which would require combining all three SKA1 Mid bands, e.g.), they achieve a factor of \(\sim\)2 in effective resolution (see their Figure 2), as expected here using Equations 5 and 9. Their reconstructions are also facilitated by the very high ratio of \(\frac{\lambda_{max}}{\lambda_{min}}=10\).

## 7 Concluding remarks and future work

Through simulations and examinations of real data, we have learned important lessons about the intrinsic reliability of Faraday spectra to recover the true underlying Faraday structure. These lessons, summarized in Section 6, have important implications for our design of polarization experiments, for the interpretation of spectra, and for our use of the powerful Faraday tomography techniques. For multiple applications, we have found that these problems are reduced, and diagnostic power is increased, by using the "full" Faraday resolution. It is therefore important that \(\mathcal{F}_{\mathit{full}}\) spectra are routinely used in the pipelines of polarization surveys, and in individual investigations. At this stage of our knowledge, it would be prudent to produce these in parallel with \(\mathcal{F}_{\mathit{nom}}\) spectra and imaging, to improve our understanding of their reliability. Since this involves changing only the restoring beam in the deconvolution process, it is trivial to implement.

A variety of investigations should be done to extend the initial work presented here. In particular, the processing pipeline for the POSSUM survey (van Eck et al., in preparation) uses the quantity \(\sigma_{\mathit{add}}\) as a measure of Faraday complexity. \(\sigma_{\mathit{add}}\) is the amount of additional power, measured by a maximum likelihood scheme, needed to explain the residual fluctuations in Q(\(\lambda^{2}\)), U(\(\lambda^{2}\)) after subtraction of the best-fit \(\delta\)-function Faraday component. It would be extremely useful to compare the detectability of complexity using the residuals in the main lobe of \(\mathcal{F}_{\mathit{full}}\), as presented in Figure 11, with that obtained using \(\sigma_{\mathit{add}}\), for a variety of simulated cases.
Further work is important to understand whether the advantages of using \(\Phi_{\mathit{full}}\), as shown here, also apply to other surveys, especially those with a higher ratio of \(\frac{\Phi_{\mathit{nom}}}{\Phi_{\mathit{full}}}\), such as the POSSUM surveys. It is possible that the phase instabilities which motivated **BdB** to adopt \(\mathcal{F}_{\mathit{nom}}\) will reappear when this ratio is significantly higher than the value of 2.8 as studied here. We have also done some very simple experiments to examine how the performance changes as a function of the number of Faraday depths sampled across \(\Phi_{\mathit{full}}\). Our tentative results are that performance is not significantly affected as long as there are at least four samples across the beamwidth. This deserves more thorough study. Additional experiments probing the influence of the sidelobe magnitude on stability and the generation of spurious features would also be of great value, and could influence how gaps in coverage are treated, whether tapering/weighting in \(\lambda^{2}\) space is of use, etc. All of this assumes, in addition, that spectral dependencies have been removed; when there are multiple components within a beam, that may not be possible, and the effects on the Faraday reconstruction must be understood. It would be quite useful to explore the use of variable-width restoring beams for features at different S:N levels, similar to the adaptive smoothing commonly used in X-ray imaging. Finally, well-designed direct comparisons of the other Faraday reconstruction techniques mentioned above, with fiducial models and realistic observing parameters (including wavelength gaps, noise variations across the band, etc.), and real data, are important and timely. Such comparisons may show that different methods are more practical/effective for different types of sources, or for surveys as opposed to individual source studies. We also note that although \(\Phi_{\mathit{full}}\) is a "natural" choice in some sense, not depending on some arbitrary choice of parameters, there is nothing in principle that would prevent using even smaller restoring beams. We have not explored the associated advantages and problems here. Given the enormous investments being made in polarization surveys, all of these types of investigative work will provide substantial returns in scientific productivity.

## Acknowledgment

We thank Sasha Plavin, Anna Scaife, George Heald, Cameron van Eck, Shane O'Sullivan, Miguel Carmano, Shinsuke Ideguchi, and Yoshimitsu Miyashita for helpful comments. The anonymous referee provided critical feedback that allowed us to determine that the restoring beam was the key to the differences in results between the two methods, pointed us to the clean bias explanation for reduced recovered amplitudes, and provided a number of other useful suggestions for improving the paper. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. W.C. acknowledges support from the National Radio Astronomy Observatory, which is a facility of the U.S. National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

## Data availability

The software utilized in the paper is available at [http://www.cv.nrao.edu/~bcotton/Obit.html](http://www.cv.nrao.edu/~bcotton/Obit.html). FITS files of the various experiments will be made available upon reasonable request.
MGCLS products are publicly available ([https://doi.org/10.48479/7epd-w356](https://doi.org/10.48479/7epd-w356)).
2307.03166
VideoGLUE: Video General Understanding Evaluation of Foundation Models
We evaluate existing foundation models' video understanding capabilities using a carefully designed experiment protocol consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the community, and four adaptation methods tailoring a foundation model (FM) for a downstream task. Moreover, we propose a scalar VideoGLUE score (VGS) to measure an FM's efficacy and efficiency when adapting to general video understanding tasks. Our main findings are as follows. First, task-specialized models significantly outperform the six FMs studied in this work, in sharp contrast to what FMs have achieved in natural language and image understanding. Second, video-native FMs, whose pretraining data contains the video modality, are generally better than image-native FMs in classifying motion-rich videos, localizing actions in time, and understanding a video of more than one action. Third, the video-native FMs can perform well on video tasks under light adaptations to downstream tasks (e.g., freezing the FM backbones), while image-native FMs win in full end-to-end finetuning. The first two observations reveal the need and tremendous opportunities to conduct research on video-focused FMs, and the last confirms that both tasks and adaptation methods matter when it comes to the evaluation of FMs. Our code is released under: https://github.com/tensorflow/models/tree/master/official/projects/videoglue.
Liangzhe Yuan, Nitesh Bharadwaj Gundavarapu, Long Zhao, Hao Zhou, Yin Cui, Lu Jiang, Xuan Yang, Menglin Jia, Tobias Weyand, Luke Friedman, Mikhail Sirotenko, Huisheng Wang, Florian Schroff, Hartwig Adam, Ming-Hsuan Yang, Ting Liu, Boqing Gong
2023-07-06T17:47:52Z
http://arxiv.org/abs/2307.03166v2
# VideoGLUE: Video General Understanding Evaluation of Foundation Models

###### Abstract

We evaluate existing foundation models' video understanding capabilities using a carefully designed experiment protocol consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the community, and four adaptation methods tailoring a foundation model (FM) for a downstream task. Moreover, we propose a scalar VideoGLUE score (_VGS_) to measure an FM's efficacy and efficiency when adapting to general video understanding tasks. Our main findings are as follows. First, task-specialized models significantly outperform the six FMs studied in this work, in sharp contrast to what FMs have achieved in natural language and image understanding. Second, video-native FMs, whose pretraining data contains the video modality, are generally better than image-native FMs in classifying motion-rich videos, localizing actions in time, and understanding a video of more than one action. Third, the video-native FMs can perform well on video tasks under light adaptations to downstream tasks (e.g., freezing the FM backbones), while image-native FMs win in full end-to-end finetuning. The first two observations reveal the need and tremendous opportunities to conduct research on video-focused FMs, and the last confirms that both tasks and adaptation methods matter when it comes to the evaluation of FMs.

## 1 Introduction

Foundation models (FMs) are a term coined by Bommasani et al. [7], referring to "any model that is trained on broad data that can be adapted (e.g., finetuned) to a wide range of downstream tasks." Some representative FMs include but are not limited to BERT [13], GPT-3 [8], CLIP [41], and ALIGN [26]. This work primarily investigates the video understanding capabilities of six visual and multimodal FMs: CoCa [59], CLIP [41], FLAVA [47], VideoMAE [48], VATT [1], and InternVideo [55]. These models are selected because they are amenable to the video understanding tasks of our interest and make their checkpoints accessible to us. It is nontrivial to evaluate FMs. In contrast to "specialist" models developed for a particular task, FMs are considered "generalists" that learn shareable meta-knowledge across tasks so that one can quickly adapt them to achieve superior performance on various downstream tasks. Hence, _both the tasks and adaptation methods matter when it comes to evaluation_. However, the community has not reached a consensus on these two aspects. FM developers select their own different sets of downstream tasks -- interestingly, often covering no video or only appearance-rich video classification tasks [9; 30]. Moreover, they rely on distinct adaptation methods, making apples-to-apples comparisons challenging and causing mismatches with the FMs' actual use cases. To this end, we propose to evaluate FMs' video understanding capabilities using a carefully designed experiment protocol, named VideoGLUE, consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the research community, and four model adaptation methods tailoring a foundation model for downstream tasks. The tasks examine an FM from various aspects needed for understanding video. The "all-around" adaptations represent the main use cases of FMs in the literature and, more importantly, allow us to thoroughly probe an FM's potential in video understanding.
Why do we specifically focus on videos? The main motivation is to promote video understanding in the evaluation of FMs. More concretely, we test the following conjectures through this work. First, FMs' high performance on existing evaluation suites does not necessarily indicate their potential in video since these suites either lack video-specific tasks or selectively choose video tasks whose appearance feature is more important than motion -- InternVideo [55] is an exception as discussed in the next paragraph. Second, existing FMs cannot heed motion in video, given that they learn primarily from static images (and corresponding text descriptions) or short video clips containing limited motion. Third, popular adaptation methods (e.g., finetuning all weights) cannot supplement FMs with all the cues needed to recognize motion-rich actions and localize entities temporally and/or spatiotemporally.

While our work is not the first to emphasize the evaluation of FMs, it is unique on multiple fronts. Unlike Elevater [32]'s target of evaluating language-augmented FMs, we consider all FMs adaptable to video understanding. Unlike Perception Test [4]'s coverage of a broad spectrum of perception tasks, we focus on video, allowing us to cover various aspects of this vertical domain. Interestingly, many of our datasets also appear in InternVideo [55], a video-oriented FM. However, we promote model adaptation methods as an inherent part of the evaluation protocol -- a consistent set of diverse adaptation methods is necessary to provide FMs ample opportunities to expose their video understanding capabilities. Moreover, unlike InternVideo's focus on their single FM, we evaluate FMs developed by different research groups in a uniform experiment protocol -- the first of its kind for visual and multimodal FMs, to the best of our knowledge.

Figure 1: FMs vs. state-of-the-art task-specialized models on video understanding. Unlike natural language and image understanding, video tasks are where FMs generally fall behind "specialists".

Our main findings are as follows. First, task-specialized models still significantly outperform the six FMs studied in this work (see Figure 1), in sharp contrast to what FMs have achieved in natural language and image understanding. Hence, there is a need and tremendous opportunities to research video-focused FMs. Second, video-native FMs, whose pretraining data contains the video modality, are generally better than image-native FMs in classifying motion-rich videos, localizing actions in time, and understanding a video of more than one action. Third, the video-native FMs can perform well on video tasks under light adaptations to downstream tasks (e.g., freezing the FM backbones), while image-native FMs win in full end-to-end finetuning. This observation confirms that both tasks and adaptation methods matter when it comes to the evaluation of FMs.

## 2 Related work

**FMs.** One common type of FMs is Large Language Models (LLMs) trained to acquire generic, transferable, and diverse representations that can enable sample-efficient learning and knowledge transfer across a broad range of downstream tasks. FMs are often trained with simple self-supervised learning objectives such as predicting the next token in a sentence (e.g., GPT-3 [8], PaLM [12]), or denoising the masked tokens (e.g., BERT [13], UNILM [14], and BEiT [5]).
An intriguing characteristic of FMs is their ability to gradually acquire new capabilities as the model grows and the training data size increases, despite being trained on simple learning objectives [56]. For example, PaLM [12; 3], a massive LM with 540 billion parameters, has started to show new capabilities in tasks such as explaining jokes, solving math, and performing common-sense reasoning when scaled to over 100B parameters. In addition to self-supervised transformers, FMs in computer vision also encompass transformers specifically trained to align image-text paired data. These FMs use learning objectives that include contrastive learning (e.g., CLIP [41]), denoising masked tokens (e.g., BEiT-3 [53]), and predicting the next token in a single modality (e.g., DALL-E [43]) or in the interleaved image-text sequence (e.g., Flamingo, KOSMOS-1 [24]). Recent FMs are also trained on a mixture of these objectives (e.g., CoCa [59], FLAVA [47], MAE [22]). For example, MAE combines the autoencoder reconstruction objective jointly with the denoising objective [22], which was extended to video [18; 48]. In our study, we choose six representative FMs (i.e., CoCa [59], CLIP [41], FLAVA [47], VideoMAE [48], VATT [1], and InternVideo [55]) due to their amenability to video understanding and the accessibility of their checkpoints.

**Evaluation of FMs.** As the mission of FMs is to enable sample-efficient knowledge transfer, the design of downstream tasks is critical for evaluating the capabilities and limitations of these models. The evaluation of FMs was pioneered by NLP researchers. For example, GLUE [50] and SuperGLUE [49] introduced a suite of tools for evaluating language understanding tasks. The authors utilized established public benchmarks and provided tools for evaluating, probing, and benchmarking pretrained FMs, allowing for a comparison to human baselines. ELEVATER [32] introduced this concept to vision FMs along with a toolkit for evaluating vision-language tasks, including knowledge augmentation, hyperparameter tuning, and three adaptation techniques. In parallel, there have been attempts to establish a diagnostic benchmark for perceptual understanding of the world. For instance, Perception Test [4] crowd-sourced 11K videos in which about 100 users performed scripted activities. This benchmark [4] comprises videos filmed by only about 100 participants, which may not provide the same level of domain coverage and diversity as the other FM evaluation works mentioned earlier.

**Evaluation of video FMs.** While some vision-language FMs have incorporated video tasks, their evaluation typically follows that of static images and neglects the unique aspects of video spatial-temporal modeling and reasoning. To our knowledge, no previous work has been solely dedicated to evaluating video FMs. The closest works to ours are InternVideo [55] and VideoMAE [48], which introduce new FMs and show their superiority over several dozen video datasets. There are two key differences from the prior works. First, our evaluation is video-centric, using tasks that require motion understanding or long-term temporal reasoning. Second, instead of promoting new video FMs, our work proposes no new models and is solely dedicated to evaluating current and future video FMs in an impartial, reproducible experimental setup. Concretely, our goal is to provide tools for probing and benchmarking FMs on motion tasks in various settings, including using the parameter-efficient adapter.
## 3 FMs for video understanding

In this paper, we are interested in examining which FMs are good at solving video tasks, what makes them better than others in the video domain, and how to best adapt them to video understanding. Table 1 shows the six FMs we gained access to via public repositories or personal communications.

## 4 Tasks and adaptation methods both matter to the evaluation of FMs

This section describes our video general understanding evaluation (VideoGLUE) benchmark: video-focused downstream tasks and methods to adapt an FM to the tasks. The former concretizes the video understanding capabilities we want to evaluate from an FM, while the latter provides various paths for an FM to showcase the corresponding capabilities.

### Video understanding tasks

Like objects' role in image understanding, actions are the core of video understanding, leading us to select tasks and datasets that _recognize_ and _localize_ actions in time and space. Table 2 provides a quick summary. Next, we explain the rationale behind the particular choices of datasets and postpone the datasets' details to the supplementary materials.

#### 4.1.1 Recognizing actions

**General actions.** We first include the action recognition datasets of Kinetics400 (K400) [28], Moments-in-Time (MiT) [38], and Charades [46], considering their popularity and that they are complementary to each other. Regarding data sources, K400 videos are from Youtube, MiT draws videos from different Web venues, while Charades contains scripted videos. Regarding action labels, the datasets differ in granularity and real-life scenarios: a verb defines an action in MiT, K400 groups actions by verb-subject pairs, and Charades actions are about indoor activities. Regarding the average length, K400 and MiT videos are between 3 and 10 seconds, each with one action label, while Charades videos are about 30 seconds, each with multiple actions.

**Fine-grained motion-focused actions.** We also include Something-something-v2 (SSv2) [20] and Diving48 (D48) [34] as another two action recognition datasets, whose actions are fine-grained and motion-focused.

| Foundation Model | Modality | Pretraining Data | Pretraining Objective |
| --- | --- | --- | --- |
| CoCa [59] | Image + Text | JFT3B [60] + ALIGN [26] | Contrastive + Captioning |
| CLIP [41] | Image + Text | WebImageText [41] | Contrastive |
| FLAVA [47] | Image + Text | PMD [47] | Contrastive + MIM + MLM |
| VideoMAE [18] | Video | K400 [28] | MVM |
| InternVideo [55] | Video | UnlabeledHybrid [55] | MVM + Contrastive |
| VATT [1] | Video + Audio + Text | HT100M [37] | Contrastive |

Table 1: Foundation models (FMs) studied in this work (MxM stands for Masked {Image, Language, or Video} Modeling).

| Task | Dataset | Num. videos | Avg. length | Data source | Note |
| --- | --- | --- | --- | --- | --- |
| STAL | AVA v2.2 [21] | 210,634 / 57,371 | 15 mins | Movie | spatiotemporal, instance |
| STAL | AVA-Kinetics [31] | 354,201 / 91,919 | 10 seconds | Web | spatiotemporal, instance |
| TAL | ActivityNet v1.3 [16] | 10,002 / 4,926 | 5-10 mins | Web | temporal |
| VC | Kinetics400 [28] | 235,693 / 19,165 | 10 seconds | Web | holistic, appearance |
| VC | Moments-in-Time [38] | 791,246 / 33,898 | 3 seconds | Web | holistic, appearance |
| VC | Sth-sth v2 [20] | 168,913 / 24,777 | 2-6 seconds | Crowd-source | holistic, motion |
| VC | Diving48 [34] | 15,027 / 1,970 | 5 seconds | Web | holistic, motion |
| VC | Charades [46] | 110,905 / 4,985 | 30 seconds | Crowd-source | multi-label, long-clip |

Table 2: Summary of statistics, video properties, and data sources of each dataset. Tasks involved are spatiotemporal action localization (STAL), temporal action localization (TAL), and video classification (VC). Column "Num. videos" contains video examples in train/evaluation splits, respectively.

SSv2 contains 174 human hand gestures as action labels, such as putting something into something, turning something upside down, and covering something with something. D48 is all about competitive diving. Notably, the foreground objects' motion is a more significant discriminative cue than their appearance.

#### 4.1.2 Localizing actions

The videos in action recognition are trimmed, but actions could occur anywhere in a video in the wild. Hence, temporal and spatiotemporal action localization is also crucial to video understanding. Accordingly, we choose three datasets for the experiments: the action localization track of ActivityNet v1.3 (ANet) [16], Atomic Visual Actions (AVA) [21], and AVA-Kinetics (AVA-K) [31]. The last two require a model to localize (and recognize) actions in both time and space, and their underlying videos are movies and general Youtube videos, respectively.

### Adaptation methods

In this section, we detail the task-specific neural architecture design and adaptation methods when applying FMs to downstream tasks.

#### 4.2.1 Modifying FM architectures for downstream tasks

Given a \(\textsc{fm}(\cdot)\), we can apply \(\textsc{fm}(\cdot)\) to a video clip \(C\) to extract a set of \(k\) feature maps \(\{F\}^{k}=\textsc{fm}(C),F\in\mathbb{R}^{n\times h\times w\times c}\), where \(k\) is the number of endpoint layers from an FM, and \(n,h,w,c\) are respectively a feature map's length, height, width, and number of channels. For video classification tasks, we cast a feature map \(F\) as \(n\times h\times w\) tokens and aggregate them into a global representation using a learnable query token \(\tau\) and lightweight cross-attention layers [15]. For spatiotemporal action localization, following the standard practice [19; 48], we first detect humans on key-frames using a human detector [44], producing a set of human bounding boxes \(B\).
We then apply the RoI pooling operation [25] that takes both the feature map \(F\) and box coordinates \(B\) as inputs and outputs one feature vector per box as the query token, \(\tau=\textsc{RoIPool}(F,B)\), followed by the same cross-attention layers as in video classification. For both groups of tasks, we stack a linear classifier on top of the task token's last-layer encoding for final classification: \[p=\textsc{LinearClassifier}(\textsc{CrossAttention}(\tau,F)). \tag{1}\] For temporal action localization, we first perform feature extraction in a sliding window manner, resulting in a sequence of globally average pooled features \(\{\textsc{AvgPool}(F_{1}),\cdots,\textsc{AvgPool}(F_{t})\}\) for each video. Following a popular choice of prior works [2; 27; 36], we employ G-TAD [57] as our task head for predicting the action category and its start and end timestamps. Figure 2: We study four adaptation methods to apply a foundation model (FM) to video understanding downstream tasks: (a) end-to-end finetuning, (b) frozen backbone evaluation, (c) frozen features with multi-layer attention pooler (MLAP), and (d) a low-rank adapter. #### 4.2.2 Adapting the modified FMs' weights for downstream tasks Adapting the modified FMs to a downstream task is to tune their weights. Then, we immediately have two basic adaptation strategies: 1) full finetuning to update all weights in the original FM plus the task head and 2) freezing FM weights and only updating newly added weights. The choice of the adaptation methods depends on specific application scenarios such as computation and memory constraints. We argue that an ideal FM should perform well across various adaptation methods to support the breadth of use cases. **End-to-end finetuning.** End-to-end finetuning is the most common FM evaluation method for videos [1; 18; 48; 55], but it requires the deployment of a separate and possibly expensive FM for each downstream task. When finetuning all weights in the modified FMs, we limit cross-attention to a single transformer layer with 12 heads and hidden size 768. We vary learning rates and weight decays for each experiment to ensure every FM is configured to its best setup. Figure 2(a) illustrates this end-to-end finetuning. **Frozen FM.** Linear probing and cross-attention based pooling over frozen FM features are routinely used to test the strength of the FM representation [48; 59; 47; 22; 35]. In practice, adapting task-specific heads with a frozen FM allows us to deploy the same FM for multiple tasks. If we use light-weight heads over the FM features, then a single FM inference can serve multiple tasks efficiently in terms of both compute and memory. To this end, we examine two variations with a frozen FM, one with a single cross-attention layer and the other with multiple layers. The first results in exactly the same model architectures as in end-to-end finetuning (Figure 2(b)), and the second allows us to leverage an FM's hierarchical features beyond its last endpoint layer (Figure 2(c)). First, the frozen features are extracted from the last \(k\) layers, \(F_{N-k+1}\), \(F_{N-k+2}\),..., \(F_{N}\). Then, attention pooling is applied between a learnable token \(\tau\) and the features \(F_{N-k+1}\) using multi-head cross-attention (MHCA). The output of this layer serves as the query token for the next round of attention pooling with the features \(F_{N-k+2}\). 
This process is repeated for \(k\) rounds: \[\begin{split}\tau_{N-k+1}&=\text{MLP}(\text{MHCA}(\tau,F_{N-k+1}))\\ \tau_{N-k+2}&=\text{MLP}(\text{MHCA}(\tau_{N-k+1},F_{N-k+2}))\\ &\;\;\vdots\\ \tau_{N}&=\text{MLP}(\text{MHCA}(\tau_{N-1},F_{N}))\end{split}\tag{2}\] where \(k=4\) in our experiments, and the final classifier is \(p=\text{LinearClassifier}(\tau_{N})\).

**Frozen FM with a low-rank adapter.** Finally, we explore a frozen FM beyond the last \(k\) layers using a low-rank adapter [23], which is a bottleneck architecture that projects a feature tensor into a low-dimensional space and then up-samples to the original space. The bottleneck space's dimension is 64 in our experiments. By inserting a few adapter layers with trainable weights \(\{w\}\) into the pretrained FM while keeping all of the FM's weights frozen, the feature adapter is more parameter-efficient than end-to-end finetuning of the whole network while achieving better performance than simply adding a task head to the frozen FM. Essentially, the adapter leads to a new \(\widetilde{\text{FM}}\) with some trainable weights \(\{w\}\): \(\widetilde{F}=\widetilde{\text{FM}}(C,\{w\})\), such that the output feature maps remain the same in shape as the original FM's output (Figure 2(d)). Hence, the different pooling schemes and task heads described above can be applied to the extracted feature map \(\tilde{F}\). For simplicity, we still choose the single-layer cross-attention as the default task head due to its computation efficiency and performance. The low-rank adaptation allows a single FM for multiple tasks, in contrast to the per-task models in end-to-end finetuning. However, it incurs a per-task forward pass at inference time, being less efficient than the task-specific heads over frozen features.

## 5 Experiments

### End-to-end finetuning

Table 3 shows the end-to-end finetuning results of six FMs on eight datasets. We split the FMs into two groups based on their input modalities at the time of pretraining: CoCa, CLIP, and FLAVA are image-native FMs, and VideoMAE, VATT, and InternVideo are video-native. The datasets span spatiotemporal action localization (STAL), video classification (VC), and temporal action localization (TAL). Note that we freeze FM weights in TAL because otherwise its full finetuning consumes excessive memory and computation. We draw the following observations from Table 3.

_All six FMs underperform task-specialized models on the video tasks._ Table 3's last row collects the state-of-the-art results on the eight datasets, each obtained by a task-specialized model with comparable architecture or size to ours in the prior work. All six FMs significantly underperform the task-specialized models on the video tasks, indicating the lack of strong video-focused FMs. This observation is in sharp contrast to what FMs have achieved on natural language [39; 3] and image understanding [11].

_Video-native FMs outperform image-native FMs on SSv2, Charades, and ANet_, which require a model to reason along the time dimension: SSv2 actions are motion-rich, Charades has multiple actions per video, and ANet is about temporal action localization. These results underscore the advantages of video-native FMs over image-native ones and, hopefully, prompt more effort dedicated to the research of video-native FMs.
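To make the adaptation methods of Section 4.2 concrete, the following is a minimal PyTorch-style sketch of the single-layer attention pooler of Eq. (1), the multi-layer attention pooler of Eq. (2), and the bottleneck adapter of Figure 2(d). This is our own illustration rather than the released implementation; the class names, the MLP shape, the zero-initialized query, and the residual wiring of the adapter are assumptions, while the 12 attention heads, hidden size 768, \(k=4\), and bottleneck dimension 64 follow the text.

```python
import torch
import torch.nn as nn

class AttentionPoolerHead(nn.Module):
    """Single-layer pooler of Eq. (1): a learnable query token cross-attends to the
    flattened n*h*w feature tokens of an FM, then a linear classifier maps the
    pooled token to class logits."""
    def __init__(self, dim: int = 768, num_classes: int = 400, num_heads: int = 12):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, n*h*w, dim) flattened spatiotemporal tokens from the FM.
        q = self.query.expand(feats.size(0), -1, -1)
        pooled, _ = self.attn(q, feats, feats)          # (B, 1, dim)
        return self.classifier(pooled.squeeze(1))       # (B, num_classes)

class MultiLayerAttentionPooler(nn.Module):
    """Multi-layer attention pooling of Eq. (2): the query token is refined against
    the frozen features of the last k endpoint layers, one MHCA + MLP round each."""
    def __init__(self, dim: int = 768, num_classes: int = 400,
                 k: int = 4, num_heads: int = 12):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))
        self.attn = nn.ModuleList([nn.MultiheadAttention(dim, num_heads, batch_first=True)
                                   for _ in range(k)])
        self.mlp = nn.ModuleList([nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                                nn.Linear(dim, dim)) for _ in range(k)])
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feats_per_layer: list) -> torch.Tensor:
        # feats_per_layer: k tensors of shape (B, n*h*w, dim), ordered F_{N-k+1} ... F_N.
        tau = self.query.expand(feats_per_layer[0].size(0), -1, -1)
        for attn, mlp, feats in zip(self.attn, self.mlp, feats_per_layer):
            tau = mlp(attn(tau, feats, feats)[0])
        return self.classifier(tau.squeeze(1))

class LowRankAdapter(nn.Module):
    """Bottleneck adapter: down-project, ReLU, up-project, and a scaled residual
    connection; only these weights are trained while the FM stays frozen."""
    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.scale * self.up(torch.relu(self.down(x)))
```

As a usage example under these assumptions, `AttentionPoolerHead()(torch.randn(2, 8 * 14 * 14, 768))` returns logits of shape `(2, 400)`.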
_CoCa performs the best among image-native FMs on the video tasks._ It actually gives rise to the highest accuracy on all datasets except SSv2, Charades, and ANet probably because CoCa, pretrained using image-text pairs, does not capture sufficient motion signals required for understanding SSv2, and it cannot handle Charades and ANet's complex, multiple actions per video. ### Frozen FMs End-to-end finetuning is infeasible for some application scenarios due to FMs' rapidly growth in size and the consequent demands in computational resources. In the following, we evaluate frozen FMs with various adaptation methods. Tables 4, 5, and 6 are the results of adaptation with a single cross-attention layer, multiple cross-attention layers, and a low-rank adapter, respectively. _CLIP generally performs the best among image-native frozen FMs (Tables 4 and 5), but CoCa catches up thanks to the low-rank adapter (Table 6)._ It is worth noting that this ranking of image-native frozen FMs differs from the ranking of image-native FMs in end-to-end finetuning. It seems that CLIP's endpoint features are more amendable to the video tasks than CoCa, but CoCa as a whole adapts better to video under both finetuning and the adapter. Hence, it is crucial to consider adaptation \begin{table} \begin{tabular}{c c c c c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{STAL} & TAL & \multicolumn{2}{c}{VC (A)} & \multicolumn{2}{c|}{VC (M)} & \multicolumn{2}{c|}{VC (ML)} & \multirow{2}{*}{} \\ \cline{2-2} \cline{5-9} & AVA & AVA-K & ANet & K400 & MiT & D48 & SSv2 & Charades & AVG \\ \hline CoCa [59] & **23.3** & \(24.7\) & 33.0 & \(73.1\) & \(32.0\) & \(34.1\) & \(41.5\) & \(8.8\) & \(30.8\) \\ CLIP [41] & \(21.1\) & **25.9** & 32.7 & **75.2** & **32.6** & \(44.1\) & \(41.0\) & \(11.2\) & \(32.8\) \\ FLAVA [47] & \(18.8\) & \(21.5\) & 32.2 & \(71.3\) & \(29.7\) & \(45.9\) & \(40.6\) & \(12.6\) & \(31.6\) \\ \hline VideoMAE [18] & \(16.0\) & \(19.9\) & 33.0 & \(65.1\) & \(23.0\) & **59.5** & \(53.9\) & \(11.3\) & \(32.5\) \\ InternVideo [55] & \(13.4\) & \(15.7\) & 33.3 & \(69.3\) & \(26.3\) & \(55.6\) & **58.2** & \(13.0\) & \(33.1\) \\ VATT [1] & \(20.3\) & \(22.2\) & **35.3** & \(75.1\) & \(32.1\) & \(49.7\) & \(57.8\) & **33.3** & \(40.5\) \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluating FMs when adapted to video understanding using frozen features. Only weights in the task heads are updated using the downstream tasks’ training sets. 
\begin{table} \begin{tabular}{c c c c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{STAL} & TAL & \multicolumn{2}{c}{VC (A)} & \multicolumn{2}{c|}{VC (M)} & \multicolumn{2}{c|}{VC (ML)} & \multirow{2}{*}{} \\ \cline{2-2} \cline{5-9} & AVA & AVA-K & ANet & K400 & MiT & D48 & SSv2 & Charades & AVG \\ \hline CoCa [59] & **27.7** & **31.0** & \(-\) & **82.6** & **43.6** & **79.6** & \(66.8\) & \(55.0\) & \(55.2\) \\ CLIP [41] & \(27.1\) & \(28.9\) & \(-\) & \(81.0\) & \(39.0\) & \(75.7\) & \(46.6\) & \(54.3\) & \(52.8\) \\ FLAVA [47] & \(22.0\) & \(25.6\) & \(-\) & \(79.1\) & \(38.3\) & \(72.0\) & \(61.1\) & \(48.6\) & \(49.4\) \\ \hline VideoMAE [18] & \(23.5\) & \(26.2\) & \(-\) & \(78.7\) & \(36.1\) & \(75.5\) & \(65.5\) & \(51.4\) & \(51.0\) \\ InternVideo [55] & \(27.2\) & \(29.8\) & \(-\) & \(80.1\) & \(35.9\) & \(75.8\) & **67.0** & \(52.2\) & \(52.5\) \\ VATT [1] & \(27.0\) & \(28.4\) & \(-\) & \(77.1\) & \(34.8\) & \(77.6\) & \(65.1\) & **55.7** & \(52.7\) \\ \hline \hline \multirow{2}{*}{Task-specialized} & \(42.3\) & \(38.9\) & \(37.5\) & \(88.6\) & \(42.7\) & \(88.9\) & \(68.7\) & \(63.2\) & \multirow{2}{*}{\(-\)} \\ & [42] & [42] & [54] & [40] & [33] & [58] & [17] & [29] & \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluating FMs when adapted to video understanding tasks using end-to-end finetuning. We report the Top-1 accuracy on K400, MiT, D48 and SSv2, MAP on Charades and ANet, and [email protected] on AVA and AVA-K. methods as an organic part of the evaluation of FMs to supply them various paths to demonstrate their capabilities. _Video-native FMs are better than image-native FMs in understanding motion-rich SSv2 and D48, Charades that contain multiple actions per video, and ANet for temporal action localization._ This observation is about the same as the one under end-to-end finetuning. The image-native FMs is mainly superior on appearance-rich video datasets, where high-quality spatial perceptual features are the key. We conjecture that the vast image data empowering image-native FMs is more diverse in appearance than videos used to pretrain video-native FMs. _Given frozen FMs, the low-rank adapter outperforms cross-attention layers, and multiple layers of cross-attention is better than a single cross-attention layer._ Many works [10; 22] have shown features from different layers of a vision transformer have different attention maps. Hence, it is potentially beneficial to have an adaptation method to leverage multiple layers of a frozen FM. Table 5 reports the results with four cross-attention layers, whose average score per model (across different columns) is higher than that with a single cross-attention layer (Table 4) by \(18\%\) to \(40\%\). The low-rank adapter (Table 6) further improves upon the cross-attention results partially because it explores all layers of a frozen FM. _On average, image-native FMs outperform video-native FMs under end-to-end finetuning and the adapter, but it becomes the inverse in the other two adaptation methods._ The adapter experiment paired with end-to-end finetuning experiment reveal the fact that existing image-based FMs could be more easily adapted to video tasks when we could adjust the feature space of FMs, possibly caused by the large-scale higher quality image(-text) pretraining datasets. On the other hand, frozen feature experiments discussed above present us the inverse picture where video-based FM performs better. 
The seemingly paradox encourages more future research on bridging the gap on video-based pretraining with high-quality data and more effective modeling. ### VideoGLUE score: Ranking FMs by their efficacies and efficiencies on video tasks In this section, we consolidate our studies of the FMs with different adaptation methods on a broad range of video tasks by considering their adaptation efficacies and efficiencies. Adaptation methods with different numbers of trainable weights lead to incompatible comparisons. Motivated \begin{table} \begin{tabular}{c c c c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{STAL} & \multicolumn{2}{c}{VC (A)} & \multicolumn{2}{c|}{VC (M)} & \multicolumn{2}{c|}{VC (ML)} & \multirow{2}{*}{} \\ \cline{2-2} \cline{5-8} & AVA & AVA-K & & K400 & MiT & D48 & SSv2 & Charades & AVG \\ \hline CoCa [59] & **26.6** & **28.7** & **80.9** & **41.4** & \(67.1\) & \(56.1\) & \(45.8\) & \(49.0\) \\ CLIP [41] & \(24.5\) & \(28.0\) & \(80.2\) & \(39.7\) & **77.2** & \(56.0\) & \(44.2\) & \(49.3\) \\ FLAVA [47] & \(17.9\) & \(23.8\) & \(74.7\) & \(34.1\) & \(68.4\) & \(52.1\) & \(40.8\) & \(44.1\) \\ \hline VideoMAE [18] & \(16.6\) & \(23.3\) & \(73.6\) & \(30.6\) & \(76.0\) & \(61.4\) & \(43.0\) & \(45.9\) \\ InterVideo [55] & \(19.2\) & \(25.5\) & \(75.5\) & \(31.3\) & \(73.6\) & **63.9** & \(46.2\) & \(47.7\) \\ VATT [1] & \(22.3\) & \(25.8\) & \(75.0\) & \(36.5\) & \(68.9\) & \(63.5\) & **53.5** & \(49.9\) \\ \hline \hline \end{tabular} \end{table} Table 6: The low-rank adapter results of FMs for video understanding. We only update the weights of the adapter and task head while keeping the original FMs’ weights frozen. \begin{table} \begin{tabular}{c c c c c c c c|c} \hline \hline & \multicolumn{2}{c}{STAL} & \multicolumn{2}{c}{TAL} & \multicolumn{2}{c}{VC (A)} & \multicolumn{2}{c}{VC (M)} & \multicolumn{2}{c|}{VC (ML)} & \multirow{2}{*}{} \\ \cline{2-2} \cline{5-8} & AVA & AVA-K & ANet & K400 & MiT & D48 & SSv2 & Charades & AVG \\ \hline CoCa [59] & \(24.4\) & \(27.0\) & \(33.3\) & \(74.2\) & \(37.2\) & \(48.4\) & \(45.9\) & \(19.6\) & \(45.9\) \\ CLIP [41] & **27.7** & **29.6** & \(33.9\) & **77.1** & **39.0** & \(55.8\) & \(50.1\) & \(41.5\) & \(46.2\) \\ FLAVA [47] & \(21.3\) & \(23.2\) & \(32.4\) & \(71.5\) & \(34.5\) & \(58.5\) & \(43.1\) & \(38.2\) & \(41.7\) \\ \hline VideoMAE [18] & \(19.6\) & \(22.1\) & \(33.4\) & \(71.7\) & \(32.2\) & \(69.6\) & \(57.4\) & \(35.9\) & \(43.2\) \\ InternVideo [55] & \(15.9\) & \(17.7\) & \(33.6\) & \(73.7\) & \(34.7\) & **71.9** & **60.3** & \(40.5\) & \(44.8\) \\ VATT [1] & \(22.9\) & \(24.1\) & \(35.0\) & \(75.1\) & \(35.6\) & \(60.1\) & \(58.7\) & **58.2** & \(46.9\) \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluating FMs when adapted to video understanding using multi-layer attention pooler (MLAP), which takes multiple frozen features from an FM as inputs and map them hierarchically for the final task prediction. Only the multi-layer attention pooling layers are updated using the downstream tasks’ training sets. by this, we propose a scalar measure, called VideoGLUE score (_VGS_), to capture an FM's overall adaptation performance on our video understanding tasks. While the VideoGLUE score may not be a perfect metric, it condenses multiple aspects of comparison into a scalar value, enabling a simplified comparison of FMs. Taking the adaptation efficiency into account, we propose to use the trainable FLOPs to normalize an adapted FM's average score \(s\) over all tasks. 
The trainable FLOPs are better than tunable weights because they allow our _VGS_ to reflect both the model architecture's freedom and the input data's impact (e.g., sequence length) on downstream tasks. Formally, denoting by \(\mathcal{S}_{i}\) an FM's average score over our video tasks under the \(i\)-th adaptation method and by \(F_{i}\) the corresponding trainable FLOPs (in billions), we calculate the FM's _VGS_ by \[\textit{VGS}=\sum_{i=1}^{N}w_{i}\mathcal{S}_{i},\quad\text{where}\quad w_{i}=\frac{\mathcal{A}_{i}}{\sum_{j=1}^{N}\mathcal{A}_{j}}\ \text{and}\ \mathcal{A}_{i}=\frac{1}{\log_{10}F_{i}}, \tag{3}\] where \(N=4\) is the number of adaptation methods, and \(w_{i}\in[0,1]\) weighs the score \(\mathcal{S}_{i}\) according to the trainable FLOPs \(F_{i}\).

In Figure 3, we plot the averaged score achieved by each FM under each adaptation method and compare their overall video understanding capabilities using the proposed _VGS_. The changes in FMs' ranking by different adaptation methods (see the left panel in Figure 3) reinforce that the adaptation methods matter and should be considered an organic part of the evaluation of FMs. On the right panel of Figure 3, we notice that the video-native FMs overall outperform image-native FMs on our video understanding tasks, achieving an averaged _VGS_ of \(41.58\) vs. \(39.35\), respectively. This is intuitive as video-native FMs probably have a smaller domain gap to our tasks and are more capable of temporal and motion reasoning, which are important cues for video understanding. Zooming in on the individual FMs, we find that VATT, a video-native FM, is in first place with _VGS_ \(44.74\), followed by the image-native CLIP with _VGS_ \(41.1\). This suggests that in-domain pretraining yields overall the best adaptation capability to video tasks, and image-native FMs could also achieve competitive results on many but not all video understanding tasks.

## 6 Conclusion and Discussion

In this report, we study three image-based and three video-based foundation models and their adaptation capability on general video understanding tasks. Experiments are conducted on three hallmark video tasks and eight diverse datasets with four distinct adaptation methods. Our study shows that existing image-based FMs perform well on some appearance-rich video datasets, while video-based FMs tend to do better on motion and temporal reasoning. The four studied adaptation methods carve different landscapes, revealing the critical role of considering adaptation methods as an organic part of evaluating FMs. Finally, we propose a single metric, _VGS_, to represent the video task adaptation efficiency of FMs. We hope our research provides useful resources for evaluating and analyzing video foundation models, and addresses the current gap in foundation model evaluation within the video domain.

Figure 3: FMs are equipped with different adaptation methods. Left: For each adaptation method, we plot FMs' averaged scores across all video tasks vs. trainable FLOPs in a log scale. Right: We plot the overall VideoGLUE score (_VGS_) per FM.

**Supplementary Materials**

We detail the datasets (Section A), models (Section B), and training setups (Section C) in the supplementary materials to improve this work's reproducibility. Besides, Section D includes more experimental studies to strengthen the main text.

## Appendix A Video understanding datasets

### Appearance-focused action recognition

Video classification is a task of classifying videos into pre-defined labels, with the major focus on human actions.
Kinetics400 [28] (K400) is a large-scale, high-quality video dataset widely used as a standard video classification benchmark. It contains more than \(250\)k video clips with annotations of \(400\) human daily actions. The actions are human-focused and cover a broad range of classes, including human-human interactions and human-object interactions. Although the video clips span \(10\) seconds on average, many studies [45, 52] have pointed out that the task can often be solved on the Kinetics datasets by inferring from the static objects that appear or from the background environment -- motion information is less important than the visual appearance. Hence, we categorize Kinetics400 as an appearance-focused action classification dataset.

Moments-in-Time [38] (MiT) is a large-scale video event classification dataset, with one million human-annotated short video clips (around \(3\) seconds each). The temporal span corresponds to the averaged duration of human working memory and is a temporal envelope holding meaningful actions between people, objects, and phenomena. Videos in MiT are annotated with the 339 most commonly used verbs in the English vocabulary.

### Motion-focused action recognition

Videos contain much more commonsense knowledge than still images do, such as an object's motion patterns and the causal consequences of an action, just to name a few. However, appearance-based benchmarks do not evaluate a model's understanding of such commonsense knowledge, complex scenes, and situations. In light of this, some video datasets have been proposed and studied in recent years with a focus on the motion and common-sense reasoning that are abundant in video data.

Something-something v2 [20] (SSv2) is a collection of around \(200\)k videos of humans performing pre-defined, basic actions with everyday objects. There are \(174\) unique labels in total depicting atomic hand manipulations, like putting something into something, turning something upside down, or covering something with something. This dataset benchmarks a model's fine-grained understanding capability of object motions and scene changes by making the label space atomic-action-focused and background-invariant.

Diving48 [34] (D48) is introduced to evaluate a model's dynamic reasoning capability. The video clips in this dataset are obtained by segmenting online videos of major diving competitions. In total, there are around \(18\)k videos annotated with \(48\) classes. Because of its standardization, the diving scenario is purposefully chosen to avoid scene, object, and person biases.

### Multi-label daily action classification

Most current action classification datasets involve video clips with a clean snapshot of a single action. In contrast, humans perform complex daily activities step-by-step, simultaneously, or in an interleaving manner. Towards more comprehensive human daily activity reasoning, Charades [46] was introduced. Different from web-collected datasets whose contents are more structured, Charades is collected by crowd-sourcing from hundreds of actors recording their videos in their own homes, acting out casual everyday activities. Charades brings more diversity into the video classification task due to its close-to-daily-life setting. Its videos are \(30\) seconds long on average and have multi-label annotations testing models' understanding of complex daily activities with multiple steps. Charades provides \(110\)k videos with \(157\) action classes for training and evaluation.
### Temporal action localization

Natural long videos contain scene changes and semantic shifts, while most of the existing video benchmarks formulate problems to focus on trimmed video clips. Such a gap introduces evaluation bias as clip-level benchmarks cannot reflect a model's temporal feature discriminativeness, which is of key importance to solving long-form video understanding tasks. To make the study of foundation models' video capabilities more comprehensive, we include the temporal action localization (TAL) task in our evaluation. The task of TAL is to predict not only the action labels but also each action instance's temporal boundary in untrimmed videos. We adopt ActivityNet v1.3 [16] as the dataset for the TAL task, which contains \(10,002\) untrimmed videos in training and \(4,985\) in validation. The video length in this dataset is between \(5\)-\(10\) minutes. In total, there are \(200\) types of activities annotated.

### Spatiotemporal action localization

Spatiotemporal Action Localization (STAL) is a person-centric task that asks a system to localize actors and predict their atomic actions [6; 21] in a transitory duration. In AVA [21], \(15\)-minute-long movie clips are densely annotated at \(1\) Hz. In the key frames, every person is localized using a bounding box and labeled with the actions being performed by the actor. The label vocabulary consists of \(80\) different atomic visual actions. There are \(430\) different movies in total. AVA-Kinetics [31] follows the same labeling protocol as AVA, while its data source comes from the Kinetics700 [28] video pool. The dataset contains over \(230\)k clips annotated with the \(80\) AVA action classes for each of the humans in key frames.

## Appendix B Model details

### Task head architectures

In Figure 4, we plot the task heads used in our video classification and spatiotemporal action localization experiments, namely, the simple pooler head and the multi-layer attention pooling head. For temporal localization, please refer to [57] for the task head's detailed architecture. Figure 5 illustrates the encoder adapter layer's architecture. In the adapter layer, only the down-sample layer, up-sample layer, and the scaling factor are tunable.

Figure 4: (a) Single-layer pooler head and (b) multi-layer attention pooling head for video classification and spatiotemporal action localization.

### Image-to-video adaptation

Adapting image backbones to video tasks requires us to fuse the image embeddings at some point in the network and also introduce additional temporal information. We consider two choices, early-fusion and late-fusion, and ablate them in the frozen feature setting in Table 7. In both early-fusion and late-fusion, we first apply the projection layer on each frame independently to embed pixel patches into embedding tokens. We then average-pool the embedding tokens from nearby frames to reduce the sequence length to \(n\times h\times w\). In the early-fusion setting, we pass all tokens _together_ to the image backbone to extract video features. In late-fusion, we pass each set of \(h\times w\) tokens _independently_ to the image backbone. Empirically, we find that the FLAVA [47] and CLIP [41] models do better with late-fusion while CoCa [59] does better with early-fusion. Furthermore, we ablate the importance of temporal information using the frozen features from FLAVA [47].
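The following is a minimal PyTorch-style sketch of these image-to-video adaptation choices (token fusion and the learnable temporal positional embedding). The module name, the patch-embedding and backbone interfaces, and the default shapes are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class ImageToVideo(nn.Module):
    """Per-frame patch embedding, average pooling of tokens from nearby frames down
    to n x h*w, a learnable temporal positional embedding, and then early or late
    fusion of the tokens in an image backbone."""
    def __init__(self, patch_embed: nn.Module, backbone: nn.Module,
                 n: int = 8, hw: int = 14 * 14, dim: int = 768, fusion: str = "late"):
        super().__init__()
        self.patch_embed, self.backbone = patch_embed, backbone
        self.n, self.hw, self.fusion = n, hw, fusion
        self.temporal_pos = nn.Parameter(torch.zeros(1, n, 1, dim))

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, C, H, W), with T a multiple of n.
        b, t = frames.shape[:2]
        tokens = self.patch_embed(frames.flatten(0, 1))            # (B*T, h*w, dim)
        tokens = tokens.view(b, self.n, t // self.n, self.hw, -1).mean(dim=2)
        tokens = tokens + self.temporal_pos                        # (B, n, h*w, dim)
        if self.fusion == "early":                                 # all n*h*w tokens at once
            return self.backbone(tokens.flatten(1, 2))
        per_frame = [self.backbone(tokens[:, i]) for i in range(self.n)]
        return torch.stack(per_frame, dim=1).flatten(1, 2)         # late fusion
```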
In Table 8, we find that adding a temporal positional embedding to the input is essential for D48 [34], SSv2 [20], and Charades [46], while not necessary for K400 [28] and MiT [38]. This supports our grouping that K400 and MiT are appearance-focused datasets. Based on these findings, we use late-fusion for FLAVA [47] and CLIP [41] and early-fusion for CoCa [59]. We add learnable temporal positional embeddings for all the image-native FMs.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{K400} & \multicolumn{2}{c}{SSv2} \\ \cline{2-5} & Early & Late & Early & Late \\ \hline CoCa [59] & \(72.7\) & \(61.4\) & \(41.5\) & \(33.3\) \\ CLIP [41] & \(70.5\) & \(75.2\) & \(38.1\) & \(41.0\) \\ FLAVA [47] & \(67.9\) & \(71.3\) & \(40.4\) & \(40.6\) \\ \hline \hline \end{tabular} \end{table} Table 7: Early vs. late fusion on image-native FMs. In this experiment, frozen features with a single-layer pooler head are used.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Temporal Positional & \multicolumn{2}{c}{VC (A)} & \multicolumn{2}{c}{VC (M)} & VC (ML) \\ \cline{2-6} Embedding & K400 & MiT & D48 & SSv2 & Charades \\ \hline ✗ & \(71.3\) & \(29.7\) & \(41.6\) & \(30.3\) & \(10.7\) \\ ✓ & \(71.3\) & \(29.7\) & \(45.9\) & \(40.6\) & \(12.6\) \\ \hline \hline \end{tabular} \end{table} Table 8: Ablation study on the temporal positional embedding for image-to-video adaptation. We choose FLAVA [47] with the frozen feature setting in this experiment.

Figure 5: The adapter used in the vision transformer. In the adapter layer, only the down-sample layer, up-sample layer, and the scaling factor are tunable. Between the down-sample layer and up-sample layer, an activation function is applied, which in our case is ReLU.

## Appendix C Task-specific hyperparameters

In the following, we provide the experiment settings and hyperparameters used in this study. In Table 9, we list the hyperparameters we applied in the video classification tasks. In Table 10, we present the hyperparameters used for spatiotemporal action localization. In Table 11, we present the hyperparameters used for the temporal action localization task.

## Appendix D More studies

### Large model adaptations

For completeness and the reader's reference, in Table 12 we report experimental results under our settings with large FMs under two adaptation scenarios, namely, the frozen backbone with a pooler head and the low-rank adapter. VideoMAE-v2-B/DL [51] denotes the ViT-B model distilled from ViT-g on the Kinetics710 datasets.
VideoMAE-v2-g [51] is the model pretrained on the UnlabeledHybrid dataset, while VideoMAE-v2-g/FT [51] conducts further finetuning using supervised training on Kinetics710.

\begin{table} \begin{tabular}{l|l|l} \hline \hline **Config** & **AVA v2.2** & **AVA-Kinetics** \\ \hline batch size & 256 & 256 \\ training epochs & 50 & 50 \\ ViT sequence length & 8 \(\times\) 16 \(\times\) 16 & 8 \(\times\) 16 \(\times\) 16 \\ **optimization** & \multirow{2}{*}{AdamW} & AdamW \\ optimizer momentum & 0.9 & 0.9 \\ layer decay & 0.75 & 0.75 \\ learning rate schedule & cosine decay & cosine decay \\ warmup ratio & 5\% & 5\% \\ **data augmentations** & \multirow{2}{*}{true} & true \\ random horizontal flip & \multirow{2}{*}{(0.5, 2.0)} & true \\ random scale & & (0.5, 2.0) \\ random color augmentation & & true \\ \hline \hline \end{tabular} \end{table} Table 10: Experimental configurations for spatiotemporal action localization.

\begin{table} \begin{tabular}{l|l|l|l|l|l} \hline \hline **Config** & **Kinetics400** & **Sth-sth v2** & **MiT** & **Diving48** & **Charades** \\ \hline batch size & 256 & 256 & 256 \\ training epochs & 150 & 50 & 100 & 50 \\ ViT sequence length & 8 \(\times\) 14 \(\times\) 14 & 8 \(\times\) 14 \(\times\) 14 & 8 \(\times\) 14 \(\times\) 14 & 8 \(\times\) 14 \(\times\) 14 \\ **optimization** & \multirow{2}{*}{AdamW} & AdamW & AdamW & AdamW \\ optimizer momentum & 0.9 & 0.9 & 0.9 & 0.9 \\ learning rate schedule & cosine decay & cosine decay & cosine decay & cosine decay \\ warmup ratio & 5\% & 5\% & 5\% & 5\% \\ **data augmentations** & \multirow{2}{*}{true} & \multirow{2}{*}{false} & true & true \\ random horizontal flip & \multirow{2}{*}{(0.5, 2.0)} & true & \multirow{2}{*}{(0.5, 2.0)} & \multirow{2}{*}{(0.5, 2.0)} \\ aspect ratio & & & & \\ area ratio & (0.3, 1.0) & (0.3, 1.0) & (0.3, 1.0) & (0.3, 1.0) \\ RandAug & (9, 0.5) & (9, 0.5) & - & - \\ MixUp & 0.8 & 0.8 & - & - \\ CutMix & 1.0 & 1.0 & - & - \\ **evaluation** & \multirow{2}{*}{4} & \multirow{2}{*}{1} & \multirow{2}{*}{4} & \multirow{2}{*}{4} \\ multi-clips & 4 & & & \\ multi-views & 3 & 3 & 3 & 3 \\ segment-based sample & false & true & false & false \\ \hline \hline \end{tabular} \end{table} Table 9: Experimental configurations for video classification tasks. We let the learning rate and weight decay be tunable per model to allow some flexibility for task adaptation.

### Sample-efficient transfer learning

A strong FM should be able to adapt to downstream tasks with a few training samples. In this section, we test the adaptation ability of FMs in a sample-efficient transfer learning setting. Particularly, we freeze the backbones and train a pooler head to adapt the FMs on K400 and SSv2. For each dataset, we uniformly sample \(1\%\) and \(10\%\) of the data from the training set for training and evaluate on the full evaluation set. We show our experimental results in Table 13. To better understand the data efficiency, we also show the relative Top-1 accuracy for each model (shown in brackets), which is defined as the ratio between the accuracy obtained with fewer training examples and the accuracy achieved using all the training data. A higher relative Top-1 accuracy means the performance of the model is closer to its "full" capacity under the sample-efficient setting.
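A compact sketch of this protocol (uniform subsampling, a linear pooler head on frozen features, and the relative Top-1 ratio) is given below; the training loop is a simplified stand-in and not the exact recipe of Table 9.

```python
import random
import torch
import torch.nn.functional as F

def subsample_indices(num_train, fraction, seed=0):
    """Uniformly sample a fraction (e.g. 0.01 or 0.10) of the training clips."""
    rng = random.Random(seed)
    return rng.sample(range(num_train), max(1, int(num_train * fraction)))

def train_pooler_head(features, labels, num_classes, epochs=20, lr=1e-3):
    """Fit a single linear (pooler) head on frozen clip features."""
    head = torch.nn.Linear(features.shape[1], num_classes)
    opt = torch.optim.AdamW(head.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(head(features), labels)
        loss.backward()
        opt.step()
    return head

def top1(head, features, labels):
    return (head(features).argmax(-1) == labels).float().mean().item()

# Relative Top-1 accuracy = few-shot accuracy / full-data accuracy.
relative_top1 = lambda acc_few, acc_full: acc_few / acc_full
```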
We notice that the best performed model on each dataset in fully \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{K400} & \multicolumn{3}{c}{SSv2} \\ \cline{2-7} Method & \(1\%\) & \(10\%\) & \(100\%\) & \(1\%\) & \(10\%\) & \(100\%\) \\ \hline CoCa [59] & \(27.1(37.8\%)\) & \(48.9(67.0\%)\) & \(73.1\) & \(5.6(13.4\%)\) & \(20.9(50.4\%)\) & \(41.5\) \\ CLIP [41] & \(36.9(46.2\%)\) & \(66.8(83.6\%)\) & \(79.0\) & \(8.7(19.3\%)\) & \(25.1(55.5\%)\) & \(45.3\) \\ FLAVA [47] & \(14.4(20.2\%)\) & \(35.8(50.3\%)\) & \(71.3\) & \(7.2(17.7\%)\) & \(14.3(35.3\%)\) & \(40.6\) \\ \hline VideoMAE [18] & \(15.5(23.9\%)\) & \(32.0(49.2\%)\) & \(65.0\) & \(13.7(25.4\%)\) & \(30.3(56.2\%)\) & \(53.9\) \\ InternVideo [55] & \(20.4(29.5\%)\) & \(50.2(72.4\%)\) & \(69.3\) & \(19.5(33.6\%)\) & \(41.1(70.7\%)\) & \(58.2\) \\ VATT [1] & \(34.1(45.4\%)\) & \(63.7(84.8\%)\) & \(75.1\) & \(12.9(22.4\%)\) & \(37.6(65.0\%)\) & \(57.8\) \\ \hline \hline \end{tabular} \end{table} Table 13: Benchmark FMs adaptation on video understanding tasks under sample-efficient transfer learning. This table shows Top-1 classification accuracy and the relative accuracy (shown in the bracket). Results are achieved by using frozen features with pooler head. \begin{table} \begin{tabular}{l|l} \hline \hline **Config** & **ActivityNet v1.3** \\ \hline batch size & 32 \\ training epochs & 10 \\ **feature extraction** & 15 \\ fps & 16 \\ per-clip length & 16 \\ clip stride & 16 \\ **optimization** & \begin{tabular}{l} AdamW \\ 0.9 \\ cosine decay \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 11: Experimental configurations for temporal action localization. \begin{table} \begin{tabular}{l l c c c c c} \hline \hline & \multicolumn{3}{c}{VC (A)} & \multicolumn{3}{c}{VC (M)} & \multicolumn{2}{c}{VC (ML)} \\ \cline{2-7} Model & Method & K400 & MiT & \multicolumn{1}{c}{D48} & SSv2 & \multicolumn{1}{c}{Charades} \\ \hline InternVideo-L [55] & frozen & \(78.6\) & \(33.7\) & \(69.6\) & \(67.4\) & \(20.9\) \\ InternVideo-L [55] & adapter & \(81.5\) & \(40.3\) & \(85.8\) & \(70.9\) & \(54.2\) \\ \hline VideoMAE-v2-B/DL [51] & frozen & \(86.7\) & \(38.9\) & \(61.4\) & \(57.7\) & \(33.2\) \\ VideoMAE-v2-B/DL [51] & adapter & \(86.0\) & \(41.8\) & \(82.3\) & \(66.6\) & \(53.8\) \\ \hline VideoMAE-v2-g [51] & frozen & \(59.7\) & \(20.7\) & \(42.5\) & \(44.2\) & \(12.7\) \\ VideoMAE-v2-g [51] & adapter & \(80.8\) & \(35.9\) & \(85.3\) & \(68.2\) & \(55.5\) \\ \hline VideoMAE-v2-g/FT [51] & frozen & \(82.1\) & \(35.0\) & \(60.5\) & \(56.1\) & \(22.4\) \\ VideoMAE-v2-g/FT [51] & adapter & \(85.2\) & \(42.5\) & \(84.6\) & \(70.6\) & \(58.6\) \\ \hline \hline \end{tabular} \end{table} Table 12: Evaluating large-scale FMs when using (a) frozen feature with a one-layer pooler head, and (b) low-rank adapter with frozen features. We report the Top-1 accuracy on K400, MiT, D48, SSv2 and MAP on Charades. fine-tuned model also performs best in the few-shot setting. Especially, CLIP [41] achieves \(46.2\%\) and \(83.6\%\) relative Top-1 accuracy on K400 using only \(1\%\) and \(10\%\) of the training data, respectively. On SSv2, InternVideo [55] achieves \(33.6\%\) and \(70.6\%\) relative Top-1 accuracy with only \(1\%\) and \(10\%\) of the training data.
2303.11095
Irreversibility in an optical parametric driven optomechanical system
We investigate the role of nonlinearity via optical parametric oscillator on the entropy production rate and quantum correlations in a hybrid optomechanical system. Specifically, we derive the modified entropy production rate of an optical parametric oscillator placed in the optomechanical cavity which is well described by the two-mode Gaussian state. We find a dramatic deviation in the irreversibility and quantum mutual information for small detuning. Our analysis shows that the system irreversibility can be reduced by choosing the appropriate phase of the self-induced nonlinearity. We further demonstrate that the nonlinearity effect persist for a reasonable range of cavity decay rate.
Obinna Abah, Collins O. Edet, Norshamsuri Ali, Berihu Teklu, Muhammad Asjad
2023-03-20T13:31:37Z
http://arxiv.org/abs/2303.11095v1
# Irreversibility in an optical parametric driven optomechanical system

###### Abstract

We investigate the role of nonlinearity, introduced via an optical parametric oscillator, on the entropy production rate and quantum correlations in a hybrid optomechanical system. Specifically, we derive the modified entropy production rate of an optical parametric oscillator placed in the optomechanical cavity, which is well described by a two-mode Gaussian state. We find a dramatic deviation in the irreversibility and quantum mutual information for small detuning. Our analysis shows that the system irreversibility can be reduced by choosing the appropriate phase of the self-induced nonlinearity. We further demonstrate that the nonlinearity effect persists for a reasonable range of cavity decay rates.

## I Introduction

The performance of thermal heat machines was successfully analyzed within the established framework of classical thermodynamics [1] and played a prominent role during the industrial revolution. In recent decades, thermodynamics has been extended to small classical devices/systems operating far from equilibrium by considering fluctuations via stochastic thermodynamics [2; 3]. In view of harnessing the promises of quantum technologies, there has been tremendous interest in the thermodynamical analysis of devices operating in the quantum regime [4; 5; 6; 7]. In addition, with the advancement of fabrication technology, various experiments studying nonequilibrium thermodynamics in the quantum regime have been realized [8; 9; 10]. Recently, the irreversible entropy production dynamics in mesoscopic quantum systems have been experimentally measured in two different driven-dissipative quantum systems realized by coupling bosonic systems to high-finesse cavities [11]. These two experimental platforms are a cavity-optomechanical device and a Bose-Einstein condensate (BEC) with cavity-mediated long-range interactions [12; 13]. Hybrid quantum systems exploit different physical components with complementary functionalities for efficient multi-tasking [14]. They catalyse novel fundamental research in quantum mechanics, condensed matter physics, and mesoscopic physics by providing platforms to investigate various phenomena in the quantum regime. These systems are also playing a prominent role in realizing a novel range of applications in quantum technologies, including quantum metrology [15], quantum communication [16], quantum transducers [17; 18], fundamental tests of quantum mechanics [19; 20; 21], and quantum thermal machines [22; 23; 24]. In particular, hybrid cavity optomechanical systems have attracted a lot of attention due to their integration versatility, promising reliable quantum controllability, and long coherence times [25]. These systems provide a strong analogy between quantum optomechanics and nonlinear optics, allowing optical effects to be mapped between the two settings. Moreover, a degenerate optical parametric oscillator, based on the second-order nonlinearity of optical crystals and placed inside a dissipative optomechanical cavity, has been proposed to considerably enhance entanglement [26], mechanical squeezing [27; 28], the cooling of the micromechanical mirror [29; 30; 31], and force sensing [32], and to improve the precision of position detection [33]. In addition, the optical parametric oscillator (phase-sensitive amplifier) offers promising applications in quantum communication [34; 35] and quantum sensing [36].
On the other hand, non-linearity has been shown to be a useful resource for generating non-classical quantum states [37; 38; 39], and a measure that quantifies the non-linearity of a quantum oscillator has been proposed [40]. In this paper, we study the irreversibility generated in the stationary state of a nonlinear-crystal optical parametric oscillator placed inside the optomechanical cavity. We demonstrate that the entropy production rate and the corresponding quantum correlations of the optomechanical setup are modified by this self-induced nonlinearity. We show that the irreversibility in the system is enhanced via the squeezing generated by the nonlinear medium, but that it can be reduced for specific choices of the nonlinear interaction phase. In addition, we analyze the role of the self-induced nonlinearity on the quantum correlations. The rest of the paper is structured as follows. In Section II we describe the full theoretical model Hamiltonian of the hybrid optomechanical setup and then proceed to obtain the equations of motion. Following the usual procedure, we linearize the dynamics, focusing explicitly on Gaussian states. Section III presents the results and discussion of the entropy production rate and the quantum correlations of the model. First, in Section III.1 we present the analysis of the entropy production rate of the nonlinear hybrid setup, while Section III.2 details the behaviour of the quantum mutual information. Finally, we conclude in Section IV.

## II Model

We consider an optical parametric oscillator placed in the optomechanical cavity; see Fig. 1. The cavity is driven by a laser with frequency \(\omega_{L}\) at rate \(\eta\) through one of its end mirrors. The movable cavity mirror is controlled by the mechanical resonator vibrations at frequency \(\omega_{b}\), which modulate the cavity resonance frequencies. The Hamiltonian of the system in a rotating frame at the frequency \(\omega_{L}\) of the pump field reads \[H=\Delta_{a}\hat{a}^{\dagger}\hat{a}+\xi\hat{a}^{\dagger 2}\hat{a}^{2}+\omega_{b}\hat{b}^{\dagger}\hat{b}+g\hat{a}^{\dagger}\hat{a}(\hat{b}+\hat{b}^{\dagger})-i\hbar\left(\eta^{*}\hat{a}-\eta\hat{a}^{\dagger}\right), \tag{1}\] where \(\Delta_{a}\!=\!(\omega_{C}-\omega_{L})\) is the cavity detuning, with \(\omega_{C}\) the cavity frequency, \(\hat{a}(\hat{a}^{\dagger})\) is the cavity field's annihilation (creation) operator, and \(\xi\) is the strength of the nonlinear interaction. The vibrational mode annihilation and creation operators are denoted by \(b\) and \(b^{\dagger}\), respectively. The term \(g\!=\!\sqrt{\hbar/M\omega_{b}}\,\omega_{C}/L\) is the optomechanical coupling between the bare cavity and the mechanics; \(L\) is the cavity length in the absence of the cavity field and \(M\) is the mass of the mechanical resonator. The laser rate is \(\eta=|\eta|e^{i\theta}\) with \(|\eta|=\sqrt{2\kappa\mathcal{R}/\hbar\omega_{L}}\) (\(\mathcal{R}\) is the laser power and \(\kappa\) is the cavity decay rate), and \(\theta\) is the phase of the driving laser field.
Considering the dissipation of the mechanical and cavity modes and the corresponding fluctuating noise terms, the quantum Langevin equations for the system described by the Hamiltonian can be written as \[\begin{split}\dot{\hat{a}} &= -\left(\kappa+i\Delta_{a}\right)\hat{a}-ig\left(\hat{b}+\hat{b}^{\dagger}\right)\hat{a}-2i\xi\hat{a}^{\dagger}\hat{a}^{2}+\eta+\sqrt{2\kappa}\,\hat{a}_{in},\\ \dot{\hat{b}} &= -(\gamma+i\omega_{b})\hat{b}-ig\hat{a}^{\dagger}\hat{a}+\sqrt{2\gamma}\,\hat{b}_{in},\end{split} \tag{2}\] where \(\gamma\) is the damping rate of the mechanical resonator, \(\hat{a}_{in}\) is the zero-mean (i.e., \(\langle\hat{a}_{in}\rangle\!=\!0\)) input noise operator for the optical mode, with the only non-zero correlation \(\langle\hat{a}_{in}(t)\hat{a}^{\dagger}_{in}(t^{\prime})\rangle\!=\!\delta(t-t^{\prime})\), while \(\hat{b}_{in}\) is the zero-mean (\(\langle\hat{b}_{in}\rangle\!=\!0\)) input noise operator associated with the mechanical oscillator, described by the correlation function \(\langle\hat{b}_{in}(t)\hat{b}^{\dagger}_{in}(t^{\prime})\rangle\!=\!(n_{b}\!+\!1)\delta(t-t^{\prime})\), where \(n_{b}\!=\!\left(e^{\hbar\omega_{b}/k_{B}T}-1\right)^{-1}\) is the mean thermal occupation number of the mechanical mode at temperature \(T\). To linearize the nonlinear set of equations, Eq. (2), the quantum operators can be expanded around their respective classical mean values as \(a=a_{s}+\delta a\) and \(b=b_{s}+\delta b\), where \(\delta a\) and \(\delta b\) are small quantum fluctuations around the mean fields \(a_{s}\) and \(b_{s}\). The steady-state mean-field values are obtained as follows: \[a_{s}=\frac{\eta}{\kappa+i\tilde{\Delta}_{a}},\qquad b_{s}=-\frac{g\left|a_{s}\right|^{2}}{\gamma+i\omega_{b}}, \tag{3}\] where \(\tilde{\Delta}_{a}=\Delta_{a}+2g\mathrm{Re}\,b_{s}+2\xi\left|a_{s}\right|^{2}\) is the effective detuning including the self-Kerr-induced frequency shifts. These frequency shifts are minute, i.e., \(\left|\tilde{\Delta}_{a}-\Delta_{a}\right|\ll\Delta_{a}\approx\omega_{b}\). Consequently, from this point forward, we can assume \(\tilde{\Delta}_{a}\simeq\Delta_{a}\). The related linearized quantum Langevin equations defining the dynamics of the quantum fluctuations are given by \[\begin{split}\delta\dot{\hat{a}} &= -\left(\kappa+i\Delta_{a}\right)\delta\hat{a}-iG\left(\delta\hat{b}+\delta\hat{b}^{\dagger}\right)+\chi\,\delta\hat{a}^{\dagger}+\sqrt{2\kappa}\,\delta\hat{a}_{in},\\ \delta\dot{\hat{b}} &= -\left(\gamma+i\omega_{b}\right)\delta\hat{b}-i\left(G^{*}\delta\hat{a}+G\delta\hat{a}^{\dagger}\right)+\sqrt{2\gamma}\,\delta\hat{b}_{in},\end{split} \tag{4}\] where \(G=g\langle a_{s}\rangle\) is the effective optomechanical coupling strength and \(\chi\equiv-2i\xi a_{s}^{2}\) represents the effective nonlinear interaction, with amplitude \(|\chi|\) and phase \(\phi=\tan^{-1}[\mathrm{Im}\chi/\mathrm{Re}\chi]\). To work with the linearized equations of motion, which ensure that any initial Gaussian state remains Gaussian at any instant of time [41], we define the optical quadrature operators \(\delta x_{a}\!=\!(\delta\hat{a}+\delta\hat{a}^{\dagger})/\sqrt{2}\) and \(\delta p_{a}\!=\!(\delta\hat{a}-\delta\hat{a}^{\dagger})/i\sqrt{2}\). Similarly, the quadratures of the mechanical mode are \(\delta x_{b}\!=\!(\delta\hat{b}+\delta\hat{b}^{\dagger})/\sqrt{2}\) and \(\delta p_{b}\!=\!(\delta\hat{b}-\delta\hat{b}^{\dagger})/i\sqrt{2}\). Likewise, the fluctuation operators \(x_{j,\mathrm{in}}\) and \(p_{j,\mathrm{in}}\) (\(j\!=\!a,b\)) are defined in a similar fashion.
Now, the quantum Langevin equations for the quadratures can be written in the compact matrix form \[\dot{\mathbf{R}}(t)=A\mathbf{R}(t)+\mathbf{R}_{\mathrm{in}}, \tag{5}\] where the quadrature vector is \(\mathbf{R}(t)=(\delta x_{a},\delta p_{a},\delta x_{b},\delta p_{b})^{T}\) and the noise vector is \(\mathbf{R}_{\mathrm{in}}=(\sqrt{2\kappa}\,\delta x_{a,\text{in}},\sqrt{2\kappa}\,\delta p_{a,\text{in}},\sqrt{2\gamma}\,\delta x_{b,\text{in}},\sqrt{2\gamma}\,\delta p_{b,\text{in}})^{T}\). The corresponding drift matrix \(A\) can be explicitly determined and depends on the set of parameters characterizing the dynamics of the two-mode system; it reads \[A=\begin{pmatrix}-\kappa+\chi\cos(\phi)&\Delta_{a}+\chi\sin(\phi)&0&0\\ -\Delta_{a}+\chi\sin(\phi)&-\kappa-\chi\cos(\phi)&G&0\\ 0&0&-\gamma&\omega_{b}\\ G&0&-\omega_{b}&-\gamma\end{pmatrix}. \tag{6}\] The system's dynamical equations should be stable in order for a steady state to exist, and the stability condition for the system can be formalized in terms of the Routh-Hurwitz criterion [42], which we employ in our characterization of the dynamics. Stability is achieved if the real part of the spectrum of the drift matrix \(A\) is negative, i.e., all the eigenvalues of the drift matrix \(A\) have negative real parts.

Figure 1: Schematic representation of the optomechanical system with a driven optical parametric oscillator. The optical parametric oscillator (OPO) is sandwiched between the two mirrors of the Fabry-Perot cavity. One of the cavity mirrors (right), which is attached to the mechanical oscillator, is movable, such that the resonant cavity frequency is modulated by the mechanical vibrations. The left mirror is fixed, while the optical cavity is pumped/driven by a laser.

## III Results and discussions

In this section, we proceed to analyze the irreversible entropy production rate and the quantum correlation profiles of our setup consisting of an optical parametric oscillator inside the dissipative-driven optomechanical cavity. We focus on how these physical quantities are affected by the presence of the OPO when the system reaches its stationary state.

### Entropy production and correlations matrix

In the quantum domain, the problem of calculating the entropy production of a quantum system is formulated in terms of quantum master equations [43; 44; 45; 46], quantum trajectories [47], and fluctuation theorems [48; 49], among others. Recently, a formulation for characterizing the irreversible entropy production of quantum systems interacting with nonequilibrium reservoirs, which combines quantum phase-space methods and the Fokker-Planck equation, has been put forward [50; 51; 52; 53]. This framework has been employed to experimentally measure and characterize the irreversible entropy production rates of bosonic quantum systems in two platforms: a micromechanical resonator and a Bose-Einstein condensate [11]. For the optomechanical system, the cooling of the mechanical resonator is reflected in the entropy production rates. Shahidani and Rafiee have studied the role of self-correlation on irreversible thermodynamics in a parametrically driven-dissipative system [54]. Here, we follow the framework that characterizes the entropy production as the correlation between a system and a reservoir [55; 50].
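The stability condition can be checked numerically directly from the drift matrix of Eq. (6). The short sketch below does exactly that; the parameter values (quoted in units of \(\omega_{b}\)) are illustrative assumptions for demonstration, not the values used to produce the figures.

```python
import numpy as np

def drift_matrix(kappa, gamma, delta_a, omega_b, G, chi, phi):
    """Drift matrix A of Eq. (6) for the quadrature vector (x_a, p_a, x_b, p_b)."""
    return np.array([
        [-kappa + chi * np.cos(phi),   delta_a + chi * np.sin(phi), 0.0,      0.0],
        [-delta_a + chi * np.sin(phi), -kappa - chi * np.cos(phi),  G,        0.0],
        [0.0,                          0.0,                         -gamma,   omega_b],
        [G,                            0.0,                         -omega_b, -gamma],
    ])

# Illustrative parameters in units of omega_b (assumed values, for demonstration only).
A = drift_matrix(kappa=0.5, gamma=1e-2, delta_a=1.0, omega_b=1.0,
                 G=0.1, chi=0.3, phi=0.8 * np.pi)

# Routh-Hurwitz stability <=> every eigenvalue of A has a negative real part.
eigenvalues = np.linalg.eigvals(A)
print("stable:", bool(np.all(eigenvalues.real < 0)),
      "max Re(eigenvalue):", eigenvalues.real.max())
```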
Due to the linearized dynamics of the fluctuations, and since all the quantum noise terms are Gaussian, the resulting steady state of the system is a continuous-variable Gaussian state, which can be fully characterized by the \(4\times 4\) stationary correlation matrix (CM) \(\mathcal{V}\), with components \(\mathcal{V}_{ij}=\langle\delta u_{i}(\infty)\delta u_{j}(\infty)+\delta u_{j}(\infty)\delta u_{i}(\infty)\rangle/2\). The elements of the quantum CM must satisfy the uncertainty relation \(\mathcal{V}+i\Omega\geq 0\), where \(\Omega_{ij}\) are the elements of the symplectic matrix given by the Heisenberg uncertainty principle (\([u_{i},u_{j}]=i\Omega_{ij}\)) [56]. We assume that the first moments are null, which can be achieved by choosing a suitable displacement in phase space. The equation of motion for the covariance matrix reads \[\dot{\mathcal{V}}=A\mathcal{V}+\mathcal{V}A^{T}+D, \tag{7}\] where \(D=\text{diag}\{\kappa,\kappa,\gamma(2n_{b}+1),\gamma(2n_{b}+1)\}\) is the diffusion matrix. Since the two reservoirs are prepared at different temperatures, detailed balance is broken and the system is taken to a nonequilibrium state. When the system is assumed to be stable, we obtain the Lyapunov equation for the nonequilibrium steady-state covariance matrix, \(A\mathcal{V}^{s}+\mathcal{V}^{s}A^{T}=-D\). The open dynamics of the joint system can be described in terms of Fokker-Planck equations based on the Wigner function of the system. Following the approach recently put forward, the steady-state entropy production rate \(\Pi_{s}\) is given by [50; 11] \[\begin{split}\Pi_{s}&=2\text{Tr}\left((A^{\text{irr}})^{T}D^{-1}A^{\text{irr}}\,\mathcal{V}^{s}\right)+\text{Tr}\left(A^{\text{irr}}\right)\\ &=2\kappa\left(\mathcal{V}^{s}_{11}+\mathcal{V}^{s}_{22}-1\right)+2\gamma\left(\frac{\mathcal{V}^{s}_{33}+\mathcal{V}^{s}_{44}}{2n_{b}+1}-1\right)\\ &=\mu_{a}+\mu_{b},\end{split} \tag{8}\] where \(A^{\text{irr}}=\text{diag}\{-\kappa,-\kappa,-\gamma,-\gamma\}\) and \(\mu_{a}\) (\(\mu_{b}\)) corresponds to the contribution to \(\Pi_{s}\) from the cavity (mechanical) mode, respectively. When the system is in the equilibrium state, we have \(\mathcal{V}^{s}_{11}+\mathcal{V}^{s}_{22}=1\), \(\mathcal{V}^{s}_{33}+\mathcal{V}^{s}_{44}=2n_{b}+1\), and hence \(\Pi_{s}=0\). From the Lyapunov equation, in the steady state, the diagonal and off-diagonal terms of the covariance matrix are related as follows: \[\begin{split}\mathcal{V}^{s}_{11}&=\frac{\kappa}{2}\frac{1}{\kappa-\chi\cos(\phi)}+\frac{\Delta_{a}+\chi\sin(\phi)}{\kappa-\chi\cos(\phi)}\mathcal{V}^{s}_{12},\\ \mathcal{V}^{s}_{22}&=\frac{\kappa}{2}\frac{1}{\kappa+\chi\cos(\phi)}+\frac{G}{\kappa+\chi\cos(\phi)}\mathcal{V}^{s}_{23}-\frac{\Delta_{a}-\chi\sin(\phi)}{\kappa+\chi\cos(\phi)}\mathcal{V}^{s}_{12},\\ \mathcal{V}^{s}_{33}&=\frac{2n_{b}+1}{2}+\frac{\omega_{b}}{\gamma}\mathcal{V}^{s}_{34},\\ \mathcal{V}^{s}_{44}&=\frac{2n_{b}+1}{2}+\frac{G}{\gamma}\mathcal{V}^{s}_{14}-\frac{\omega_{b}}{\gamma}\mathcal{V}^{s}_{34}.\end{split} \tag{9}\] Hence, the entropy production rate can be expressed using the off-diagonal elements of the covariance matrix, as given in Eq. (10). Equation (10) encompasses the full information on the role of the optical parametric oscillator in the irreversibility of the driven-dissipative optomechanical system. It shows that in the absence of the crystal nonlinearity (\(\chi\!=\!0\)), the role of the correlations at the steady state is explicitly established [50].
From Eq. (10), even for vanishing coupling \(G\!=\!0\), \(\Pi_{s}\) is non-zero and depends explicitly on the contribution of the optical cavity mode. This is because the nonlinear interaction drives the optical mode of the system into a nonequilibrium state, an effect which vanishes when \(\chi\!=\!0\). However, it is also interesting to see that for finite nonlinear interaction, \(\Pi_{s}\) is modified by contributions from the dynamical variables of both modes. To numerically illustrate the influence of the non-linear contribution of the optical parametric oscillator placed in a cavity on the entropy production at steady state, we consider the resolved-sideband regime \(\kappa<\omega_{b}\). In Fig. 2 we present the individual contributions \(\mu_{i}\) (\(i=a,b\)) to the entropy production rate as a function of the normalized detuning for different initial occupations of the mechanical oscillator. In panels 2(a) and (b) [(c) and (d)], we plot the rescaled cavity [mechanical] contribution to the total entropy production rate as a function of the detuning \(\Delta_{a}/\omega_{b}\) for different values of \(\chi\). For a small value of the coupling strength, \(G=0.1\,\omega_{b}\), the system is stable in both the red-detuned region \(\Delta_{a}>0\) and the blue-detuned region \(\Delta_{a}<0\). In addition, both contributions \(\mu_{a}\) and \(\mu_{b}\) are peaked at the two sidebands. In the limit of large detuning, \(\Delta_{a}\gg\omega_{b}\), the two modes are effectively decoupled, leading to vanishing \(\Pi_{s}\). It can be seen that the cavity-mode contribution to the entropy production rate, \(\mu_{a}\), always increases as the nonlinear self-interaction of the cavity mode \(\chi\) increases. On the other hand, Figs. 2(c) and (d) reveal sign changes in the mechanical-mode component \(\mu_{b}\), which captures the heating/cooling of the optomechanical system. We can see that \(\mu_{b}\) shows a decreasing (increasing) behaviour for increasing \(\chi\) in the blue-detuned (red-detuned) regime. For the contribution \(\mu_{a}\), the behaviour appears symmetric between the red-detuned and blue-detuned regimes, with the amplification being peaked at \(\Delta_{a}=0\). From Figs. 2(a) and (b) we observe that the increase in the entropy production rate (irreversibility) associated with the presence of the crystal nonlinearity (\(\chi\neq 0\)) tends to vanish at \(\Delta_{a}=\omega_{b}\) and \(\Delta_{a}=-\omega_{b}\) as the number of thermal excitations increases. In comparison, for \(\mu_{b}\), shown in Figs. 2(c) and (d), there is no appreciable effect on the behaviour of the entropy production rate outside the peaks, either from changes in \(\chi\) or from the number of thermal excitations \(n_{b}\). Fig. 3 shows the entropy production rate \(\Pi_{s}\) against the rescaled detuning \(\Delta_{a}/\omega_{b}\) and the phase \(\phi\) for different values of the non-linear parameter \(\chi\) of the optical parametric oscillator, as well as \(\Pi_{s}\) against the rescaled \(\chi\) for different cavity decay rates \(\kappa\). For the stable parameter range of the system, increasing the nonlinearity contribution enhances the entropy production rate. In Fig. 3, panels (a)-(c) consider a low number of thermal excitations of the mechanical oscillator, \(n_{b}=10\), while panels (d)-(f) assume a very high initial occupation number, \(n_{b}=100\). Focusing on the red-detuned parameter regime, Fig. 3(a) shows that the entropy production rate \(\Pi_{s}\) increases with increasing strength of the crystal nonlinearity \(\chi\) and diverges towards the resonance (\(\Delta_{a}\!=\!0\)).
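Curves of this kind can be generated numerically by solving the steady-state Lyapunov equation \(A\mathcal{V}^{s}+\mathcal{V}^{s}A^{T}=-D\) and evaluating \(\Pi_{s}\) from Eq. (8). The sketch below is a minimal illustration of that procedure; the parameter values are assumptions quoted in units of \(\omega_{b}\) and are not tuned to reproduce the published figures.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def entropy_production_rate(delta_a, chi, phi, kappa=0.5, gamma=1e-2,
                            omega_b=1.0, G=0.1, n_b=10):
    # Drift matrix A of Eq. (6) and diffusion matrix D (units of omega_b).
    A = np.array([
        [-kappa + chi * np.cos(phi),   delta_a + chi * np.sin(phi), 0.0,      0.0],
        [-delta_a + chi * np.sin(phi), -kappa - chi * np.cos(phi),  G,        0.0],
        [0.0,                          0.0,                         -gamma,   omega_b],
        [G,                            0.0,                         -omega_b, -gamma],
    ])
    D = np.diag([kappa, kappa, gamma * (2 * n_b + 1), gamma * (2 * n_b + 1)])
    # Steady-state covariance from the Lyapunov equation A V + V A^T = -D
    # (meaningful only where the Routh-Hurwitz stability condition holds).
    V = solve_continuous_lyapunov(A, -D)
    # Eq. (8): cavity and mechanical contributions to Pi_s.
    mu_a = 2 * kappa * (V[0, 0] + V[1, 1] - 1)
    mu_b = 2 * gamma * ((V[2, 2] + V[3, 3]) / (2 * n_b + 1) - 1)
    return mu_a + mu_b

# Pi_s versus detuning for an illustrative nonlinearity chi = 0.3 omega_b, phi = 0.8 pi.
for delta in np.linspace(-2.0, 2.0, 5):
    print(round(delta, 2), entropy_production_rate(delta, chi=0.3, phi=0.8 * np.pi))
```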
From plot (d), the case of a high number of thermal excitations, we see that the amount of irreversibility is reduced but remains finite, even when \(\Delta_{a}\gg\omega_{b}\). Thus, the initial thermal excitation number of the mechanical mode influences the \(\Pi_{s}\) profile. In Figs. 3(b) and (e), \(\Pi_{s}\) is shown as a function of the phase \(\phi/\pi\) for different values of \(\chi\). Specifically, plots (b) and (e) are calculated for the lower and higher numbers of excitations, respectively. We observe an increased irreversibility, with striking dips at \(\phi\!=\!1.7n\pi\) or \(-0.25n\pi\) (\(n\) an integer) due to the phase of the nonlinear self-interaction \(\chi\) of the optomechanical system. For a sufficiently large initial number of thermal excitations, \(n_{b}=100\), the \(\Pi_{s}\) for \(\chi\!=\!0.5\omega_{b}\) dips below the case without the OPO, as shown in Fig. 3(e). This clearly shows that the irreversible entropy associated with driving the system into the nonequilibrium state can be suppressed by choosing an appropriate phase-sensitive oscillator to incorporate into an optomechanical system.

Figure 2: Scaled contributions \(\mu_{a}/\omega_{b}\) and \(\mu_{b}/\omega_{b}\) to the entropy production \(\Pi_{s}\) against the normalized detuning \(\Delta_{a}/\omega_{b}\) for different strengths of the non-linear self-interaction of the mode \(a\). The black solid curves correspond to \(\chi=0\), the dashed blue curves correspond to \(\chi=0.5\omega_{b}\) and the dotted red curves correspond to \(\chi=0.3\omega_{b}\). Panels (a) and (c) represent the plots when the number of thermal excitations is \(n_{b}=10\), while panels (b) and (d) denote the case of \(n_{b}=100\). The other parameters are \(\gamma=10^{-2}\omega_{b}\), \(n_{a}=0\), \(\kappa_{a}=0.5\omega_{b}\) and \(G=0.1\omega_{b}\).

Considering that experimental optomechanical systems are affected by environmental noise, we present the impact of the cavity decay rate on the OPO-modified setup in the third row of Fig. 3 (i.e., (c) and (f)). It shows the robustness of the irreversibility associated with the OPO crystal non-linearity against the cavity decay rate \(\kappa\). For a small number of thermal excitations, \(n_{b}=10\), the impact of the decay rate is minimal up to \(\chi/\omega_{b}\simeq 1.0\), while the effect of increasing the cavity decay rate on \(\Pi_{s}\) is more pronounced for a very large number of thermal excitations. We remark that the observed reduction in the entropy production rate as \(\kappa\) increases, and its linear growth with respect to \(\chi/\omega_{b}\), are due to the increased imbalance in populations between the two modes. We also remark that the entropy production rate diverges at the point \(\kappa^{2}\!=\!\chi^{2}\cos^{2}(\phi)\).

### Quantum correlations

Let us now proceed to analyze how the presence of the optical parametric oscillator in an optomechanical cavity influences the correlation profiles. The net correlations between the two modes can be quantified by means of the quantum mutual information, \[\mathcal{I}(\rho_{a:b})=S(\rho_{a})+S(\rho_{b})-S(\rho_{ab}), \tag{11}\] where \(S(\rho)=-\mathrm{tr}\rho\ln\rho\) is the von Neumann entropy, and \(\rho_{a}=\mathrm{tr}_{b}\rho_{ab}\) and \(\rho_{b}=\mathrm{tr}_{a}\rho_{ab}\) are the reduced states of the two modes (\(a\) and \(b\)).
However, given the Gaussian nature of the states considered here, which are completely characterized by the two-mode covariance matrix \(\mathcal{V}_{ab}\), it is more convenient to use the Renyi-2 entropy \(S_{2}(\rho)=-\log\mathrm{tr}[\rho^{2}]\). For a Gaussian state with covariance matrix \(\mathcal{V}\), the Renyi-2 entropy can easily be evaluated and is given by [57] \[S_{2}(\mathcal{V})=\frac{1}{2}\ln(\mathrm{det}\mathcal{V}). \tag{12}\] Thus, the Gaussian Renyi-2 mutual information for the two-mode Gaussian state reads [57] \[\mathcal{I}(\mathcal{V}_{a:b})=\frac{1}{2}\ln\left(\frac{\mathrm{det}\mathcal{V}_{a}\det\mathcal{V}_{b}}{\mathrm{det}\mathcal{V}_{ab}}\right). \tag{13}\] Next, we consider the measure of quantum discord based on the Renyi-2 entropy, which quantifies the amount of quantum correlations beyond entanglement in a Gaussian state. The quantum discord is defined as the difference between the mutual information \(\mathcal{I}(\mathcal{V}_{a:b})\) and the one-way classical correlations \(\mathcal{J}(\mathcal{V}_{a|b})\), \[\mathcal{D}(\mathcal{V}_{a|b})=\mathcal{I}(\mathcal{V}_{a:b})-\mathcal{J}(\mathcal{V}_{a|b}), \tag{14}\] where \(\mathcal{J}(\mathcal{V}_{a|b})=\sup_{\pi_{b}(X)}\{S(\mathcal{V}_{a})-\int\mathrm{d}XP_{X}S(\mathcal{V}_{a|X}^{\pi_{b}})\}\) is the maximum decrease in the Renyi-2 entropy of subsystem \(a\) when a Gaussian measurement has been performed on subsystem \(b\), such that \(\pi_{b}(X)\geq 0\) and \(\int\mathrm{d}X\,\pi_{b}(X)\!=\!\mathbb{1}\). Considering the maximization over all the possible measurements implemented on the mode \(b\), we can express it as \[\mathcal{D}(\mathcal{V}_{a|b})\!=\!\frac{1}{2}\ln(\mathrm{det}\mathcal{V}_{b})-\frac{1}{2}\ln(\mathrm{det}\mathcal{V}_{ab})+\mathrm{inf}_{\pi_{b}}\frac{1}{2}\ln(\mathrm{det}\mathcal{V}_{a}^{\pi_{b}}). \tag{15}\] It has recently been demonstrated that the irreversibility generated by the steady state and the total amount of correlations shared between two coupled oscillators are closely related [50]. In what follows, we focus on the mutual information and quantum discord between the two modes at the stationary state, as well as the influence of the nonlinear self-interaction on them. In Figure 4 we compare the entropy production rate \(\Pi_{s}\) to the correlations established by the optomechanical system with a driven nonlinear crystal, as quantified by the mutual information \(\mathcal{I}\) and quantum discord \(\mathcal{D}\) at the phase \(\phi\!=\!0.8\pi\). In panel (a), for \(\chi\!=\!0\), we see a close similarity between the entropy production rate and the mutual information curves. For \(\chi\neq 0\), shown in Figs. 4(b) and (c), there is a striking difference between the entropy production rate and the quantum correlations. It can be seen that the entropy production \(\Pi_{s}\) increases when \(\Delta_{a}/\omega_{b}<0\) while the mutual information \(\mathcal{I}\) and discord \(\mathcal{D}\) are decreasing. The deviation can be attributed to the modification of the cavity decay rate by the nonlinear medium.

## IV Conclusions

We have studied the irreversible entropy generated in an interacting nonlinear hybrid optomechanical cavity system by a stationary driven-dissipative process. We have investigated the scenario in which a non-linear medium is placed inside a driven optomechanical system. The system is well described by a two-mode composite Gaussian state.
We have shown that the stationary-state entropy production rate depends on the strength of the nonlinear self-interaction of the optical cavity mode. Our analysis showed that the presence of the nonlinear crystal decreases the entropy production rate and quantum correlations as nonlinearity increases. We have further shown that the relationship between the entropy production rate and the quantum correlations is drastically modified by the nonlinear medium. We remark that our investigation can easily be implemented with current state-of-the-art experimental technology. As reported in an optomechanical experiment [58], the cavity frequency is \(\omega_{C}=2\pi\times 4.93\) GHz, the cavity decay rate is \(\kappa\!=\!2\pi\times 215\) kHz, the mechanical resonator frequency is \(\omega_{b}\!=\!2\pi\times 65\) MHz, the corresponding mechanical damping rate is \(\gamma\!=\!2\pi\times 15\) kHz, and the single-photon optomechanical coupling strength is \(g\!=\!2\pi\times 1.6\) MHz. Our work would benefit the current effort towards the optimization of quantum thermal devices [10; 24; 59] and a better understanding of the energetic cost of cooling optomechanical systems [60].

## Acknowledgements

MA and BT were supported by Khalifa University through project no.8474000358 (FSU-2021-018). COE and NA were supported by LRGS Grant LRGS/1/2020/UM/01/5/2 (9012-00009) provided by the Ministry of Higher Education of Malaysia (MOHE). OA acknowledges the Newcastle University Academic Track Fellowship.
2308.13296
Gender Gaps in Online Social Connectivity, Promotion and Relocation Reports on LinkedIn
Online professional social networking platforms provide opportunities to expand networks strategically for job opportunities and career advancement. A large body of research shows that women's offline networks are less advantageous than men's. How online platforms such as LinkedIn may reflect or reproduce gendered networking behaviours, or how online social connectivity may affect outcomes differentially by gender is not well understood. This paper analyses aggregate, anonymised data from almost 10 million LinkedIn users in the UK and US information technology (IT) sector collected from the site's advertising platform to explore how being connected to Big Tech companies ('social connectivity') varies by gender, and how gender, age, seniority and social connectivity shape the propensity to report job promotions or relocations. Consistent with previous studies, we find there are fewer women compared to men on LinkedIn in IT. Furthermore, female users are less likely to be connected to Big Tech companies than men. However, when we further analyse recent promotion or relocation reports, we find women are more likely than men to have reported a recent promotion at work, suggesting high-achieving women may be self-selecting onto LinkedIn. Even among this positively selected group, though, we find men are more likely to report a recent relocation. Social connectivity emerges as a significant predictor of promotion and relocation reports, with an interaction effect between gender and social connectivity indicating the payoffs to social connectivity for promotion and relocation reports are larger for women. This suggests that online networking has the potential for larger impacts for women, who experience greater disadvantage in traditional networking contexts, and calls for further research to understand differential impacts of online networking for socially disadvantaged groups.
Ghazal Kalhor, Hannah Gardner, Ingmar Weber, Ridhi Kashyap
2023-08-25T10:43:30Z
http://arxiv.org/abs/2308.13296v1
# Gender Gaps in Online Social Connectivity, Promotion and Relocation Reports on LinkedIn ###### Abstract Online professional social networking platforms provide opportunities to expand networks strategically for job opportunities and career advancement. A large body of research shows that women's offline networks are less advantageous than men's. How online platforms such as LinkedIn may reflect or reproduce gender networking behaviours, or how online social connectivity may affect outcomes differentially by gender is not well understood. This paper analyses aggregate, anonymised data from almost 10 million LinkedIn users in the UK and US information technology (IT) sector collected from the site's advertising platform to explore how being connected to Big Tech companies ('social connectivity') varies by gender, and how gender, age, seniority and social connectivity shape the propensity to report job promotions or relocations. Consistent with previous studies, we find there are fewer women compared to men on LinkedIn in IT. Furthermore, female users are less likely to be connected to Big Tech companies than men. However, when we further analyse recent promotion or relocation reports, we find women are more likely than men to have reported a recent promotion at work, suggesting high-achieving women may be self-selecting onto LinkedIn. Even among this positively selected group, though, we find men are more likely to report a recent relocation. Social connectivity emerges as a significant predictor of promotion and relocation reports, with an interaction effect between gender and social connectivity indicating the payoffs to social connectivity for promotion and relocation reports are larger for women. This suggests that online networking has the potential for larger impacts for women, who experience greater disadvantage in traditional networking contexts, and calls for further research to understand differential impacts of online networking for socially disadvantaged groups. 1University of Tehran, Iran 2University of Oxford, UK 3Saarland University, Germany [email protected], [email protected], [email protected], [email protected] Footnote 1: This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). **Accepted and forthcoming at the International AAAI Conference on Web and Social Media (ICWSM) 2024.** As of Jan 14, 2023: [https://about.linkedin.com](https://about.linkedin.com). the difficulty to achieve work-family balance Ahuja2002, Armstrong2018. Socialising and networking events within IT companies are commonly reported to be in male-oriented spaces - e.g., involving sports, pub trips, and take place outside of already long industry working hours Cross2006, Kirmton2018, McGee2018, Earles2020 - with some women feeling uncomfortable or unable to attend as often as male colleagues due to gendered family expectations, and others not receiving invitations Bjerk2008, Kirmton2018. These norms within the industry may further reinforce disadvantages faced by women within it, and limit their ability to expand their networks in beneficial ways for career progression. In contrast, online networking theoretically offers greater flexibility to participate in terms of time and location, with the potential to bolster opportunities for those that face greater constraints, such as women. It may provide opportunities to expand and build advantageous networks beyond those encountered within one's immediate work environment. 
However, whether online connectivity provides these differential payoffs by gender has received limited empirical attention, and is a question that our study examines. ## Related Work Gender differences in building, maintaining, and using offline professional networks have been widely investigated across different settings. Women's networks are generally characterised as smaller groups formed of stronger ties, and more intimate connections than men's Wang2009, Greguletz2018. Within the IT sector, leveraging a natural experiment, Bapna2018 and Funk find gender differences in network formation, with women at an IT conference meeting 42% fewer new contacts, spending 48% less time talking with them, and adding 25% fewer LinkedIn connections than men Bapna2018. Mengel proposes that it is largely differences in network use across genders that leads to higher payoffs for men Mengel2020. Their lab experiments indicate that men reward their network neighbours more than women do, and as men's networks show greater gender homophily, these benefits are disproportionately passed onto other men. Presenting yourself to potential networking contacts, interviewing successfully, and negotiating promotions all aid career advancement, but require a person to promote themselves to others. In professional contexts, research has shown that women consistently underperform when asked to self-promote, both in self-assessment and when judged anonymously by others Moss-Racusin2010, Smith2013, Rudman1998. The gendered expectation that women will act modestly has been shown to drive this underperformance Rudman1998, with women altering behaviour to avoid backlash from others who may degrade them for bragging Lindeman2018. These studies suggest that women's career progression is disfavoured in traditional network-based professional contexts. However, how these differences in networking and professional self-promotion behaviours have been translated, reproduced or offset within the context of new online professional spaces is not clear. LinkedIn, as the world's largest professional networking platform, is an important context in which to understand how men and women use online spaces differentially, because recent experimental work has shown that LinkedIn use has real job market implications for users. Rajkumar and colleagues suggest there is a causal relationship between LinkedIn's creation of new ties in networks and their translation into job opportunities Rajkumar2022, particularly highlighting the importance of moderately weak ties over strong ties in job transmission. Wheeler2018 suggest links between training job-seekers to use LinkedIn, and increased employment rates Wheeler2022. Both suggest that connectivity within online networks brings important benefits to users in their working lives, although these studies do not examine heterogeneity of these effects by gender. A body of work is beginning to describe gender gaps found across LinkedIn, showing how the platform is a gendered space for interaction, with differences in how men and women use their profiles to self-present. Altenburger2018 find that female MBA graduates are less likely to use free-form data fields on their profiles (such as the Summary and Job Description fields) but are just as likely to include structured fields (such as Honours and Skills) Altenburger2017. 
Similarly, another study suggested that men were more likely to receive and give recommendations, and to display personal and professional interests, but were unable to control for industry confounding in these behaviours Zide2014. Aguado later drew on this work but found conversely that women and more senior individuals showed greater breadth of interaction on LinkedIn (encompassing recommendations given, companies and people followed, and skills validated) Aguado2019. Women were also more likely to have completed additional sections of their profiles (such as interests identified, length of written text, languages identified). However, less is known about how these online gender differentials, particularly in connectivity to potentially advantageous users, are related to professional outcomes. To the best of our knowledge, only four previous studies have harnessed data from LinkedIn's advertising platform as we do to study gender gaps on LinkedIn. Haranko2018 analyze gender gaps on LinkedIn across 20 US cities and find that there is little variation across location, but larger variation exists across industries, with the high-tech industry as among the most gender imbalanced Haranko2018. The authors also find technical, and computing skills reported on LinkedIn to be highly male-dominated. Berte2023 examine subnational variations in gender gaps on LinkedIn in Italy, and consistent with regional labour market gender gaps find that women are also underrepresented on LinkedIn. Kashyap2018 analyze gender gaps on the platform across countries, ages, industries and seniorities, and find women to be significantly underrepresented on LinkedIn in Science, Engineering, Maths and Technology (STEM) fields, as well as in higher-level managerial positions Kashyap2021. Our work builds on these aforementioned studies, as well as Verkroost2020, which computes gender gap indices (GGIs) using LinkedIn's advertising platform data, describing variation across countries and industries within the IT sector specifically. We extend these studies by looking at gender differences in LinkedIn across additional characteristics and behaviours, such as social connectivity, promotion, and relocation reports. Footnote 2: [https://about.linkedin.com/](https://about.linkedin.com/) [https://www.linkedin.com/help/linkedin/answer/a517610/](https://www.linkedin.com/help/linkedin/answer/a517610/) inferred-age-or-gender-on-linkedin [https://www.linkedin.com/help/lms/answer/a422631](https://www.linkedin.com/help/lms/answer/a422631) [https://www.linkedin.com/campaignmanager](https://www.linkedin.com/campaignmanager) [https://worldbank.github.io/connectivity_mapping/](https://worldbank.github.io/connectivity_mapping/) linkedin_nbs/interface.html ## Data Part of LinkedIn's revenue is generated by offering an advertising platform, allowing advertisers to reach over 875 million LinkedIn global users. To maximise the effectiveness of advertising campaigns, LinkedIn offers advertisers the possibility to carefully target their audience based on a number of user attributes. The available attributes are based on a combination of self-declared information, and information inferred using machine learning from user activity and user profiles. For example, users' employment history or social network connections are based on the information explicitly provided by them in their profiles. On the other hand, age and gender are inferred from profile information, including "the pronouns used when others recommend [them] for skills. 
Similarly, information on the user's job seniority is inferred, most likely based on job titles. Information about a user's location is likely based on a combination of self-provided employment history and IP-based geolocation. Advertisers can then target their advertisements to users with a desired combination of attributes. To launch and manage advertising campaigns, LinkedIn provides advertisers with an online platform. As part of the campaign and budget planning process, the advertising platform provides advertisers with so-called "audience estimates", estimating how many LinkedIn users match the provided targeting criteria selected by the advertiser. These aggregated estimates, which are provided free of cost before launching an advertising campaign, create a kind of digital census: for any chosen set of targeting attributes, potential advertisers can obtain a count estimate of how many of its users match the targeting criteria. These estimates can be collected programmatically through an API. For our analysis, we collected a large number of such individual audience estimates by repeatedly modifying the targeting criteria provided to the advertising platform. We decided to limit our data collection to cover LinkedIn users in the US and the UK, two of the largest user populations on LinkedIn [12] that are also culturally similar. We further narrowed our data population by only collecting audience estimates for LinkedIn users currently employed in the information technology (IT) sector, which we define by the company industry of users, covering the following 11 industries: Internet, Information Technology and Services, Computer Software, Computer and Network Security, Computer Hardware, Computer Networking, Wireless, Telecommunications, Semiconductors, Nanotechnology, and Consumer Electronics. Aligned with previous work [20], our definition of the IT sector relies on the OECD definition of the ICT industry [13], as defined according to International Standard Industry Classification (ISIC) Revision 4. This definition includes manufacturing-related industries, such as electronics and semiconductors, in addition to service-related parts of the industry, like software and internet services. It is worth noting that the names of some of these industries on LinkedIn have been changed from 2021 to 2023. Therefore, we have provided the mapping of their previous names to their current names in Table 1. For the above-mentioned selection, we then looped over combinations of (i) a user's location, either the US or the UK, (ii) their inferred job seniority, (iii) whether they recently reported a promotion or relocation, (iv) their gender, (v) their age range (vi), and whether they have a social network connection to an employee at (at least) one of several big companies, namely, Facebook, Apple, Amazon Web Services, Microsoft, and Google. Together, these companies are often referred to as the Big Five and are seen as the quintessential representatives of Big Tech [1]. As many connections to employees at these companies are likely to come from colleagues at the same companies, we excluded LinkedIn users who were at the time of data collection working at any of the Big Five companies. Using this measure of social connectivity captures the influence of external social networks at prestigious companies. 
Prestigious external networks can affect an individual's job prospects by providing better access to labour-market information, exposure to new ideas from established companies, and the potential for external job offers from these companies. Together, these can increase the perceived desirability of employees, which may make an individual's existing company more likely to retain or promote them, or make them more competitive in the job market generally. Users are considered to have relocated if they have "recently relocated their permanent location", while LinkedIn infers recent promotions based on user-provided profile updates to employment history. Table 2 provides an overview of the features that we collected. This data was collected in June and July of 2021. Data were preprocessed and stored as a CSV file. We analysed the counts of users for each combination of these variables (a total of 192 unique combinations: 2 (gender) \(\times\) 4 (seniority) \(\times\) 4 (age range) \(\times\) 3 (recent status) \(\times\) 2 (social connectivity)) to make sure they are not zero or, in other words, to remove sparsity from the dataset. Our final dataset consisted of 156 combinations of the variables, as 36 were dropped due to small audience counts. More specifically, to protect user privacy, LinkedIn does not provide audience estimates when the targeted audience is smaller than 300. Note that, apart from sparsity, our aggregate data is conceptually _equivalent_ to individual-level data for the set of covariates considered, as our data contains all possible cross-tabulations. The dataset and analysis code supporting the conclusions are available at [https://github.com/kalhorghazal/icwsm-promotionrelocation-genderaps](https://github.com/kalhorghazal/icwsm-promotionrelocation-genderaps).

## Results

### Gender Differences in LinkedIn Use

Our data show that there are fewer women than men in the IT sector on LinkedIn, consistent with previous work on LinkedIn [20]. While this gender gap on LinkedIn may reflect that there are fewer women working in the IT sector [21], it may also reflect how women working within the IT sector select into being LinkedIn users. The distribution across age groups differs between men and women, with Figure 1 showing that women on the platform and working in the IT sector are proportionately younger than men. Women aged 25 to 34 make up half of the female professional population in IT on LinkedIn, while the male distribution is flatter, with similar proportions of workers in both the 25 to 34 and 35 to 54 age categories. Lower proportions of women older than 35 perhaps reflect workforce departure after family formation, or increasing gender balance in entry to the sector over time. We also find that women in IT on LinkedIn are more junior than their male counterparts. Figure 1 shows the distributions of women and men across seniority levels to be largely similar at the senior and manager levels, but that a higher proportion of all women are working in entry-level positions than men, and a correspondingly lower proportion of women are employed at the highest director rank. The results of a Kolmogorov-Smirnov test [14] indicate that these differences between women's and men's job seniority distributions are highly statistically significant (\(p<10^{-16}\)).
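One way such a distributional comparison can be carried out from aggregate audience counts is sketched below: the seniority-level counts are expanded into ordinal pseudo-samples and compared with a two-sample Kolmogorov-Smirnov test. This is an assumed reconstruction of the procedure, not the authors' released code; the counts are the seniority-level populations from Table 3, scaled down purely to keep the illustration small.

```python
import numpy as np
from scipy.stats import ks_2samp

# Seniority-level populations (Entry, Senior, Manager, Director) from Table 3,
# in units of 1,000 users, scaled down purely for illustration.
female_counts = np.array([1580, 1380, 390, 270])
male_counts   = np.array([2450, 2320, 710, 630])

def expand_counts(counts):
    """Turn aggregate counts per ordinal category into a pseudo-sample of level indices."""
    return np.repeat(np.arange(len(counts)), counts)

# With the full (unscaled) populations the same difference is far more significant.
stat, p_value = ks_2samp(expand_counts(female_counts), expand_counts(male_counts))
print(stat, p_value)
```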
Our next analyses of the promotion and relocation reporting behaviours hence normalises the male and female populations by age and seniority distribution to make it possible to isolate differences attributable to gender using demographic standardisation methods [15]. When applying adjustment to our calculations, we consider gender-agnostic age or seniority distributions on LinkedIn as the reference distribution. ### Promotion and Relocation Reports by Gender Table 3 displays promotion report rates by age- and seniority- groups, and by gender. The age- and seniority-adjusted rates show that women are between 6.5% and 18.8% more likely than men to recently report a promotion. As shown in Table 3, within each age category the youngest LinkedIn users are most likely to report promotions, with successively older age groups each reporting promotions at lower rates. Male and female distributions track a similar trend with increasing age, but women consistently have higher rates of promotion reports than men across all age groups. After age-adjustment, per 100,000 women, 1,173 had recently been promoted compared to 1,101 per 100,000 men, indicating a highly statistically significant difference in promotion reports by gender of 72 per 100,000 (\(z=10.285\), \(SE=7.000\times 10^{-5}\), \(p<0.001\)). LinkedIn users at successively higher seniority levels are more likely to report their recent promotions, except at the director level (Table 3). This is in line with existing work showing that senior employees are more likely to be promoted than juniors [16], although our data cannot differentiate between actual promotion rates of users, and the rate at which users report a promotion they have received to their online network. Promotion report rates increase most between the manager and director levels, which is also the seniority group among which there is the largest difference between male and female reports. Our findings further indicate that it is only at manager level that differences in promotion reports begin to emerge strongly, with LinkedIn's entry and senior groups showing similar rates of promotion reports for both genders. Across all except entry-level users, women have a higher promotion report rate than men. After seniority adjustment, this higher rate of female promotion reports is maintained, with 1,259 per 100,000 female users and 1,060 per 100,000 male users reporting promotion. This difference, of 199 per 100,000, is highly statistically significant (\(z=28.349\), \(SE=7.020\times 10^{-5}\), \(p<0.001\)). Thus, overall, even adjusting for gender differences in age and seniority composition among LinkedIn users, we see women \begin{table} \begin{tabular}{c c} \hline \hline Previous Name & Current Name \\ \hline Internet & Technology, Information and Internet \\ Information Technology and Services & IT Services and IT Consulting \\ Computer Software & Software Development \\ Computer Hardware & Computer Hardware Manufacturing \\ Computer Networking & Computer Networking Products \\ Wireless & Wireless Services \\ Semiconductors & Semiconductor Manufacturing \\ Nanotechnology & Nanotechnology Research \\ Consumer Electronics & Computers and Electronics Manufacturing \\ \hline \hline \end{tabular} \end{table} Table 1: Mapping of previous names of industries to current names. 
\begin{table} \begin{tabular}{c c} \hline \hline Feature & Possible Values \\ \hline Gender & Female, Male \\ Job Seniority & Entry, Senior, Manager, Director \\ Age Range & 18 to 24, 25 to 34, 35 to 54, 55+ \\ Recent Status & Promoted, Relocated, Any \\ Social Connectivity & Connected to big companies, Any \\ Count & Integer Values \(\geq 300\) \\ \hline \hline \end{tabular} \end{table} Table 2: The dataset features and their possible values. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Age Range / & \multicolumn{2}{c}{Number of Relocations} & \multicolumn{2}{c}{Population (Millions)} & \multicolumn{2}{c}{Rate per 100,000} & \multicolumn{2}{c}{Weight} & \multicolumn{2}{c}{Weighted Rate} \\ \hline & Female & Male & Female & Male & Female & Male & & Female & Male \\ 18-24 & 2640 & 5950 & 0.36 & 0.54 & 733.3 & 1101.8 & 0.093 & 68.2 & 102.5 \\ 25-34 & 18790 & 30340 & 1.81 & 2.60 & 1038.1 & 1166.9 & 0.452 & 469.2 & 527.4 \\ 35-54 & 17610 & 42100 & 1.28 & 2.49 & 1375.8 & 1690.8 & 0.388 & 533.8 & 656.0 \\ 55+ & 1750 & 8100 & 0.17 & 0.48 & 1029.4 & 1687.5 & 0.067 & 69.0 & 113.1 \\ Total & 40790 & 86490 & 3.62 & 6.11 & 1126.8 & 1415.5 & 1.000 & **1140.2** & **1399.0** \\ \hline Entry & 13490 & 27200 & 1.58 & 2.45 & 1415.5 & 1110.2 & 0.414 & 586.0 & 459.6 \\ Senior & 17440 & 35250 & 1.38 & 2.32 & 1263.8 & 1519.4 & 0.380 & 480.2 & 577.4 \\ Manager & 4760 & 8480 & 0.39 & 0.71 & 1220.5 & 1194.4 & 0.113 & 137.9 & 135.0 \\ Director & 5100 & 15560 & 0.27 & 0.63 & 1888.9 & 2469.8 & 0.093 & 175.7 & 229.7 \\ Total & 40790 & 86490 & 3.62 & 6.11 & 1126.8 & 1415.5 & 1.000 & **1379.8** & **1401.7** \\ \hline \hline \end{tabular} \end{table} Table 4: Age and seniority adjustment calculations for relocation report ratios of each gender. Figure 1: Age range and job seniority distributions of users disaggregated by gender. 95% confidence intervals shown, with standard errors computed via bootstrapping. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Age Range / & \multicolumn{2}{c}{Number of Promotions} & \multicolumn{2}{c}{Population (Millions)} & \multicolumn{2}{c}{Rate per 100,000} & \multicolumn{2}{c}{Weighted Rate} \\ Seniority Level & \multicolumn{2}{c}{\((a)\)} & \multicolumn{2}{c}{\((b)\)} & \multicolumn{2}{c}{\((c=(a/b)\times 100,000)\)} & \multicolumn{2}{c}{Weight \((d)\)} & \multicolumn{2}{c}{\((c\times d)\)} \\ \hline & Female & Male & Female & Male & Female & Male & & Female & Male \\ 18-24 & 5880 & 8000 & 0.36 & 0.54 & 1633.3 & 1481.5 & 0.093 & 151.9 & 137.8 \\ 25-34 & 21390 & 30290 & 1.81 & 2.60 & 1181.8 & 1165.0 & 0.452 & 534.2 & 526.6 \\ 35-54 & 15160 & 26600 & 1.28 & 2.49 & 1184.4 & 1068.3 & 0.388 & 459.5 & 414.5 \\ 55+ & 690 & 1590 & 0.17 & 0.48 & 405.9 & 331.2 & 0.067 & 27.2 & 22.2 \\ Total & 43120 & 66480 & 3.62 & 6.11 & 1191.2 & 1088.0 & 1.000 & **1172.8** & **1101.0** \\ \hline Entry & 1540 & 2500 & 1.58 & 2.45 & 97.5 & 102.0 & 0.414 & 40.4 & 42.2 \\ Senior & 21390 & 33160 & 1.38 & 2.32 & 1550.0 & 1429.3 & 0.380 & 589.0 & 543.1 \\ Manager & 11950 & 17260 & 0.39 & 0.71 & 3064.1 & 2431.0 & 0.113 & 346.2 & 274.7 \\ Director & 8240 & 13560 & 0.27 & 0.63 & 3051.8 & 2152.4 & 0.093 & 283.8 & 200.2 \\ Total & 43120 & 66480 & 3.62 & 6.11 & 1191.2 & 1088.0 & 1.000 & **1259.4** & **1060.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Age and seniority adjustment calculations for promotion report ratios by gender. showing higher promotion reports. 
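The age-adjusted rates in Table 3 can be reproduced directly from the tabulated counts. The sketch below is a minimal illustration (not our analysis code) of the direct standardisation step, assuming binomial variances within each age stratum for the standard error of the adjusted difference; the inputs are the age-stratified promotion counts and populations from Table 3.

```python
# Direct (age-)standardisation of promotion report rates, as in Table 3.
import numpy as np

weights = np.array([0.093, 0.452, 0.388, 0.067])        # reference age distribution (both genders)
prom_f  = np.array([5880, 21390, 15160, 690])           # promotion reports, women, by age group
prom_m  = np.array([8000, 30290, 26600, 1590])          # promotion reports, men, by age group
pop_f   = np.array([0.36, 1.81, 1.28, 0.17]) * 1e6      # female population per age group
pop_m   = np.array([0.54, 2.60, 2.49, 0.48]) * 1e6      # male population per age group

p_f, p_m = prom_f / pop_f, prom_m / pop_m               # stratum-specific proportions
adj_f = np.sum(weights * p_f)                           # age-adjusted proportion, women
adj_m = np.sum(weights * p_m)                           # age-adjusted proportion, men

# Standard error of the difference of two weighted sums of (assumed binomial) proportions.
var = np.sum(weights**2 * (p_f * (1 - p_f) / pop_f + p_m * (1 - p_m) / pop_m))
z = (adj_f - adj_m) / np.sqrt(var)

print(f"adjusted rates per 100,000: women {adj_f*1e5:.1f}, men {adj_m*1e5:.1f}")
print(f"difference {1e5*(adj_f-adj_m):.0f} per 100,000, z = {z:.1f}")
# approx. 1173 vs 1101 per 100,000 and z on the order of 10, in line with the values above
```

The same weighting applied to the seniority strata, or to the relocation counts in Table 4, yields the corresponding seniority-adjusted and relocation rates.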
Table 4 displays age- and seniority-adjusted rates of male and female relocation, according to LinkedIn's categorisations, and suggests that men are 22.7% (age-adjusted) and 1.6% (seniority-adjusted) more likely to report relocation than women. After age-adjustment, per 100,000 women, 1,140 had recently relocated, compared to 1,399 per 100,000 men, indicating 259 fewer relocation reports per 100,000 for women (\(z=34.430\), \(SE=7.517\times 10^{-5}\), \(p<0.001\)). After seniority adjustment, the difference between male and female relocation reports is smaller (-21.8 per 100,000) than after age-adjustment, but still statistically significant (\(z=2.831\), \(SE=7.772\times 10^{-5}\), \(p<0.01\)). The increase in rates of female relocation after seniority adjustment indicates that either mobility among female-dominated entry positions is lower than average across ranks, and/or that mobility among male-dominated director positions is higher. As shown in Table 4, entry-level women relocate at a higher rate than men, but among senior employees and directors, males relocate significantly more, in line with previous work showing that women are more mobile earlier in their careers (Branden and Strom, 2011). For both genders, older users are more likely to report their recent relocations except for the 55+ age range, where women's likelihood of reporting relocation decreases to similar levels as 25 to 34 year-olds, but men's likelihood roughly stagnates. At all ages, women are significantly less likely to report relocation. ### Gender Differences in Social Connectivity Women not working at a Big Tech company are less likely than men to be connected to an employee of a Big Tech company, as shown in Figure 2, in line with literature showing that women's networks are not as advantageous as men's (Greguletz, Diehl, and Kreutzer, 2019). As shown in Figure 3, social connectivity increases with seniority for both male and female users, with men having higher or the same rates of connectivity at each rank, but with smaller gender differentials at the more senior Manager and Director ranks. This likely reflects a type of survivorship bias (Fryer, 2007; Smith and Huntoon, 2013) among the women at these higher ranks - although women are less likely to be at these higher ranks, those who are present in them are positively selected, and fairly equally socially connected to men at these ranks. While social connectivity is similarly low for all the youngest users aged 18 to 24, a gender connectivity gap emerges and widens in the successive age groups from the ages of 25 to 54, peaking among 35-54 year olds (Figure 4). The gap in these age groups widens across the ages at which women are often likely to experience interrupted labour market trajectories due to family formation and caregiving roles, and at which professional careers are likely to become well established. Among the oldest employees, men over 55 maintain a connectivity advantage over female colleagues, but both show social connectivity levels comparable to the youngest (18-24) age group. ### Relationship between Social Connectivity and Status Reports To assess the relationship between social connectivity and promotion and relocation reports for LinkedIn users, we estimate different logistic regression models (Kleinbaum et al., 2002), as promotion and relocation reports are dichotomous (binary) outcomes. 
Through this analysis, we examine whether potentially advantageous online connections are associated with job progression outcomes, and whether this association differs by gender. We estimate four models with different combinations of predictors, with social connectivity being our key predictor of interest, and compare them using Akaike Information Criterion (AIC) (Akaike, 1974) to determine the model with the best fit (smaller AIC value). The dataset on which these regressions are estimated is 9,735,600 rows, each representing a specific user. These rows are obtained by unrolling our aggregated dataset that contains counts of each combination of targeting attributes. Table 5 shows an example of the initial aggregated dataset. Table 6 and 7 present the unrolled (individual-level) versions used for the promotion and relocation report models. To examine the differential impacts of social connectivity by gender, we include a social connectivity \(\times\) gender interaction term in our models. We also check for potential multicollinearity of predictor variables in the regression models by calculating the variance inflation factor (VIF) of each variable. As all observed factors were smaller than 5.0 (the biggest factor was 2.8 for the interaction of gender and social connectivity), we did not exclude any predictors from the models, following standard guidelines (Akinwande, Dikko, and Samson, 2015). Table 8 shows the model estimates with the outcome of whether a user has recently reported a promotion or not. Based on the odds ratios of social connectivity, which are higher than one, we can conclude that users Figure 2: Bar blot of socially connected and unconnected users disaggregated by gender. 95% confidence intervals shown, with standard errors computed via bootstrapping. connected to Big Tech companies, i.e. those with potentially advantageous external networks, are more likely to report recent promotions. Consistent with our age- and seniority-specific analyses shown previously, these models also show that younger users, higher seniorities, as well as women have higher odds of reporting promotions, and that these gender differences in promotion reports persist even after we control for age and seniority. Looking at the social connectivity \(\times\) gender interaction, we see a positive interaction (odds ratio higher than 1), which suggests that social connectivity has higher payoffs (stronger positive association) for women in predicting promotion. This differential impact by gender can be seen in Figure 5 that shows the predicted probabilities of promotion by gender (holding other covariates at their mean/mode values) by social connectivity from the full model shown in Table 8. Although social connectivity is not a good predictor, the social connectivity is not a good predictor. \begin{table} \begin{tabular}{c c c c c} \hline Gender & Job Seniority & Age Range & Social Connectivity & Reconet Status & Count \\ \hline Female & Senior & 35 to 54 & Connected to big companies & Any & 3 \\ Female & Senior & 35 to 54 & Connected to big companies & Relocated & 1 \\ Male & Manager & 25 to 34 & Not connected & Promoted & 3 \\ \hline \end{tabular} \end{table} Table 6: Unrolled version of the example in Table 5 for the promotion report models. 
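The modelling step described above can be sketched as follows. This is a minimal illustration rather than the released analysis code: the input cells are synthetic, and we assume the "Any" marginals have already been differenced into mutually exclusive promoted/not-promoted and connected/not-connected counts. The unrolling to one row per user, the gender by social-connectivity interaction, the AIC comparison, and the exponentiation of coefficients into odds ratios mirror the description in the text.

```python
# Minimal sketch of the logistic-regression step on unrolled (individual-level) data.
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for g, s, a, c in product(["Female", "Male"], ["Entry", "Senior"],
                          ["25 to 34", "35 to 54"], [0, 1]):
    n = 5000                                           # stratum size (illustrative)
    p = 0.010 + 0.005 * c + 0.002 * (g == "Female")    # illustrative promotion probability
    k = int(rng.binomial(n, p))
    rows.append((g, s, a, c, 1, k))                    # promoted count in this cell
    rows.append((g, s, a, c, 0, n - k))                # not-promoted count in this cell
cells = pd.DataFrame(rows, columns=["gender", "seniority", "age",
                                    "connected", "promoted", "count"])

# Unroll aggregated counts into one row per user.
micro = (cells.loc[cells.index.repeat(cells["count"])]
              .drop(columns="count").reset_index(drop=True))

base = smf.logit("promoted ~ C(gender) + C(seniority) + C(age) + connected",
                 data=micro).fit(disp=0)
full = smf.logit("promoted ~ C(gender) + C(seniority) + C(age) + connected"
                 " + connected:C(gender)", data=micro).fit(disp=0)

print("AIC without / with interaction:", round(base.aic, 1), round(full.aic, 1))
print(np.exp(full.params))    # odds ratios, analogous to those reported in Table 8
```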
\begin{table} \begin{tabular}{c c c c c} \hline Gender & Job Seniority & Age Range & Social Connectivity & Relocation Status \\ \hline Female & Senior & 35 to 54 & Connected to big companies & Not relocated \\ Female & Senior & 35 to 54 & Connected to big companies & Not relocated \\ Female & Senior & 35 to 54 & Connected to big companies & Relocated \\ \hline \end{tabular} \end{table} Table 7: Unrolled version of the example in Table 5 for the relocation report models. Figure 4: Gender-disaggregated bar plot of social connectivity ratio (proportion connected to big company) at each age group. 95% confidence intervals shown, with standard errors computed via bootstrapping. \begin{table} \begin{tabular}{c c c c c} \hline Gender & Job Seniority & Age Range & Social Connectivity & Relocation Status \\ \hline Female & Senior & 35 to 54 & Connected to big companies & Not promoted \\ Female & Senior & 35 to 54 & Connected to big companies & Not promoted \\ Male & Manager & 25 to 34 & Not connected & Promoted \\ Male & Manager & 25 to 34 & Not connected & Promoted \\ Male & Manager & 25 to 34 & Not connected & Promoted \\ \hline \end{tabular} \end{table} Table 8: Unrolled version of the example in Table 5 for the promotion report models. Figure 3: Gender-disaggregated bar plot of social connectivity ratio (proportion connected to big company) at each seniority level. 95% confidence intervals shown, with standard errors computed via bootstrapping. boosts promotion probability for both men and women, it bolsters women's probability of reporting a promotion comparatively more than men, approximately 0.046 percent, or 46 per 100,000 promotions more, which translates to a 3.86 percent increase above the mean promotion report rate for women of 1191.2 per 100,000 (as shown in Table 3). Table 9 presents results from logistic regression models predicting users' relocation status. Once again, the odds ratios of social connectivity are higher than one, leading us to conclude that users connected to big companies are more likely to report recent relocations. Similar to the previous analysis of promotion reports, the best model (lowest AIC) is the one that includes all predictors of user characteristics and the social connectivity \(\times\) gender interaction. Once again, we observe a positive interaction effect between gender \(\times\) social connectivity. However, as shown in Figure 5, the gender gap in relocation among those that are socially connected to Big Tech firms on LinkedIn is smaller - indicating a higher payoff to social connectivity for women compared to men. To correct for multiple comparisons, such as multiple variables potentially being statistically significant, all p-values in Table 8 and 9 have been adjusted using the Bonferroni correction (k = number of variables of each model) (Rice, Schork, and Rao 2008). ## Ethical Considerations The data collected and used in this paper consist of aggregate and anonymous user counts. As the smallest identifiable unit contains 300 users, we see little to no reidentification risk of individual users. While such aggregate data could, theoretically, still be used to map vulnerable groups, we do not see the general data source - LinkedIn - nor the particular targeting attributes posing a danger for this. The data used is also accessible free of charge on LinkedIn's advertising platform through APIs. This type of data access can be seen as a type of Data Collaborative, providing a non-standard way of enabling partial auditing of large platforms. 
In terms of the privacy _expectation_ of users, we believe that this type of data is less problematic than public individual-level data consisting of posts, comments, or pictures. At the same time, we acknowledge that LinkedIn users might be unaware that their data can be accessed and analyzed in this way, even if in anonymous and aggregate form (Anderson and Leigh Anderson 2020). LinkedIn and all other online advertising platforms we are aware of only support binary female-or-male gender for ad targeting on the platform, though this leaves some users unclassified. This could be viewed as a form of exclusion, if not erasure, of gender minorities (Bivens and Haimson 2016). At the same time, this design choice limits the potential use of the advertising platform for targeted harassment of gender minorities. Furthermore, the binary gender used by LinkedIn is automatically inferred, and also draws on pronouns used by other users referring to the user, rather than being based on self-identified information. This will undoubtedly result in misclassifications of some users. In a user-facing system, such misclassification has the risk of causing psychological harm by misgendering the user without providing them with an option to self-declare their gender identity (Hoffmann 2018; Keyes 2019). However, LinkedIn seems to only assign a gender to cases with sufficient confidence. For example, of all the 6,500,000 LinkedIn users in the US working in IT (using the same definition as in the paper), LinkedIn's advertising platform classifies 2,100,000 as female and 3,900,000 as male, leaving 500,000 unclassified. For cases that can be mapped with sufficient confidence, openly accessible name-to-gender mappings achieve accuracies of 95-97% (Santamaria and Mihaljevic 2018). This suggests that for those users for whom a (binary) gender is inferred, the precision is likely to be high. For our population-level study of relative female-vs-male gender gaps, we feel that these data are able to highlight important aggregate gender differences, and that this value outweighs the harm caused by potential (population-level) gender misidentification. 
While the data preclude us from distinguishing whether the observed gender differences in promotion rates reflect differences in propensity to report promotions, or the actual prevalence of promotions among LinkedIn users, we offer two plausible interpretations of these findings, which are not mutually exclusive. First, aligned with prior studies that show positive selection effects by gender on online platforms such as LinkedIn (Kashyap and Verkroost 2021; Verkroost et al. 2020) or Google+ (Magno and Weber 2014), women who are on LinkedIn, especially in a highly unequal industry such as IT, may be high-achieving, professionally driven, and positively selected. Second, women on LinkedIn \begin{table} \begin{tabular}{c c c c c} \hline \hline _Dependent variable = Recently promoted_ & \multicolumn{4}{c}{_Odds ratio_} \\ _(ref: Not promoted)_ & \multicolumn{4}{c}{_(standard error)_} \\ \hline Gender (ref: Male) & 1.032** & 1.142*** & 1.142*** & 1.079*** \\ Female & (0.010) & (0.011) & (0.007) & (0.010) \\ Job Seniority (ref: Entry) & & 12.572*** & 14.974*** & 14.964*** \\ Senior & - & (0.206) & (0.246) & (0.246) \\ Manager & - & 22.272*** & 28.147*** & 28.068*** \\ & - & (0.376) & (0.477) & (0.476) \\ Director & - & 18.119*** & 26.224*** & 26.179*** \\ Age Range (ref: 18 to 24) & 0.709*** & - & 0.567*** & 0.567*** \\ 25 to 34 & (0.007) & - & (0.006) & (0.006) \\ 35 to 54 & 0.518*** & - & 0.294*** & 0.294*** \\ & (0.005) & - & (0.003) & (0.003) \\ 55+ & 0.170*** & - & 0.090*** & 0.091*** \\ & (0.004) & - & (0.002) & (0.002) \\ Social Connectivity (ref: Not connected) & 3.811*** & 2.462*** & 3.045*** & 2.922*** \\ Connected to big companies & (0.031) & (0.020) & (0.019) & (0.024) \\ Social Connectivity \(\times\) Gender & 1.227*** & 1.180*** & - & 1.109*** \\ & (0.015) & (0.015) & - & (0.014) \\ Constant & 0.010*** & 0.001*** & 0.001*** & 0.001*** \\ & (0.000) & (0.000) & (0.000) & (0.000) \\ AIC & 1141919 & 1079141 & 1054174 & **1054109** \\ \(N\) & \multicolumn{4}{c}{9,608,320} \\ \hline \(*p<0.05,**p<0.01,***p<0.001\) & & & \\ \hline \end{tabular} \end{table} Table 8: Estimates (odds ratios) from logistic regression models predicting promotion status by user characteristics. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline _Dependent variable = Recently promoted_ & \multicolumn{4}{c}{_Odds ratio_} \\ _(ref: Not promoted)_ & \multicolumn{4}{c}{_(standard error)_} \\ \hline Gender (ref: Male) & 0.787*** & 0.787*** & 0.838*** & 0.795*** \\ Female & (0.006) & (0.006) & (0.005) & (0.006) \\ Job Seniority (ref: Entry) & - & 1.381*** & 1.324*** & 1.323*** \\ Senior & - & (0.009) & (0.009) & (0.009) \\ Manager & - & 1.165*** & 1.110*** & 1.106*** \\ Manager & - & (0.012) & (0.011) & (0.011) \\ & & 2.147*** & 1.990*** & 1.987*** \\ Director & - & (0.019) & (0.018) & (0.018) \\ Age Range (ref: 18 to 24) & 1.158*** & & 1.123*** & 1.124*** \\ 25 to 34 & (0.014) & - & (0.013) & (0.013) \\ 35 to 54 & 1.555*** & - & 1.381*** & 1.383*** \\ & (0.018) & - & (0.016) & (0.016) \\ & 1.461*** & - & 1.252*** & 1.255*** \\ 55+ & (0.022) & - & (0.019) & (0.019) \\ Social Connectivity (ref: Not connected) & 1.247*** & 1.191*** & 1.210*** & 1.150*** \\ Connected to big companies & (0.009) & (0.009) & (0.007) & (0.008) \\ Social Connectivity \(\times\) Gender & 1.190*** & 1.176*** & - & 1.185*** \\ & (0.015) & (0.015) & - & (0.015) \\ & 0.010*** & 0.011*** & 0.009*** & 0.009*** \\ Constant & (0.000) & (0.000) & (0.000) & (0.000) \\ AIC & 1346296 & 1342040 & 1340790 & **1340623** \\ \(N\) & \multicolumn{4}{c}{9,626,000} \\ \hline \(*p<0.05,**p<0.01,***p<0.001\) & & & \\ \hline \end{tabular} \end{table} Table 9: Estimates (odds ratios) from logistic regression models predicting relocation status by user characteristics. may choose to more actively share recent promotions to their online networks, seeking visibility from the wider professional community at lower costs afforded through online platforms. These benefits from online networking may help women who face greater disadvantage in accessing offline networking due to gendered family or caring responsibilities (e.g., attending conferences or socialising after work), or have smaller offline professional networks. The fact that women often face greater constraints, e.g., related to gendered family expectations, in making job-related decisions is suggested by our finding that even among this sample of professionally motivated women, women are less likely to report relocations. Further suggestive of these constraints is our finding that the social connectivity gap on LinkedIn between men and women is greatest during the childbearing ages. While the lower relocation rate may reflect a lower availability to relocate among women, the differences in promotion versus relocations also suggest the pursuit of different career progression strategies for men and women, which are likely to be shaped by differing choice sets, norms and expectations. Our findings add an important gendered nuance to recent research highlighting the value of online professional networking via LinkedIn for job search and mobility processes [23, 24]. We find that although women not working at Big Tech firms on average have lower social connectivity to those at Big Tech companies than men, the payoffs to online social connectivity for those with these networks are larger for women compared with men. While Rajkumar et al. (2022) show the causal effects of weak ties on LinkedIn for job search, our findings suggest returns to these ties may be even larger for women, who have conventionally faced greater disadvantages in accessing potentially advantageous network ties in the labour market through traditional forms of networking. 
We acknowledge nonetheless that the cross-sectional nature of the data implies that our findings cannot be given a causal interpretation, and are susceptible to the potential for reverse causality. Online social connectivity may not be the driver of higher promotion rates; rather, women who are recently promoted may be seen as more successful and attract more social connections. As such, the formation of social links reflects a bidirectional process, which makes the interpretation of social connectivity tricky. This process is also likely to be algorithmically mediated, and the acceptance rate of such requests could also correlate with other characteristics, which we are unable to control for in our analyses. Longitudinal data are needed to better disentangle these mechanisms underlying social connectivity and job progression. We acknowledge that the data come with additional limitations. The audience counts obtained from the marketing platform may include fake accounts, and also include misrepresented or inaccurate affiliations. Moreover, there is a sparsity-related limitation, as LinkedIn's advertising platform does not provide counts below 300. Currently, our data have 36 out of 192 (19%) sparse values. If we were to disaggregate by the two countries (US vs. UK), then we would have 116 out of 384 (30%) sparse values. As we felt that this level of sparsity, with values missing not at random, would be too high, we decided to combine the two countries. Further, many of the targeting categories for which counts are provided are algorithmically inferred, and are vulnerable to biases. For example, people with non-standard careers who start university later in life might be misclassified as being younger than they are. Greater transparency and documentation from platforms about the data-generating process and algorithms underlying these data can be helpful to better understand and address these biases. Even within our existing (binary) analysis of gender, we recognise that men and women are not homogeneous groups, and acknowledge that much heterogeneity exists in how workplace structures may differentially affect the experiences and professional trajectories of individuals - e.g., by race, immigration status, or sexual orientation, and how these intersect. Nonetheless, our findings expand on previous research about gender gaps on LinkedIn, by exploring the additional dimensions of social connectivity and how these are associated with job progression behaviours such as recent promotions and relocations. They contribute to a growing body of work showing the potential value of online professional networks for employment behaviours, but highlight the need to integrate a gender perspective to understand the differential impacts of online platforms on social and economic domains. With the growing digitalisation of work, and also increasing levels of remote work [1], online networking is likely to become even more central to job progression and mobility processes. This increasing use of online networking may help to mitigate gender gaps in the labour market. In turn, policies that integrate online professional networking within educational and job training programmes have the potential to help benefit disadvantaged groups. Figure 5: Predicted probabilities of the interaction between gender and social connectivity in promotion (left)/relocation (right) models. 95% confidence intervals shown, derived from the combination of the z-scores and standard errors [1]. 
Moreover, for employers, seeking potential candidates through online platforms may bring a broader pool of candidates to their attention than traditional network-based contexts, e.g. conferences or events. For researchers, our study motivates further research on user behaviour on LinkedIn in its own right, as the largest professional networking platform, as well as studies that examine how online networking is experienced and used by different disadvantaged social groups, and whether it reproduces or alters the social inequalities they experience. ## Competing Interests The authors declare that they have no competing interests.
2307.03600
Antenna Impedance Estimation in Correlated Rayleigh Fading Channels
We formulate antenna impedance estimation in a classical estimation framework under correlated Rayleigh fading channels. Based on training sequences of multiple packets, we derive the ML estimators for antenna impedance and channel variance, treating the fading path gains as nuisance parameters. These ML estimators can be found via scalar optimization. We explore the efficiency of these estimators against Cramer-Rao lower bounds by numerical examples. The impact of channel correlation on impedance estimation accuracy is investigated.
Shaohan Wu, Brian Hughes
2023-07-07T13:43:35Z
http://arxiv.org/abs/2307.03600v1
# Antenna Impedance Estimation in Correlated Rayleigh fading Channels ###### Abstract We formulate antenna impedance estimation in a classical estimation framework under correlated Raleigh fading channels. Based on training sequences of multiple packets, we derive the ML estimators for antenna impedance and channel variance, treating the fading path gains as nuisance parameters. These ML estimators can be found via scalar optimization. We explore the efficiency of these estimators against Cramer-Rao lower bounds by numerical examples. The impact of channel correlation on impedance estimation accuracy is investigated. Shaohan Wu Brian L. Hughes Impedance Estimation, Channel Correlation, Scalar Optimization, Maximum-Likelihood Estimation. ## 1 Introduction Antenna impedance matching to the receiver front-end has been shown to significantly impact the capacity and diversity of wireless channels [1]. This matching becomes challenging as antenna impedance changes with time-varying near-field loading, e.g., human users. To mitigate this change, antenna impedance estimation techniques at mobile receivers have been proposed [2, 3, 4, 5, 6, 7]. Hassan and Wittebeon proposed least-square estimators to jointly estimate the spatial channel and coupling impedance matrices [2]. Wu and Hughes first derived joint channel and antenna impedance estimators at single-antenna receivers using a hybrid estimation framework [4]. Wu extended it to multi-antenna receivers [5]. Under classical estimation, the maximum-likelihood (ML) estimators of antenna impedance have been derived under i.i.d. Rayleigh fading, treating channel path gains as nuisance parameters [6, 7]. However, the optimal impedance estimator remains unknown when the channel is correlated. In this paper, we fill this gap. We formulate antenna impedance estimation in a classical estimation framework under correlated Raleigh fading channels. Based on training sequences of multiple packets, we derive the ML estimators for antenna impedance and channel variance, treating the fading path gains as nuisance parameters. These ML estimators can be found via scalar optimization. We explore the performance, e.g., efficiency against Cramer-Rao lower bounds, of these estimators through numerical examples. The impact of channel correlation on impedance estimation accuracy is also investigated. The rest of the paper is organized as follows. We present the system model in Sec. 2 and derive the ML estimators in Sec. 3. We explore the performance of these estimators through numerical examples in Sec. 4 and conclude in Sec. 5. ## 2 System Model Consider a narrow-band, multiple-input, single-output (MISO) channel with \(N\) transmit antennas and one receive antenna. Suppose the transmitter sends \(L\) packets each with an identical training sequence to the receiver. During transmission, the receiver front-end shifts halfway in the training sequence [6, eq. 7], to observe the unknown antenna impedance. We assume the channel is constant _within_ a packet, but generally varies from packet to packet randomly. Under these assumptions, the signal observed during the \(k\)-th packet can be described by [6, eq. 10], with \(K\) assumed even, \[u_{k,t}\ =\ \begin{cases}\mathbf{h}_{k}^{T}\mathbf{x}_{t}+n_{k,t}\;,&1\leq t \leq K/2\;,\\ F\mathbf{h}_{k}^{T}\mathbf{x}_{t}+n_{k,t}\;,&K/2<t\leq K\;,\end{cases} \tag{1}\] where \(F\) is a function of the unknown antenna impedance [6, eq. 
11], \(\mathbf{h}_{k}\) is the channel during \(k\)-th packet, and the noise \(n_{k,t}\sim\mathcal{CN}(0,1)\) is i.i.d.. We can express (1) in matrix form, \[\mathbf{U}_{1}=\mathbf{H}\mathbf{X}_{1}+\mathbf{N}_{1}\;,\;\;\;\mathbf{U}_{2}= F\mathbf{H}\mathbf{X}_{2}+\mathbf{N}_{2} \tag{2}\] where \(\mathbf{X}_{1}\triangleq[\mathbf{x}_{1},\ldots,\mathbf{x}_{K/2}],\mathbf{X}_ {2}\triangleq[\mathbf{x}_{K/2+1},\ldots,\mathbf{x}_{K}]\), \[\mathbf{H}\ \triangleq\ [\mathbf{h}_{1},\ldots,\mathbf{h}_{L}]^{T}\in \mathbb{C}^{L\times N}\;. \tag{3}\] It follows \(\mathbf{N}_{1}\) and \(\mathbf{N}_{2}\) are independent random matrices with i.i.d. \(\mathcal{CN}(0,1)\) entries. Note the horizontal dimension of \(\mathbf{H}\) represents space, while the vertical dimension is time. Here \(\mathbf{H}\) models Rayleigh fading path gains which are uncorrelated in space but generally correlated in time. This implies the columns of \(\mathbf{H}\) are i.i.d. zero-mean, complex Gaussian random vectors with a temporal correlation matrix \(\sigma_{h}^{2}\mathbf{C}_{\mathbf{H}}\). We assume the correlation matrix \(\mathbf{C}_{\mathbf{H}}\) is known but the power \(\sigma_{h}^{2}\) is unknown. As in our prequel, we assume the known sequences \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\) are equal-energy and orthogonal over the first and last \(K\) symbols [6, eq. 16], \[\mathbf{X}_{1}\mathbf{X}_{1}^{H}\ =\ \mathbf{X}_{2}\mathbf{X}_{2}^{H}\ =\ \left(\frac{ PK}{2N}\right)\mathbf{I}_{N}\;. \tag{4}\] ## 3 Maximum-likelihood Estimators The goal of this paper is to derive optimal estimators for \[\boldsymbol{\theta}\ \triangleq\ \big{[}F\quad\sigma_{h}^{2}\big{]}^{T} \tag{5}\] based on the observations (2). To this end, we leverage the classical estimation framework by treating \(\mathbf{H}\) as a nuisance parameter. We first prove the sufficiency of observations. **Theorem 1** (Sufficient Statistics): _Given \(\mathbf{U}_{1}\) and \(\mathbf{U}_{2}\) defined in (2), where \(\mathbf{X}_{1},\mathbf{X}_{2}\) are known training sequences (4) and \(\boldsymbol{\theta}\) in (5) are unknown, then_ \[\mathbf{Y}_{1}\ \triangleq\ \sigma^{2}\mathbf{U}_{1}\mathbf{X}_{1}^{H}\,\ \ \mathbf{Y}_{2}\ \triangleq\ \sigma^{2}\mathbf{U}_{1}\mathbf{X}_{2}^{H}\, \tag{6}\] _are sufficient statistics to estimate \(\boldsymbol{\theta}\), where we define_ \[\sigma^{2}\ \triangleq\ \frac{2N}{PK}. \tag{7}\] _Moreover, \(\mathbf{Y}_{1}-\mathbf{H}\) and \(\mathbf{Y}_{2}-F\mathbf{H}\) are independent random matrices with i.i.d. \(\mathcal{CN}(0,\sigma^{2})\) entries. \(\diamond\)_ **Proof** From (2) and (4), we have \(\mathbf{Y}_{1}=\mathbf{H}+\sigma^{2}\mathbf{N}_{1}\mathbf{X}_{1}^{H}\). Note the rows of \(\sigma^{2}\mathbf{N}_{1}\mathbf{X}_{1}^{H}\) are i.i.d. with covariance \(\sigma^{4}\mathbf{X}_{1}\mathbf{X}_{1}^{H}=\sigma^{2}\mathbf{I}_{N}\), due to (4). Similarly, \(\mathbf{Y}_{2}=F\mathbf{H}+\sigma^{2}\mathbf{N}_{2}\mathbf{X}_{2}^{H}\), where the last matrix has i.i.d \(\mathcal{CN}(0,\sigma^{2})\) entries. So \(\mathbf{Y}_{1}-\mathbf{H}\) and \(\mathbf{Y}_{2}-F\mathbf{H}\) are independent random matrices with i.i.d. \(\mathcal{CN}(0,\sigma^{2})\) entries. From the Neyman-Fisher Theorem [8, pg. 
117], to prove sufficiency it suffices to show \(p(\mathbf{U}_{1},\mathbf{U}_{2};\boldsymbol{\theta})\) can be factored into a product \(g(\mathbf{Y}_{1},\mathbf{Y}_{2},\boldsymbol{\theta})f(\mathbf{U}_{1},\mathbf{U }_{2})\), where \(f\) does not depend on \(\mathbf{Y}_{1},\mathbf{Y}_{2}\) or \(\boldsymbol{\theta}\), and \(g\) does not depend on \(\mathbf{U}_{1},\mathbf{U}_{2}\). We can express this pdf in terms of the conditional pdf as \[p(\mathbf{U}_{1},\mathbf{U}_{2};\boldsymbol{\theta})\ =\ E_{\mathbf{H}}\bigg{[}p( \mathbf{U}_{1},\mathbf{U}_{2}|\mathbf{H};\boldsymbol{\theta})\bigg{]}\,\] where \(E_{\mathbf{H}}[\cdot]\) denotes expectation with respect to \(\mathbf{H}\). Since \(\mathbf{U}_{1}\), \(\mathbf{U}_{2}\) are conditionally independent given \(\mathbf{H}\), we can simplify the pdf in to (8), where \(\|\mathbf{A}\|^{2}=\mathrm{Tr}[\mathbf{A}^{H}\mathbf{A}]\) denotes the Frobenius norm. Note identities \(2\mathrm{Re}\mathrm{Tr}[\mathbf{A}]=\mathrm{Tr}[\mathbf{A}]+\mathrm{Tr}[ \mathbf{A}^{H}]\) and \(\mathrm{Tr}[\mathbf{A}\mathbf{B}]=\mathrm{Tr}[\mathbf{B}\mathbf{A}]\) are used, along with (4) and (7). In (8), denote the first factor by \(\pi^{LK}g(\mathbf{Y}_{1},\mathbf{Y}_{2},\boldsymbol{\theta})\), and the second by \(f(\mathbf{U}_{1},\mathbf{U}_{2})\). Note \(f\) does not depend on \(\mathbf{Y}_{1},\mathbf{Y}_{2}\) or \(\boldsymbol{\theta}\), while \(g\) depends on \(\mathbf{Y}_{1},\mathbf{Y}_{2},F\) and \(\sigma_{h}^{2}\) (through the expectation), but not \(\mathbf{U}_{1},\mathbf{U}_{2}\). \(\diamond\) We now present maximum-likelihood (ML) estimators for \(\boldsymbol{\theta}\) defined in (5) using sufficient statistics (6). By definition, the ML estimators maximize the likelihood function, i.e., \[\hat{\boldsymbol{\theta}}_{ML}\ \triangleq\ \arg\max_{\boldsymbol{\theta}}p( \mathbf{Y}_{1},\mathbf{Y}_{2};\boldsymbol{\theta}). \tag{9}\] Based on this criterion, we show in the following theorem the ML estimators can be calculated directly after a scalar optimization. **Theorem 2** (Multiple-Packet ML Estimators): _Let \(\mathbf{Y}_{1}\) and \(\mathbf{Y}_{2}\) be the sufficient statistics in (6), where \(\boldsymbol{\theta}\) in (5) are unknown constants. Consider the matrix_ \[\mathbf{S}(\mu)\ \triangleq\ \frac{1}{N}\begin{bmatrix}S_{11}(\mu)&S_{12}(\mu) \\ S_{21}(\mu)&S_{22}(\mu)\end{bmatrix}. \tag{10}\] _where we define for \(1\leq i,j\leq 2\),_ \[S_{ij}(\mu)\ \triangleq\ \mathrm{Tr}\left[\mu\mathbf{C}_{\mathbf{H}}\left(\mu \mathbf{C}_{\mathbf{H}}+\sigma^{2}\mathbf{I}_{L}\right)^{-1}\mathbf{Y}_{i} \mathbf{Y}_{j}^{H}\right]. \tag{11}\] _With \(\sigma^{2}\) in (7) and \(\mathbf{C}_{\mathbf{H}}\) known, we define a scalar optimization problem_ \[\hat{\mu}\ \triangleq\ \arg\ \max_{\mu\geq 0}\ \ \left[\eta(\mu)-\sigma^{2}\ln \mathrm{det}[\mu\mathbf{C}_{\mathbf{H}}+\sigma^{2}\mathbf{I}_{L}]\right]\, \tag{12}\] _where \(\eta(\mu)\) is the largest eigenvalue of \(\mathbf{S}(\mu)\) in (10):_ \[\eta(\mu)\ \triangleq\ \frac{S_{22}+S_{11}+\sqrt{(S_{11}-S_{22})^{2}+4|S_{12}|^{2}} }{2}. \tag{13}\] _Let \(\hat{\mathbf{e}_{1}}=[E_{1},E_{2}]^{T}\) be any unit eigenvector of \(\mathbf{S}(\hat{\mu})\) corresponding to the eigenvalue \(\eta(\hat{\mu})\). Then the maximum-likelihood estimates of \(F\) and \(\sigma_{h}^{2}\) are given by_ \[\hat{\boldsymbol{\theta}}_{ML}\ =\ \begin{bmatrix}\hat{F}_{ML}\\ \hat{\sigma_{h}^{2}}\end{bmatrix}\ =\ \begin{bmatrix}E_{2}/E_{1}\\ |E_{1}|^{2}\hat{\mu}\end{bmatrix}\, \tag{14}\] _provided \(E_{1}\neq 0\). 
For \(E_{1}=0\) and \(\hat{\mu}>0\) the likelihood is maximized in the limit as \(F\to\infty\). \(\diamond\)_ **Proof** For any matrix \(\mathbf{A}\), denote the \(kj\)-th element and \(k\)-th row by \([\mathbf{A}]_{kj}\) and \([\mathbf{A}]_{k}\), respectively. Let \(\mathbf{C}_{\mathbf{H}}=\mathbf{V}^{H}\mathrm{diag}[\lambda_{1},\ldots, \lambda_{L}]\mathbf{V}\) be an eigen-decomposition of \(\mathbf{C}_{\mathbf{H}}\), where \(\lambda_{1}\geq\ldots\geq\lambda_{L}\geq 0\) are eigenvalues of \(\mathbf{C}_{\mathbf{H}}\), and \(\mathbf{V}\) is a unitary matrix such that \(\mathbf{V}\mathbf{V}^{H}=\mathbf{V}^{H}\mathbf{V}=\mathbf{I}_{L}\). It follows the elements of \(\mathbf{V}\mathbf{H}\) are independent with \([\mathbf{V}\mathbf{H}]_{kj}\sim\mathcal{CN}(0,\sigma_{h}^{2}\lambda_{k})\). For \(1\leq k\leq L\), let \(\mathbf{w}_{k1}\triangleq[\mathbf{V}\mathbf{Y}_{1}]_{k}\) and \(\mathbf{w}_{k2}\triangleq[\mathbf{V}\mathbf{Y}_{2}]_{k}\). From Theorem 1, \(\mathbf{w}_{k1}\) and \(\mathbf{w}_{k2}\) are conditionally independent given \([\mathbf{V}\mathbf{H}]_{k}\), with conditional distributions \(\mathbf{w}_{k1}\sim\mathcal{CN}([\mathbf{V}\mathbf{H}]_{k},\sigma^{2}\mathbf{I }_{N})\) and \(\mathbf{w}_{k2}\sim\mathcal{CN}(F[\mathbf{V}\mathbf{H}]_{k},\sigma^{2}\mathbf{I }_{L})\). Since \([\mathbf{V}\mathbf{H}]_{k}\sim\mathcal{CN}(\mathbf{0},\sigma_{h}^{2}\lambda_{k} \mathbf{I}_{N})\) is independent of the noise in (6), it follows \(\mathbf{w}_{k}\triangleq(\mathbf{w}_{k1},\mathbf{w}_{k2})^{T}\) is a zero-mean Gaussian random vector with covariance \[\mathbf{C}_{\mathbf{w}_{k}}\ \triangleq\ E\left[\mathbf{w}_{k}^{H}\mathbf{w}_{k} \right]\ \ =\ \ \mathbf{C}_{k}\otimes\mathbf{I}_{N}\, \tag{15}\] where we define \[\mathbf{C}_{k}\ \triangleq\ \begin{bmatrix}\sigma_{h}^{2}\lambda_{k}+\sigma^{2}& \sigma_{h}^{2}F^{*}\lambda_{k}\\ \[\pi^{LK}\cdot p\left({\bf U}_{1},{\bf U}_{2};\boldsymbol{\theta} \right)\;=\;E_{\bf H}\left[\exp\left(-\left\|{\bf U}_{1}-{\bf H}{\bf X}_{1} \right\|^{2}-\left\|{\bf U}_{2}-F{\bf H}{\bf X}_{2}\right\|^{2}\right)\right] \tag{19}\] \[= E_{\bf H}\left[\exp\!\left(\frac{1}{\sigma^{2}}\!\left\{2{\rm Re} {\rm Tr}[{\bf H}^{H}{\bf Y}_{1}]+2{\rm Re}{\rm Tr}[F^{*}{\bf H}^{H}{\bf Y}_{2} ]-\left(1+|F|^{2}\right)\left\|{\bf H}\right\|^{2}\right\}\right)\right]\exp \left(-\left\|{\bf U}_{1}\right\|^{2}-\left\|{\bf U}_{2}\right\|^{2}\right)\;,\] where \(\mu_{k1}\geq\mu_{2}\) are the ordered eigenvalues and \({\bf e}_{1},{\bf e}_{2}\) are the associated unit eigenvectors. From (16), it is easy to verify the following explicit formulas, \[\mu_{k1} = \mu\lambda_{k}+\sigma^{2}\;,\;\;\;\mu_{2}\;=\;\sigma^{2} \tag{20}\] \[{\bf e}_{1} = \frac{1}{\sqrt{1+|F|^{2}}}\begin{bmatrix}1\\ F\end{bmatrix}\;,\;{\bf e}_{2}=\frac{1}{\sqrt{1+|F|^{2}}}\begin{bmatrix}-F^{*} \\ 1\end{bmatrix}\;.\] where \(\mu\triangleq\sigma_{h}^{2}(1+|F|^{2})\). Note only \(\mu_{k1}\) depends on \(k\). As in [6, eq. 
31], we can simplify \(\ln p\left({\bf w}_{k1},{\bf w}_{k2};\boldsymbol{\theta}\right)\) into, \[B_{k}+\frac{N}{\sigma^{2}}\left[\frac{\mu\lambda_{k}}{\mu\lambda_{k}+\sigma^{ 2}}{\bf e}_{1}^{H}{\bf S}_{k}{\bf e}_{1}-\sigma^{2}\ln(\mu\lambda_{k}+\sigma^{ 2})\right]\;,\] where \(B_{k}\) does not depend on \(\mu\) or \({\bf e}_{1}\) and we define \[{\bf S}_{k}\;\triangleq\;\frac{1}{N}\begin{bmatrix}{\bf w}_{k1}{ \bf w}_{k1}^{H}&{\bf w}_{k1}{\bf w}_{k2}^{H}\\ {\bf w}_{k2}{\bf w}_{k1}^{H}&{\bf w}_{k2}{\bf w}_{k2}^{H}\end{bmatrix} \tag{21}\] \[= \frac{1}{N}\begin{bmatrix}[{\bf Y}_{1}{\bf Y}_{1}^{H}{\bf V}^{H }]_{kk}&[{\bf Y}{\bf Y}_{1}{\bf Y}_{2}^{H}{\bf V}^{H}]_{kk}\\ [{\bf Y}{\bf Y}_{2}{\bf Y}_{1}^{H}{\bf V}^{H}]_{kk}&[{\bf Y}{\bf Y}_{2}{\bf Y }_{2}^{H}{\bf V}^{H}]_{kk}\end{bmatrix}\;.\] Since \({\bf w}_{1},\ldots,{\bf w}_{L}\) are independent, the joint probability of \({\bf Y}_{1}\) and \({\bf Y}_{2}\) is then given by \[\ln p({\bf Y}_{1},{\bf Y}_{2};\boldsymbol{\theta})\;=\;\sum_{k=1 }^{L}\ln p\left({\bf w}_{k1},{\bf w}_{k2};\boldsymbol{\theta}\right)\] \[= B+\frac{N}{\sigma^{2}}\left[{\bf e}_{1}^{H}{\bf S}(\mu){\bf e}_{1 }-\sigma^{2}\sum_{k=1}^{L}\ln(\mu\lambda_{k}+\sigma^{2})\right]\;,\] where \(B\) does not depend on the parameters \(\boldsymbol{\theta}\) and \[{\bf S}(\mu)\;\triangleq\;\sum_{k=1}^{L}\frac{\mu\lambda_{k}}{\mu\lambda_{k}+ \sigma^{2}}{\bf S}_{k} \tag{22}\] is the matrix in (20). To see this, let \(\Lambda\triangleq\,{\rm diag}(\lambda_{1},\ldots,\lambda_{L})\) and observe \[[{\bf S}(\mu)]_{ij} \triangleq \sum_{k=1}^{L}\frac{\mu\lambda_{k}}{\mu\lambda_{k}+\sigma^{2}}[{ \bf V}{\bf Y}_{i}{\bf Y}_{j}^{H}{\bf V}^{H}]_{kk} \tag{23}\] \[= \sum_{k=1}^{L}\left[\mu\Lambda\left(\mu\Lambda+\sigma^{2}{\bf I }_{L}\right)^{-1}{\bf V}{\bf Y}_{i}{\bf Y}_{j}^{H}{\bf V}^{H}\right]_{kk}\] \[= {\rm Tr}\left[\mu{\bf C}_{\bf H}\left(\mu{\bf C}_{\bf H}+\sigma^{ 2}{\bf I}_{L}\right)^{-1}{\bf Y}_{i}{\bf Y}_{j}^{H}\right]\;.\] To find maximum-likelihood estimates of \(F\) and \(\sigma_{h}^{2}\), we proceed in two steps: First, we find conditions on \(\mu\) and \({\bf e}_{1}\) that achieve the maximum in (20). Second, we use (20) to translate these conditions into values of \(F\) and \(\sigma_{h}^{2}\). For each \(\mu\), the maximum of (20) over \({\bf e}_{1}\) is a unit eigenvector corresponding to the largest eigenvalue of \({\bf S}(\mu)\). From [6, eq. 24], it is shown this eigenvalue is \(\eta(\mu)\) in (13). It follows that the maximum-likelihood estimate of \(\mu\) is \[\hat{\mu} \triangleq \arg\max_{\mu\geq 0}\left[\eta(\mu)-\sigma^{2}\sum_{k=1}^{L}\ln( \mu\lambda_{k}+\sigma^{2})\right]\;,\] which equals (12), since \(\sum_{k=1}^{L}\ln(\mu\lambda_{k}+\sigma^{2})\;=\;\ln\det[\mu{\bf C}_{\bf H}+ \sigma^{2}{\bf I}_{L}]\). Finally, we translate these conditions into values of \(F\) and \(\sigma_{h}^{2}\): If \(\hat{\mu}=0\), \({\bf S}(\hat{\mu})\) vanishes and \(\ln p({\bf Y}_{1},{\bf Y}_{2};\boldsymbol{\theta})\) does not depend on \(F\). From (20), it follows the likelihood is maximized by \(\hat{\sigma}_{h}^{2}=0\) and any value of \(F\); In particular, (14) maximizes the likelihood. However, if \(\hat{\mu}>0\), then \({\bf S}(\hat{\mu})\) is not zero and \({\bf e}_{1}\) must be an eigenvector of \({\bf S}(\hat{\mu})\) corresponding to \(\eta(\hat{\mu})>0\). For \(E_{1}\neq 0\), the unique solution to the equations \(\hat{\mu}=\sigma_{h}^{2}(1+|F|^{2})\) and \({\bf e}_{1}=(E_{1},E_{2})^{T}\) in (20) is given by (14). 
For \(E_{1}=0\), no finite \(F\) solves these equations; rather, the solution (and maximum) is approached in the limit as \(F\to\infty\). \(\diamond\) Theorem 2 reduces finding the ML estimators to solving a scalar optimization. In general, this optimization must be done numerically. If the fading channel is i.i.d., i.e., \({\bf C}_{\bf H}={\bf I}_{L}\), then (14) is in closed-form [6, eq. 22]. These i.i.d. ML estimators are the method of moments (MM) estimators in correlated fading, and serve as a reference to (12). The entries of the Fisher information matrix (FIM) have been derived, for \(1\leq i,j\leq 2\), using [8, pg. 529] and an extension of (15.60) in Kay [8, pg. 531], \[[{\boldsymbol{\mathcal{I}}}(\boldsymbol{\theta})]_{ij}\;=\;N\cdot\sum_{k=1}^{L}{\rm Tr}\left[{\bf C}_{k}^{-1}\frac{\partial{\bf C}_{k}}{\partial\theta_{i}^{*}}{\bf C}_{k}^{-1}\frac{\partial{\bf C}_{k}}{\partial\theta_{j}}\right]\;, \tag{24}\] where \({\bf C}_{k}\) is given in (16). We derive the FIM as \[{\boldsymbol{\mathcal{I}}}(\boldsymbol{\theta})\;=\;\sum_{k=1}^{L}\frac{N(1+|F|^{2})\lambda_{k}^{2}}{\left[\lambda_{k}\sigma_{h}^{2}(1+|F|^{2})+\sigma^{2}\right]^{2}}{\boldsymbol{\mathcal{F}}}_{k}\;, \tag{25}\] where we define \[{\boldsymbol{\mathcal{F}}}_{k}\;\triangleq\;\begin{bmatrix}(\sigma_{h}^{2})^{2}\left(\frac{\lambda_{k}\sigma_{h}^{2}}{\sigma^{2}}+1\right)&F\sigma_{h}^{2}\\ F^{*}\sigma_{h}^{2}&1+|F|^{2}\end{bmatrix}\;. \tag{26}\] For any unbiased estimator \(\hat{\boldsymbol{\theta}}\), the classical Cramer-Rao bound (CRB) is then the inverse of the FIM, \[E\left[\left(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}\right)\left(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}\right)^{H}\right]\;\geq\;{\boldsymbol{\mathcal{I}}}^{-1}(\boldsymbol{\theta})\;.\] 
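Before turning to the numerical results, the following minimal sketch (not the authors' code) implements Theorem 2 numerically: it forms \(\mathbf{S}(\mu)\) from the sufficient statistics, maximises the scalar objective in (12) with a bounded line search, and recovers \(\hat{F}\) and \(\hat{\sigma}_{h}^{2}\) from the leading eigenvector as in (14). The synthetic correlation model, noise level, and true parameter values below are assumptions for illustration only.

```python
# Minimal numerical sketch of the ML estimator of Theorem 2.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
L, N = 10, 4                        # packets and transmit antennas
sig2 = 0.05                         # sigma^2 = 2N/(PK), i.e. inverse training SNR (assumed)
F_true, sh2_true = 0.8 + 0.3j, 1.0  # true impedance parameter F and channel power
C_H = 0.95 ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))  # temporal correlation

def crandn(*shape):
    """Circularly symmetric complex Gaussian CN(0, 1) samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H  = np.sqrt(sh2_true) * np.linalg.cholesky(C_H) @ crandn(L, N)   # correlated path gains
Y1 = H + np.sqrt(sig2) * crandn(L, N)                             # sufficient statistic (6)
Y2 = F_true * H + np.sqrt(sig2) * crandn(L, N)

def S_of_mu(mu):
    """The 2x2 matrix S(mu) of eqs. (10)-(11)."""
    A = (mu * C_H) @ np.linalg.inv(mu * C_H + sig2 * np.eye(L))
    Y = (Y1, Y2)
    return np.array([[np.trace(A @ Y[i] @ Y[j].conj().T) for j in range(2)]
                     for i in range(2)]) / N

def objective(mu):
    """Scalar objective of eq. (12): eta(mu) minus sigma^2 * log det(mu C_H + sigma^2 I)."""
    eta = np.linalg.eigvalsh(S_of_mu(mu))[-1].real
    _, logdet = np.linalg.slogdet(mu * C_H + sig2 * np.eye(L))
    return eta - sig2 * logdet

res = minimize_scalar(lambda mu: -objective(mu), bounds=(1e-9, 100.0), method="bounded")
mu_hat = res.x
_, V = np.linalg.eigh(S_of_mu(mu_hat))
e1 = V[:, -1]                        # unit eigenvector of the largest eigenvalue of S(mu_hat)
F_hat   = e1[1] / e1[0]              # eq. (14)
sh2_hat = np.abs(e1[0]) ** 2 * mu_hat

print("F_hat =", np.round(F_hat, 3), " (true", F_true, ")")
print("sigma_h^2_hat =", round(float(sh2_hat), 3), " (true", sh2_true, ")")
```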
Also, the 1 dB gap between \(\hat{F}_{ML}\) (or \(\hat{F}_{MM}\)) and the Cramer-Rao bound in (26) under slow fading arises because the CRB can be loose for finite samples. When the fading channel is less correlated, more significant eigenvalues of \(\mathbf{C_{H}}\) exist and hence more independent observations are available for estimating \(F\). This explains the narrower gap to the CRB for the fast (i.i.d.) and medium fading conditions. To this end, a rule of thumb is that, to be within 1 dB of the CRB, a combined 4 orders of diversity, temporal and/or spatial, are needed. Note \(\hat{F}_{MM}\) can be obtained in closed-form via direct calculation, but \(\hat{F}_{ML}\) is generally found via iterative numerical methods, e.g., a line search. Thus, practical systems may choose \(\hat{F}_{MM}\) over \(\hat{F}_{ML}\) for a better performance-complexity trade-off. In Fig. 2, we consider a slow fading scenario with \(L=5\) and \(L=10\) packets. Although both \(L=5\) and \(L=10\) packets lead to only one significant eigenvalue of the channel correlation matrix \(\mathbf{C_{H}}\), the doubling in the number of packets (and hence total training energy) results in a drop in RMSE by about 3 dB. Moreover, since slow fading is the fading condition most different from i.i.d. fading (where \(\hat{F}_{MM}\) is the ML estimator), if the generally optimal \(\hat{F}_{ML}\) fails to demonstrate a sizable gain over its counterpart \(\hat{F}_{MM}\) even here, then the closed-form \(\hat{F}_{MM}\) may be preferable due to its more efficient implementation in practical systems. ## 5 Conclusion In this paper, we formulated the antenna impedance estimation problem at a MISO receiver in a classical estimation framework. We derived the maximum-likelihood estimator for antenna impedance under generally correlated Rayleigh fading channels. This MLE can be found via scalar optimization. By comparing to a reference, the method of moments (MM) estimator derived in a prequel, we observed that the ML and MM estimators exhibit similar RMSE and both approach their CRBs given sufficient degrees of diversity, spatial and/or temporal. A rule of thumb is that four degrees of diversity are needed to be within 1 dB of the CRB. The MM estimator demonstrated an overall better performance-complexity trade-off. These findings suggest a fast principal-components-based algorithm to estimate antenna impedance in real time for all Rayleigh fading conditions. A future direction might be to evaluate the benefit of our derived estimators using system-level metrics, e.g., ergodic capacity. Figure 1: Impedance Estimation under Different Fading. Figure 2: Impedance Estimation under Slow Fading.
2303.00826
Quenching star formation with low-luminosity AGN winds
We present a simple model for low-luminosity active galactic nucleus (LLAGN) feedback through winds produced by a hot accretion flow. The wind carries considerable energy and deposits it on the host galaxy at kiloparsec scales and beyond, heating the galactic gas and thereby quenching star formation. Our model predicts that the typical LLAGN can quench more than $10\%$ of star formation in its host galaxy. We find that long-lived LLAGN winds from supermassive black holes (SMBH) with masses $\geq 10^8 M_{\odot}$ and mass accretion rates $\dot{M} > 10^{-3} \dot{M}_{\rm Edd}$ ($0.002\, M_{\odot}/\mathrm{yr}$) can prevent gas collapse and significantly quench galactic star formation compared to a scenario without AGN, if the wind persists over 1 Myr. For sustained wind production over timescales of 10 Myr or longer, SMBHs with $10^8 M_{\odot}$ or larger masses have important feedback effects with $\dot{M} > 10^{-4} \dot{M}_{\rm Edd}$ ($0.0002\, M_{\odot}/\mathrm{yr}$).
Ivan Almeida, Rodrigo Nemmen, Rogemar A. Riffel
2023-03-01T21:24:57Z
http://arxiv.org/abs/2303.00826v2
# Quenching star formation with low-luminosity AGN winds ###### Abstract We present a simple model for low-luminosity active galactic nucleus (LLAGN) feedback through thermal winds produced by a hot accretion flow. The wind carries considerable energy and deposits it on the host galaxy at kiloparsec scales and beyond, heating the galactic gas thereby quenching star formation. Our model predicts that the typical LLAGN can quench more than 10% of star formation in its host galaxy. We find that long-lived LLAGN winds from supermassive black holes (SMBH) with masses \(\geq 10^{8}M_{\odot}\) and mass accretion rates \(\dot{M}>10^{-3}M_{\rm Edd}\) can prevent gas collapse and significantly quench galactic star formation compared to a scenario without AGN, if the wind persists over 1 Myr. For sustained wind production over timescales of 10 Myr or longer, SMBHs with \(10^{8}M_{\odot}\) or larger masses have important feedback effects with \(\dot{M}>10^{-4}\dot{M}_{\rm Edd}\). keywords: black hole physics - galaxies: active - galaxies: evolution - accretion, accretion discs ## 1 Introduction Once an early-type galaxy forms, that does not mean it will remain quiescent forever and ever. Early-type galaxies have abundant gas (e.g. Binette et al., 1994) and should also accrete fresh amounts of it. If all this gas cooled and led to star formation, the global stellar mass density should currently be larger than observations by a factor of a few (Benson et al., 2003). Furthermore, the number of galaxies in the red sequence is steadily growing since the peak epoch of quasars and starbursts (e.g. Bell et al., 2004; Bundy et al., 2006). This implies that galaxies are still transitioning to quiescence. Taken together, these are evidence for an unceasing feedback process which suppresses star formation in red sequence galaxies and keeps it quenched. In this work, we explore the possibility that the feedback mechanism keeping these galaxies quiescent is due to winds from accreting supermassive black holes (SMBH) hosted in low-luminosity active galactic nuclei (LLAGN). This idea is quite promising because most SMBH activity in the nearby universe is happening in LLAGNs (e.g. Ho, 2008). These SMBHs are weakly accreting via radiatively inefficient accretion flows (RIAF; Yuan and Narayan, 2014). RIAFs are prone to producing profuse winds (e.g. Yuan et al., 2015; Almeida and Nemmen, 2020; Yang et al., 2021). In addition, there is increasing evidence of a new class of early-type galaxies hosting galaxy-scale LLAGN winds from spatially resolved spectroscopy (Cheung et al., 2016; Roy et al., 2021; Sanchez et al., 2021) and radio observations (Roy et al., 2018). Given the potential importance of AGN winds in quenching star formation at late times, here we perform an analytical study of LLAGN winds as a feedback mechanism. We build a simplified model of RIAF winds based on the latest results from numerical simulations and analyze how the presence of an LLAGN could impact the gas and stellar content of a galaxy. RIAF winds are very hot, subrelativistic and non-collimated. They carry considerable energy, with powers up to 1% of the rest mass energy \(\dot{M}c^{2}\) associated with accretion (Almeida and Nemmen, 2020). The kinetic and thermal energy of the ejected wind must be deposited in the environment, and its most plausible fate is depositing its energy in the interstellar medium. 
By exploring the properties of these winds and their impact on the host galaxy, we tackle the following questions: Are LLAGN powerful enough to quench star-formation in an early-type galaxy? Can LLAGN winds keep a red-and-dead galaxy quiescent? This paper is structured as follows. In section 2, we present the details of the model. In section 3 we present the results, which include the predicted relation between LLAGN power and star-formation quenching. We compare our results to the literature in section 4. Finally, section 5 presents a summary and some perspectives. ## 2 Model In order to quantify the effect of LLAGN feedback, we approximated a galaxy as an isothermal sphere of dark matter with a fixed fraction of gas. The wind itself is an expanding sphere. In the following subsections, we describe our model in more detail. ### Galaxy We followed Silk & Rees (1998) and modelled the galaxy as an isothermal sphere characterized by a velocity dispersion \(\sigma\). Stars dominate the total mass of the galaxy's central region, and only a small fraction is gaseous, corresponding to a fraction \(f_{\rm g}\approx 0.05-0.1\) of the total mass. The gas density profile is described as \[\rho(R)=\frac{f_{\rm g}\sigma^{2}}{2\pi GR^{2}}. \tag{1}\] The total gas mass enclosed in a radius \(R\) is \[M_{\rm gas}(R)=\int_{0}^{R}4\pi r^{2}\rho(r)dr=\frac{2f_{\rm g}\sigma^{2}R}{G}=9.6\times 10^{9}f_{\rm g}\,\left(\frac{\sigma}{200\,{\rm km/s}}\right)^{2}\left(\frac{R}{1\,{\rm kpc}}\right)M_{\odot} \tag{2}\] and is in the form of atomic hydrogen. The gravitational binding energy \(E_{\rm gal}\) is \[E_{\rm gal}(R)=\frac{3G\,M_{\rm total}M_{\rm gas}}{5R}=\frac{6M_{\rm gas}\sigma^{2}}{5}. \tag{3}\] Adopting \(f_{\rm g}=0.05\) and replacing equation (2) in (3) gives \[E_{\rm gal}(R)=4.5\times 10^{56}\left(\frac{\sigma}{200\,{\rm km/s}}\right)^{4}\left(\frac{R}{1\,{\rm kpc}}\right)\,{\rm erg}. \tag{4}\] The system is isothermal with a temperature of \(T_{\rm Gal}=1.5\times 10^{6}\sigma_{200}^{2}\) K where \(\sigma_{200}\equiv\sigma/200\,{\rm km/s}\). ### LLAGN Energy Output The LLAGN is able to inject an amount of energy \(\Delta E\) into the galaxy via thermal winds, given by \(\Delta E=L_{\rm w}\Delta t\), where \(L_{\rm w}\) is the wind power and \(\Delta t\) is the LLAGN lifetime. We parameterise the wind power as a fraction of the Eddington luminosity, \(L_{\rm w}=\eta L_{\rm Edd}\). Following Almeida & Nemmen (2020), the wind power is \(\sim 0.1-1\) per cent of the rest-mass energy \(\dot{M}c^{2}\) accreted by the SMBH. Given that for an LLAGN we expect \(\dot{M}\lesssim 10^{-3}\dot{M}_{\rm Edd}\) and \(L_{\rm Edd}\equiv 0.1\dot{M}_{\rm Edd}c^{2}\), we have \(\eta\lesssim 10^{-4}\). Thus, in our calculations we assume \(\eta=10^{-4}\) and thereby \[\Delta E=4\times 10^{56}\left(\frac{\eta}{10^{-4}}\right)\left(\frac{M_{\rm BH}}{10^{9}\,{\rm M_{\odot}}}\right)\left(\frac{\Delta t}{1\,{\rm Myr}}\right)\,{\rm erg}. \tag{5}\] With these considerations, the impact of the AGN on the host galaxy increases trivially with its lifetime and decreases with the distance from the SMBH, as can be seen by taking the ratio of the LLAGN energy output with the galactic gravitational binding energy, \[f_{\rm AGN}\equiv\frac{\Delta E}{E_{\rm gal}}=0.24\left(\frac{\Delta t}{1\,{\rm Myr}}\right)\left(\frac{R}{1\,{\rm kpc}}\right)^{-1}\left(\frac{M_{\rm BH}}{10^{9}\,{\rm M_{\odot}}}\right)^{0.22}, \tag{6}\] where we have used the \(M-\sigma\) relation of McConnell et al. (2011).
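To make the scalings above concrete, the short Python sketch below evaluates equations (4)-(6) directly; the black hole masses, durations and radii in the example loop are illustrative choices of ours, not values from the paper's own code.

```python
# Minimal numerical sketch of the scaling relations (4)-(6); inputs are fiducial.

def E_gal(sigma=200.0, R_kpc=1.0):
    """Gas binding energy of equation (4), in erg (f_g = 0.05 assumed)."""
    return 4.5e56 * (sigma / 200.0) ** 4 * R_kpc

def delta_E(M_bh=1e9, dt_myr=1.0, eta=1e-4):
    """LLAGN wind energy output of equation (5), in erg."""
    return 4e56 * (eta / 1e-4) * (M_bh / 1e9) * dt_myr

def f_agn(M_bh=1e9, dt_myr=1.0, R_kpc=1.0):
    """Heating fraction Delta E / E_gal of equation (6), via the M-sigma relation."""
    return 0.24 * dt_myr / R_kpc * (M_bh / 1e9) ** 0.22

if __name__ == "__main__":
    for M_bh in (1e8, 1e9):
        for dt in (1.0, 5.0):
            print(f"M_BH = {M_bh:.0e} Msun, dt = {dt:.0f} Myr, R = 2 kpc: "
                  f"f_AGN = {f_agn(M_bh, dt, R_kpc=2.0):.2f}")
```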
As we will see, the LLAGN energy output can be comparable to the galactic gravitational binding energy. ### Star-formation Star formation usually occurs in giant molecular clouds (GMC), massive reservoirs of cold gas prone to star formation. In our model, we assume that the entirety of the wind kinetic power couples to GMCs and is converted to thermal energy. This approximation amounts to \(f_{\rm AGN}\) translating directly into the fractional temperature increase caused by AGN feedback. We describe the protostellar core mass function as \[\frac{dN}{d\ln M}=N_{0}\left(\frac{M}{M_{0}}\right)^{-\xi},\,(M\lesssim M_{0}), \tag{7}\] following Rosolowsky (2005); Dib et al. (2008). Equation (7) gives the distribution of protostellar cores inside GMCs as a function of mass and size. We considered in our model dense clouds with \(M_{0}\lesssim 100M_{\odot}\) and \(0.3\leq\xi\leq 2.7\). Cores able to generate stars are those with masses exceeding the Jeans mass \[M_{J}=20\,M_{\odot}\left(\frac{T}{10\,{\rm K}}\right)^{1.5}\left(\frac{n}{100\,{\rm cm^{-3}}}\right)^{-0.5}. \tag{8}\] Assuming a constant external pressure around the cloud and \(n=100\) cm\({}^{-3}\), this simplifies to \(M_{J}=20M_{\odot}(T/10\,{\rm K})^{2}\). ## 3 Results ### Energetics Figure 1 illustrates the characteristic values of \(f_{\rm AGN}\) for a range of AGN timescales and distances. The figure indicates that an LLAGN can inject a significant amount of energy into the inner 10 kpc of the host galaxy. The effect is more prominent in galaxies with more massive SMBHs. For instance, a galaxy hosting a \(10^{8}M_{\odot}\) SMBH can undergo a \(10\%\) temperature increase in the innermost 2 kpc in one million years; a \(10^{9}M_{\odot}\) SMBH active over 2 Myr will achieve a heating fraction higher than 50%. Moreover, if the LLAGN is active for 5 Myr or longer, the galactic heating within 5 kpc will be energetically relevant regardless of the mass. ### How far does the wind reach? Simulations suggest strong thermal winds coming from RIAFs, with powers reaching up to one percent of the rest mass associated with accreted gas (Almeida & Nemmen, 2020). These winds have thermal energies greater than the gravitational binding energy, which means they have enough energy to escape the black hole's gravitational sphere of influence. Nevertheless, the spatial extent of these winds remains an open question. We investigated the wind extension using two different approaches. In the first one, we model the wind as an expanding bubble which cools via bremsstrahlung. In the second one, we consider a central heating source and heat transfer through the gas--here, the wind carries only energy and not mass. In the first scenario, we computed the distance travelled by the bubble front over the cooling time, \(R_{\rm wind}=v\,t_{\rm cool}\), where we assume \(v=300\) km s\({}^{-1}\) (Cheung et al., 2016; Almeida & Nemmen, 2020) and that the density follows \(\rho_{\rm wind}\propto r^{\alpha}\). The resulting expression is \[R_{\rm wind}=\left(\frac{5.9\times 10^{-5}\eta^{-1}\sigma_{200}^{2}M^{\alpha}}{10^{7\alpha}\sqrt{1-\alpha}}\right)^{\frac{1}{1+\alpha}}{\rm kpc}, \tag{9}\] where we assume \(\eta\sim 10^{-4}\), related to the efficiency of the wind production. This is roughly \[R_{\rm wind}\gtrsim\begin{cases}3\ {\rm kpc},&\alpha<-0.1\\ 100\ {\rm kpc},&\alpha<-0.3\end{cases} \tag{10}\] We find that for \(\alpha<0\), the wind can reach distances larger than ten kpc, which are beyond the visible size of most galaxies.
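As a quick numerical illustration of equation (8) and of the constant-external-pressure simplification quoted above, the sketch below tabulates how an AGN-driven temperature increase raises the Jeans mass; the heating fractions in the loop are illustrative values, not ones adopted in the paper.

```python
def jeans_mass(T, n=100.0):
    """Jeans mass of equation (8): T in K, n in cm^-3, result in solar masses."""
    return 20.0 * (T / 10.0) ** 1.5 * (n / 100.0) ** -0.5

def jeans_mass_const_pressure(T):
    """Constant external pressure (n ~ 1/T), as assumed in the text:
    M_J = 20 Msun (T / 10 K)^2."""
    return 20.0 * (T / 10.0) ** 2

if __name__ == "__main__":
    T0 = 10.0                                   # unperturbed core temperature [K]
    for f_agn in (0.0, 0.1, 0.24, 0.5):         # heating fractions from equation (6)
        T = (1.0 + f_agn) * T0
        print(f"f_AGN = {f_agn:.2f}: T = {T:5.1f} K, "
              f"M_J = {jeans_mass_const_pressure(T):6.1f} Msun")
```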
For the second case, we numerically solve the one-dimensional radial heat transfer equation for a sphere made of hydrogen with a central heat point source, \[\frac{1}{r^{2}}\partial_{r}(r^{2}\partial_{r}T)=\frac{\rho c_{p}}{\kappa}\partial_{t}T+Q_{\rm AGN}. \tag{11}\] We modelled the AGN impact as a spherical boundary with constant temperature, hotter than the medium. This can be translated as the boundary condition in equation (12) and the initial condition in equation (13). For practical reasons, we assumed \(r_{\rm AGN}=0\) since the AGN scales are too small compared to the galaxy. \[T(r=r_{\rm AGN})\leq T_{\rm AGN}, \tag{12}\] \[T(t=0,r)=\begin{cases}T_{\rm AGN},&r\leq r_{\rm AGN}\\ T_{\rm gal},&r>r_{\rm AGN}\end{cases}. \tag{13}\] Solving equation (11) and assuming the characteristic values from Fabian et al. (2005) (their equation 4), we found that the resulting temperature profile follows \(T(R)\propto R^{-1}\). This is the same radial dependence as in equation (6). After about 5 Myr, even gas at kiloparsec scales will undergo a 20% temperature increase. For this model \(R_{\rm wind}\) is the radius at which \(\lim_{r\to R_{\rm wind}}T(r)=T_{\rm gal}\). We find that typically \(R_{\rm wind}\gtrsim 1\) kpc. Both models indicate that winds can get to the galactic outskirts, reaching kiloparsec-scale distances. We stress that the multiscale physics of the ISM and its interaction with hot winds is quite complex. We leave the numerical modeling of these phenomena for future work. Figure 1: Energy injected by LLAGN winds scaled by the galactic binding energy as a function of distance to the supermassive black hole, based on equation 6. Different AGN durations and black hole masses are displayed in the different panels, with the mass in solar masses. ### Star formation quenching The number of protostellar cores able to collapse and form stars can be calculated using equations 7 and 8 as \[\mathcal{N}(M\geq M_{J})=\int_{M_{J}}^{M_{0}}N(M)dM. \tag{14}\] We use \(\mathcal{N}\) to quantify the impact of LLAGN feedback in quenching star formation by computing it in two different ways: \(\mathcal{N}_{0}\) is the number of protostellar cores able to collapse into stars when the AGN effect is not taken into account, whereas \(\mathcal{N}_{\rm AGN}\) is the corresponding quantity with the AGN turned on. In particular, we are interested in comparing how much lower \(\mathcal{N}_{\rm AGN}\) is compared to \(\mathcal{N}_{0}\) as a function of the main accreting BH parameters: the BH mass and the mass accretion rate. When estimating \(\mathcal{N}_{0}\), we consider a temperature \(T_{\rm PC}\sim 10\) K and the corresponding Jeans mass is denoted by \(M_{J}\) (see equation (8)); for \(\mathcal{N}_{\rm AGN}\), we adopt \(T_{\rm PC}^{\rm AGN}=(1+f_{\rm AGN})T_{\rm PC}\), since the AGN increases the average temperature, and the appropriate Jeans mass is \(M_{J}^{\rm AGN}\). This implies that \(M_{J}<M_{J}^{\rm AGN}\). Protostellar cores with masses in the range \(M_{J}<m<M_{J}^{\rm AGN}\) will suffer gravitational collapse when the impact of the AGN is not considered; they would not if the LLAGN is taken into account. We define the fraction of star formation quenched by the LLAGN--the quenching fraction \(Q\)--as \[Q\equiv 1-\frac{\mathcal{N}_{\rm AGN}}{\mathcal{N}_{0}}=1-\frac{1-(M_{J}/M_{0})^{1-\xi}(1+f_{\rm AGN})^{2-2\xi}}{1-(M_{J}/M_{0})^{1-\xi}}, \tag{15}\] where \(\xi\) is a power-law index and \(M_{0}\) is the mass scale related to the protostellar core mass distribution (see equation (7)).
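Equation (15) is straightforward to evaluate numerically. The sketch below transcribes it directly; the ratio M_J/M_0 and the heating fractions used in the loop are illustrative choices, not values adopted in the paper.

```python
def quenching_fraction(f_agn, xi=1.5, MJ_over_M0=0.2):
    """Quenching fraction Q of equation (15).
    f_agn      : heating fraction from equation (6)
    xi         : core mass function index, 0.3 <= xi <= 2.7 (xi != 1 assumed here)
    MJ_over_M0 : unperturbed Jeans mass over the mass scale M0 (illustrative)."""
    base = 1.0 - MJ_over_M0 ** (1.0 - xi)
    heated = 1.0 - MJ_over_M0 ** (1.0 - xi) * (1.0 + f_agn) ** (2.0 - 2.0 * xi)
    return 1.0 - heated / base

if __name__ == "__main__":
    for f in (0.05, 0.1, 0.25, 0.5, 1.0):
        row = ", ".join(f"{quenching_fraction(f, xi):5.2f}" for xi in (0.5, 1.5, 2.5))
        print(f"f_AGN = {f:4.2f}  ->  Q(xi = 0.5, 1.5, 2.5) = {row}")
```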
The meaning of \(Q\) is the following: in the extreme case when \(Q=1\), the entirety star formation is aborted due to AGN feedback; on the other hand, when \(Q=0\) there is no quenching at all. Therefore, \(Q\) and the star-formation rate are inversely correlated. We plot in figure 2 the relation between star formation quenching and the AGN heating fraction \(f_{\rm AGN}\), where we explore the dependence on the parameter \(\xi\) (equation (7)). As expected, quenching becomes more pronounced as the amount of energy dumped by the LLAGN increase though this proceeds in a nonlinear fashion. Figure 3 illustrates the dependence of quenching on the SMBH mass accretion rate. Each shaded region with a different color corresponds to a given SMBH mass, with the interior spanning all allowed \(\xi\) values assuming \(R=20\) kpc (a typical galaxy size). The different panels explore the impact of the duration of the LLAGN activity varying from 1 Myr (upper left panel) to 50 Myr (bottom right panel). For illustration, let's consider a SMBH accreting at the \(10^{-3}\dot{M}_{\rm Edd}\) level. If its mass is \(10^{8}M_{\odot}\) (\(10^{9}M_{\odot}\)) and the wind is produced for only 1 Myr, it can quench less than one per cent (5%) of star formation in the host galaxy; now, if the LLAGN is active for 10 Myr, it can quench up 10% (30%); moreover, if it is active for 50Myr, the quenched grows to 40% (60%). Figure 4 displays the SMBH activation function for effective AGN feedback, as predicted in our calculations. This figure displays the family of accreting SMBH parameters required to produce a ten per cent quenching of star formation, i.e. the combination of mass accretion rates and masses that result in \(Q=0.1\). Figure 4 shows that a \(10^{8}M_{\odot}\) or \(10^{9}M_{\odot}\) SMBH that experiences an accretion episode lasting 1 Myr with \(\dot{M}>4\times 10^{-3}\dot{M}_{\rm Edd}\) will be able to abort more than 10% of star formation in its host galaxy. For an accretion episode lasting 10 Myr, a \(10^{8}M_{\odot}\) SMBH needs \(\dot{M}>4\times 10^{-4}\dot{M}_{\rm Edd}\) to significantly impact its host galaxy via winds; a \(10^{9}M_{\odot}\) SMBH needs \(\dot{M}>3\times 10^{-4}\dot{M}_{\rm Edd}\). Correspondingly, Figure 5 displays the wind power resulting in effective AGN feedback with \(Q\geq 0.1\). Similarly to the story told in Figure 4, a \(10^{8}M_{\odot}\) SMBH that produces a wind lasting 1 Myr with power larger than \(5\times 10^{39}\)erg s\({}^{-1}\) will be able to abort more than 10% of star formation in its host galaxy. For winds lasting 10 Myr, a \(10^{8}M_{\odot}\) (\(10^{9}M_{\odot}\)) SMBH needs a wind power larger than \(4\times 10^{38}\)erg s\({}^{-1}\) (\(2\times 10^{39}\)erg s\({}^{-1}\)) for effective quenching. Overall, the LLAGN will only have an impact larger than ten per cent on the host galaxy if it persists for durations longer than 10 Myr, regardless of the SMBH mass. This timescale is one order of magnitude larger than the typical quasar lifetime. Long LLAGN durations are needed in order to significantly suppress star formation. ## 4 Discussion Going back to the questions posed at the beginning of this work: Are LLAGN powerful enough to quench star-formation in an early-type galaxy? Can LLAGN winds keep a red-and-dead galaxy quiescent? With our simple models we find that the answer to both questions is yes. The quenching intensity, however, depends on the black hole mass, accretion rate and on the duration of the accretion episode. 
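A hedged end-to-end example of the kind of estimate quoted above can be assembled by chaining equations (6) and (15). The sketch below assumes, following Section 2.2, that the wind carries about 1% of the accreted rest-mass energy so that eta is approximately 0.1 Mdot/Mdot_Edd, and it adopts an illustrative M_J/M_0 ratio; it is meant only to show how the quoted trends with mass, accretion rate and duration arise, not to reproduce the figures exactly.

```python
def f_agn(M_bh, dt_myr, R_kpc, mdot_edd):
    """Heating fraction, equation (6), rescaled linearly with the wind power.
    Assumes eta = 0.1 * mdot_edd, i.e. a wind carrying ~1% of the accreted
    rest-mass energy (upper end of the range quoted from Almeida & Nemmen 2020)."""
    eta = 0.1 * mdot_edd
    return 0.24 * (eta / 1e-4) * dt_myr / R_kpc * (M_bh / 1e9) ** 0.22

def quenching_fraction(f, xi=1.5, MJ_over_M0=0.2):
    """Quenching fraction Q, equation (15); MJ_over_M0 is an illustrative choice."""
    base = 1.0 - MJ_over_M0 ** (1.0 - xi)
    heated = 1.0 - MJ_over_M0 ** (1.0 - xi) * (1.0 + f) ** (2.0 - 2.0 * xi)
    return 1.0 - heated / base

if __name__ == "__main__":
    R = 20.0                            # kpc, the "typical galaxy size" of figure 3
    for M_bh in (1e8, 1e9):
        for dt in (1.0, 10.0, 50.0):
            f = f_agn(M_bh, dt, R, mdot_edd=1e-3)
            print(f"M_BH = {M_bh:.0e} Msun, Mdot = 1e-3 Edd, dt = {dt:4.0f} Myr: "
                  f"f_AGN = {f:5.2f}, Q = {quenching_fraction(f):5.2f}")
```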
Let's consider the particular case of the "Akira" galaxy. Cheung et al. (2016) reported evidence for winds emerging from the LLAGN in Akira. The authors dubbed this putative class of objects "red geysers" (e.g. Roy et al., 2018). Our work supports the notion that LLAGN winds can indeed be energetic enough to originate the red geyser phenomenon. Cheung et al. (2016) find that Akira hosts a \(10^{8}M_{\odot}\) SMBH currently accreting with \(\lambda\equiv L/L_{\rm Edd}=4\times 10^{-4}\) and that the wind lasts at least 100 Myr. This value of \(\lambda\) corresponds to \(\dot{M}=3\times 10^{-3}\dot{M}_{\rm Edd}\), for a typical RIAF radiative efficiency of 1% (Xie & Yuan, 2012). Our model predicts that LLAGN winds in Akira can reach quenching fractions of about 30% if those accretion rates are sustained over 10 Myr, and potentially much more for longer times. Star formation in the so-called red geyser galaxies can be significantly impacted by winds produced from underfed SMBHs. We explored two different assumptions on the radial expansion of the wind. Both of them indicate that the kinetic and thermal energies can be carried over kiloparsec scales, well beyond the SMBH gravitational sphere of influence. An important parameter in our results is the activity time of the LLAGN. If we want to explain the quiescence of galaxies in the local universe as the effect of a steady and weak wind from a very faint AGN, this object must be active for a very long time. In figure 3, we can see in the left panel that only SMBHs with mass \(M_{\rm SMBH}\gtrsim 10^{9}M_{\odot}\) and \(\dot{m}\gtrsim 5\times 10^{-3}\) can noticeably impact the star formation within \(\Delta t_{\rm AGN}=1\) Myr. However, for a longer time such as \(\Delta t_{\rm AGN}=10\) Myr, an LLAGN with \(\dot{m}\gtrsim 10^{-3}\) and masses \(M_{\rm SMBH}\gtrsim 10^{8}M_{\odot}\) can turn off more than 50% of the stellar formation sites. Figure 2: The quenching fraction as a function of the average heating of the region. As the temperature increases, the fraction of shut-down stellar formation sites increases. The different lines represent the different distribution possibilities for the protostellar cores (see equation (7)). The star formation can be severely suppressed if the galaxy inflow can sustain the LLAGN accretion rate for a long enough time. One limitation of our model is that we are unable to give more details on specific types of stellar populations arising after quenching by the LLAGN winds. Modeling the vast dynamical range and the nonlinear physics involved in star formation is a complex problem and outside the scope of this work - a simulation of feedback effects for an elliptical galaxy, treated in much more detail, can be seen in Yuan et al. (2018). One broad-brush consequence of the suppression of star formation is that there will be a smaller amount of heavy elements being spewed out throughout the galaxy. Thus, galaxies under the influence of LLAGN feedback will have smaller metallicities. At the same time, and for the same reasons, we expect a smaller number of younger stars, so LLAGN winds tend to redden the host galaxy. Our model assumes a smooth wind that interacts with molecular clouds, heating them up over Myr timescales. In a more realistic setting, outflows likely strip gas clouds. The ensuing cloud mass decrease would further boost the quenching fraction to higher values than we reported in figure 3. This possibility remains to be investigated in the future.
## 5 Summary The main conclusions of our investigation can be summarised as follows: (i) Low-luminosity active galactic nuclei can have important feedback effects in their host galaxies by quenching star formation. This occurs via thermal winds emerging from the hot accretion flow which are able to heat up protostellar clouds and prevent them from gravitationally collapsing. (ii) The relevance of star formation quenching by LLAGN feedback is a function of the SMBH mass, mass accretion rate and the duration of the accretion episodes. In general, quenching is only relevant for accretion lasting longer than 1 Myr. (iii) For an accretion episode lasting 1 Myr, a \(10^{8}M_{\odot}\) or \(10^{9}M_{\odot}\) SMBH needs \(\dot{M}\gtrsim 10^{-3}\dot{M}_{\rm Edd}\) to abort more than 10% of star formation. (iv) For an accretion episode lasting 10 Myr, a \(10^{8}M_{\odot}\) or \(10^{9}M_{\odot}\) SMBH needs \(\dot{M}\gtrsim 10^{-4}\dot{M}_{\rm Edd}\) to significantly impact its host galaxy via winds. (v) Thermal winds can reach kiloparsec scales, and beyond. Figure 3: The plot shows the quenching fraction inside a region of 20 kpc as a function of the LLAGN accretion rate. The increase in the accretion rate has a significant effect on the gas. Each colour represents a different SMBH mass. We can observe the importance of the system's total mass; the quenching only occurs for the most massive SMBHs. The three different panels refer to the LLAGN activity time \(\Delta t\); long-lived LLAGN have a much more substantial impact on the gas temperature and subsequent quenching. The denoted regions represent the different distributions of the protostellar cores (see equation (7)); they are the regions delimited by the lines shown in figure 2. Our model is subject to the limitations of our assumptions, mainly: the assumption of a spherical isothermal galaxy, steady state, lack of details on the treatment of the interstellar medium and the wind physics. Despite these idealizations, we hope that our calculations can offer insights into galaxy-SMBH coevolution. In conclusion, our model demonstrates that feedback via winds from LLAGNs is an important suppressor of star formation in red sequence galaxies. LLAGNs, despite their low Eddington ratios, will keep a red-and-dead galaxy quiescent at late times. Thermally-driven winds from underfed SMBHs offer a third mode of AGN feedback, in addition to the quasar or radiative mode relevant at the peak of galaxy mergers, and the radio or jet mode relevant for radio galaxies in galaxy clusters. ## Acknowledgements We acknowledge useful discussions with Raniere de Menezes, Paula R. T. Coelho, Stephane V. Werner, Feng Yuan and Roger Blandford. This work was supported by FAPESP (Fundacao de Amparo a Pesquisa do Estado de Sao Paulo) under grants 2017/01461-2, 2019/10054-7 and 2022/10460-8. RN acknowledges a Bolsa de Produtividade from Conselho Nacional de Desenvolvimento Cientifico e Tecnologico. We used Python (Oliphant 2007; Millman & Aivazis 2011) to produce all scientific results and plots of this paper, including several packages such as NumPy (Van Der Walt et al. 2011), SciPy (Virtanen et al. 2019), and Matplotlib (Hunter 2007). ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2301.10294
Photon echo in ring cavity: pulse area approach
The pulse area approach has been established as a versatile analytical tool for studying the resonant interaction between light and a resonant atomic ensemble. In recent years, photon and spin echoes in cavity-assisted schemes have become increasingly interesting. In this article we develop the photon echo pulse area approach to describe primary and multi-pulse echo generation in an atomic ensemble placed in a ring cavity. We show that the pulse area approach predicts relative echo magnitudes and whether the system is operating in a single- or a multi-pulse generation regime. We also analyze the conditions needed for the realization of these generation regimes. This work develops the pulse area theorem approach for the analytical study of photon/spin echoes in optical and microwave cavities and echo-based protocols of quantum memory.
Sergey A. Moiseev, Ravil V. Urmancheev
2023-01-24T20:27:39Z
http://arxiv.org/abs/2301.10294v1
# Photon echo in ring cavity: pulse area approach ###### Abstract Pulse area approach has been established as a versatile analytical tool for studying the resonant interaction between the light and the resonant atomic ensemble. In recent years photon and spin echoes in cavity assisted schemes become increasingly interesting. In this article we develop the photon echo pulse area approach to describe primary and multi-pulse echo generation in the atomic ensemble placed in the ring cavity. We show that the pulse area approach predicts relative echo magnitudes and whether the system is operating in a single- or a multi-pulse generation regime. We also analyze the conditions needed for the realization of these generation regimes. This work develops the pulse area theorem approach for analytical study of photon/spin echoes in optical and microwave cavities and echo based protocols of quantum memory. ## I Introduction Photon echo [1; 2] is an optical realization of the Hahn spin echo [3], which is a coherent response of inhomogeneously broadened resonant atomic ensemble to the action of two or more resonant light pulses. Since its discovery in the beginning of the second half of the previous century it has been established as a reliable and developing tool in nonlinear coherent spectroscopy, used to measure transition relaxation times and quantum dynamics of different resonant media [4; 5; 6; 7; 8; 9; 10]. It has also become the basis of the number of photon echo quantum memory protocols [11; 12; 13; 14; 15; 16]. Recent development of modern optical and microwave integral technologies initiates study of photon/spin echoes in optical and microwave cavities [17; 18; 19; 20], which are especially important for elaboration of quantum memory devices. Description of the photon echo is based on a solution of complicated nonlinear Maxwell-Bloch equations, which often compels to use only numerical methods [21]. In optically dense media, the task is also complicated by the presence of strong rephasing pulses also known as \(\pi\)-pulses that control atomic coherence. The pulse area theorem [22] was proposed to partly bypass these difficulties and provide an analytical tool to study general nonlinear properties of resonant pulse propagation. The theorem was later developed to consider photon (spin) echoes [23; 24; 25], three-level systems [26] and atoms placed in a Fabry-Perot cavity [27]. In this work we develop the pulse area theorem to analytically study the echo generation in a single-mode ring cavity, similar to our previous work on Fabry-Perot cavity[27]. We obtain the general equation describing the area of any pulse during a two-pulse echo generation. We then solve this equation analytically for the incoming signal pulses and numerically for the first three echo pulses. We also provide an approximate analytical solution for the primary echo pulse. This allows us to study the conditions for single- and multi-pulse echo generation that are in agreement with experimental investigation [21]. ## II Pulse area theorem ### Basic equations We consider an ensemble of N two-level atoms that is placed inside a single-mode ring cavity with the mode central frequency \(\omega_{0}\) being in resonance with the atomic transition. The atoms occupy a length \(L\) along the optical axis \(z\); \(L\) is greater than the light wavelength \(\lambda\) and smaller that the cavity length \(L_{c}\). 
We assume that the inhomogeneous broadening of the atomic transition \(\Delta_{inh}\gg\gamma\), where \(\gamma=1/T_{2}\) is the homogeneous linewidth and \(T_{2}\) is the coherence time of a single atom; \(T_{2}\) is typically shorter than the lifetime of the optical transition \(T_{1}\). The electric field is described by a slowly varying amplitude \(\mathcal{E}(t)=\mathcal{E}_{0}a(t)\), where \(\mathcal{E}_{0}=\sqrt{\frac{\hbar\omega_{0}}{2\varepsilon_{0}(\varepsilon L+L_{v})S}}\), \(\varepsilon_{0}\) and \(\varepsilon\) are the vacuum and the atomic medium permittivities, \(L_{v}=L_{c}-L\), and \(V=SL_{c}\) denotes the mode volume. We assume a uniform excitation of the sample, so the coupling constant of the dipole interaction between the cavity mode and an atom \(g\) is the same for all the atoms: \(g=\frac{d}{\hbar}\mathcal{E}_{0}\), where \(d\) is the dipole moment of the atomic transition. We use the quantum Tavis-Cummings model for the interaction of \(N\) two-level atoms with the cavity mode and apply the input-output formalism of quantum optics [28] to couple the amplitudes of the cavity mode \(\mathcal{E}(t)\) to the amplitudes of the input \(\mathcal{E}_{in}(t)\) and output \(\mathcal{E}_{out}(t)\) field modes (where \(\mathcal{E}_{in,out}(t)=\sqrt{\frac{\pi\hbar\omega_{0}}{\varepsilon_{0}S}}a_{in,out}(t)\), \(S\) is a cross-section of the light beam). In the limit of a large number of atoms [29], these equations reduce to the system of semiclassical Maxwell-Bloch equations [22; 30] for the atoms and the resonator field mode: \[\partial_{t}a=-\frac{\kappa+\kappa_{in}}{2}a+Ng\langle v\rangle_{\Delta}+\sqrt{\kappa}a_{in}, \tag{1}\] \[\partial_{t}u=-\Delta v-\gamma u, \tag{2}\] \[\partial_{t}v=\Delta u-\gamma v+\Omega(t)w, \tag{3}\] \[\partial_{t}w=-\Omega(t)v, \tag{4}\] \[a_{out}=\sqrt{\kappa}a-a_{in}, \tag{5}\] where \(\Omega(t)=ga(t)\), \(\kappa\) is the decay rate of the cavity mode to the external waveguide modes and \(\kappa_{in}\) is the rate of internal losses of the cavity; \(u,v\) and \(w\) are the components of the Bloch vector, dependent on time \(t\) and detuning \(\Delta\) of the atom; \(\langle v\rangle_{\Delta}\equiv\int G(\Delta)v(t,\Delta)d\Delta\), where \(G(\Delta)\) is the inhomogeneous line shape. Equation (5) relates the cavity mode \(a\) to the input and output modes \(a_{in}\), \(a_{out}\) according to the input-output approach [28]. We assume that a pulse of light comes at the moment \(t_{c}=(t_{0}+t_{1})/2\), where \(t_{0},t_{1}\) are two distant time moments. There might have been additional pulses before the time \(t_{0}\) or after \(t_{1}\), but there are no other pulses in the time interval \((t_{0},t_{1})\), which is much longer than the pulse duration \(\delta t\). Now we multiply both parts of the field equations (1) & (5) by \(g\) and integrate over time \(\int_{t_{0}}^{t_{1}}dt\) to arrive at the general pulse area equation: \[\frac{\kappa_{S}}{2}\Theta=\sqrt{\kappa}\Theta_{in}+Ng^{2}\int_{t_{0}}^{t_{1}}dt\int d\Delta G(\Delta)v(t,\Delta), \tag{6}\] \[\Theta_{out}=\Theta_{in}-\sqrt{\kappa}\Theta, \tag{7}\] where \(\kappa_{S}=\kappa+\kappa_{in}\). We also considered that \(\int_{t_{0}}^{t_{1}}\partial_{t}a(t)dt=a(t_{1})-a(t_{0})=0\) when \(t_{1}-t_{0}\gg\delta t\). The treatment of this equation is analogous to the cases of the photon echo area theorem in free space [31; 25] and in the Fabry-Perot cavity [25] obtained previously.
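For readers who prefer to cross-check the pulse-area relations against the underlying dynamics, the sketch below integrates Eqs. (1)-(5) by brute force for a single input pulse on a Lorentzian detuning grid. All numerical choices (rates, pulse shape, grid) are illustrative, and the printed residual of the ground-state area relation (Eq. (9) below) is only expected to be small when the detection window is long and the inhomogeneous width exceeds both the pulse and cavity bandwidths.

```python
# Brute-force integration of the semiclassical Maxwell-Bloch system (1)-(5).
# Everything below (rates, pulse shape, detuning grid) is an illustrative choice.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

kappa, kappa_in = 1.0, 0.0                # external / internal cavity decay rates
varkappa = 1.0                            # 2 N g^2 / Delta_inh (impedance matched here)
Delta_inh, gamma_h = 10.0, 0.0            # inhomogeneous width, homogeneous decay
g = 0.05
Ng = varkappa * Delta_inh / (2.0 * g)     # N * g implied by varkappa

deltas = np.linspace(-5 * Delta_inh, 5 * Delta_inh, 201)
weights = 1.0 / (deltas ** 2 + Delta_inh ** 2)   # Lorentzian line shape G(Delta)
weights /= weights.sum()

def a_in(t, t0=3.0, tp=0.5, integral=0.25 * np.pi / g):
    """Gaussian input pulse; g * (its time integral) plays the role of Theta_in."""
    return integral / (tp * np.sqrt(np.pi)) * np.exp(-((t - t0) / tp) ** 2)

def rhs(t, y):
    a, (u, v, w) = y[0], np.split(y[1:], 3)
    omega = g * a
    da = (-(kappa + kappa_in) / 2 * a + Ng * np.dot(weights, v)
          + np.sqrt(kappa) * a_in(t))
    du = -deltas * v - gamma_h * u
    dv = deltas * u - gamma_h * v + omega * w
    dw = -omega * v
    return np.concatenate(([da], du, dv, dw))

y0 = np.concatenate(([0.0], np.zeros(201), np.zeros(201), -np.ones(201)))
sol = solve_ivp(rhs, (0.0, 15.0), y0, max_step=0.02, rtol=1e-7)

theta = g * trapezoid(sol.y[0], sol.t)            # cavity pulse area
theta_in = g * trapezoid(a_in(sol.t), sol.t)      # input pulse area
residual = (0.5 * (kappa + kappa_in) * theta - np.sqrt(kappa) * theta_in
            + 0.5 * varkappa * np.sin(theta))
print(f"Theta_in = {theta_in:.3f}, Theta = {theta:.3f}, "
      f"ground-state area-relation residual = {residual:.3f}")
```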
This allows us to obtain the general equation for the pulse area of an incoming pulse or an arbitrary echo pulse (even in the presence of an external exciting pulse with a pulse area \(\Theta_{in}\) at the cavity entrance): \[\frac{\kappa_{S}}{2}\Theta=\sqrt{\kappa}\Theta_{in}+\frac{\varkappa}{2}\Big{[}2v_{0}\cos^{2}\frac{\Theta}{2}+w_{0}\sin\Theta\Big{]}, \tag{8}\] \[\Theta_{out}=\Theta_{in}-\sqrt{\kappa}\Theta,\] where \(v_{0}\) and \(w_{0}\) are the resonant components of the atomic polarization that come into phase and lead to the emission of an echo signal at the time \(t_{c}\); \(\varkappa=2Ng^{2}\pi G(0)=2Ng^{2}/\Delta_{inh}\) for the Lorentzian inhomogeneous broadening \(G(\Delta)\). Under a classical treatment \(\varkappa\) corresponds to the atomic absorption per round trip inside the cavity: \(\varkappa=2\alpha L/t_{rt}\) (\(\alpha L\) is the optical depth of the resonance transition, \(t_{rt}\) is the cavity round trip time). ### Incoming pulses The first incoming pulse arrives while the system is in the ground state, so we substitute \(v_{0}=0,w_{0}=-1\) into Eq. (8) to get [32]: \[\frac{\kappa_{S}}{2}\Theta_{1}=\sqrt{\kappa}\Theta_{in,1}-\frac{\varkappa}{2}\sin\Theta_{1}. \tag{9}\] In the limit of a weak signal pulse \(\Theta_{1}\ll 1\) we have \(\Theta_{out,1}=\frac{\varkappa+\kappa_{in}-\kappa}{\kappa_{S}+\varkappa}\Theta_{in,1}\). Analogously to electronics, we can derive an impedance matching condition for which \(\Theta_{out,1}=0\) and the incoming pulse is fully absorbed: \(\xi_{im}=\frac{\kappa}{\varkappa+\kappa_{in}}=1\). The parameter \(\xi_{im}\) defines the relative coupling strength of the interaction of the light mode with the atoms and with the free propagating modes, which plays an important role in the light-atom dynamics. The behavior of the pulse area under the impedance matching condition was studied in [32] for \(\kappa_{in}=0\). Figure 1 shows the dependence of the pulse area inside the ring cavity, \(\Theta_{1}\), and at the output of the cavity, \(\Theta_{out,1}\), on the incoming pulse area. Note that pulse areas outside and inside the cavity have different units and we use the additional factor \((2/\sqrt{\kappa})\) for pulse areas outside the cavity to compensate for that. The most notable feature of Fig. 1 is the sharp rise of the output pulse area near the point \((2/\sqrt{\kappa})\Theta_{in,1}=\pi\). The sharpness of the rise depends on the ratio between the coupling constants \(\xi=\varkappa/\kappa_{S}\) and is the sharpest at the impedance matching condition. This feature is a more pronounced version of the self-induced transparency effect in free space [22]: if the incoming pulse area is smaller than the threshold value \((2/\sqrt{\kappa})\Theta_{in,1}=\pi\) then the output pulse area gravitates to \(0\), otherwise it comes close to \(2\pi(\sqrt{\kappa}/2)\). The introduction of the cavity increases the field-atom interaction and thus the nonlinear effects typical of optically dense media appear. For the second pulse we get an equation similar to (9), \[\frac{\kappa_{S}}{2}\Theta_{2}=\sqrt{\kappa}\Theta_{in,2}-\frac{\varkappa}{2}\cos\Theta_{1}\sin\Theta_{2}, \tag{10}\] where the initial atomic inversion is modified by the action of the first pulse, \(w_{0}=-\cos\Theta_{1}\). Solving Eq. (10) is very similar to solving Eq. (9) when \(\Theta_{1}<\pi/2\).
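The transcendental relation (9) is easy to solve numerically. The sketch below, with illustrative impedance-matched parameters, scans the incoming pulse area and reproduces the qualitative threshold behaviour around (2/sqrt(kappa)) Theta_in,1 = pi discussed in connection with Fig. 1 (the overall sign of the output area is a convention, cf. Eqs. (5) and (7)).

```python
# Solve equation (9) for Theta_1 on a grid of incoming areas. Illustrative parameters.
import numpy as np
from scipy.optimize import brentq

kappa, kappa_in, varkappa = 1.0, 0.0, 1.0   # impedance matched: kappa = varkappa + kappa_in
kappa_S = kappa + kappa_in

def theta_inside(theta_in):
    """Root of (kappa_S/2) Theta - sqrt(kappa) Theta_in + (varkappa/2) sin(Theta) = 0."""
    f = lambda th: (0.5 * kappa_S * th - np.sqrt(kappa) * theta_in
                    + 0.5 * varkappa * np.sin(th))
    return brentq(f, 0.0, 4.0 * np.pi)

if __name__ == "__main__":
    for x in (0.25, 0.5, 0.9, 1.0, 1.1, 1.5):    # (2/sqrt(kappa)) Theta_in,1 in units of pi
        theta_in = x * np.pi * np.sqrt(kappa) / 2.0
        th = theta_inside(theta_in)
        th_out = np.sqrt(kappa) * th - theta_in  # output area up to an overall sign convention
        print(f"(2/sqrt(k))Theta_in,1 = {x:4.2f} pi -> Theta_1 = {th/np.pi:.3f} pi, "
              f"(2/sqrt(k))Theta_out,1 = {2*th_out/(np.sqrt(kappa)*np.pi):.3f} pi")
```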
Below we will immediately move on to the study of the behavior of echo signals Figure 1: Transition pulse areas \(\Theta_{1},\Theta_{out,1}\) versus the incoming pulse area \(\Theta_{in,1}\) in the impedance-matched ring cavity (\(\varkappa=\kappa_{S}\)) in normalized coordinates. Blue solid line shows the pulse area inside the cavity \(\Theta_{1}/\pi\) and the orange dashed line shows the pulse area at the output of the cavity \(\Theta_{out,1}(2/\pi\sqrt{\kappa})\). The \((2/\sqrt{\kappa})\) factor conforms the areas inside and outside of the cavity. ## III Photon echo pulse area ### Total echo pulse area Using Eqs. (9),(10) for the incoming pulse we can derive the pulse area of the total pulse area of all echo pulses excited inside the two-level medium. To do this we view both incoming pulses as a single composite pulse and write Eq.(9) for this pulse: \[\frac{\kappa_{S}}{2}\Theta_{tot}=\sqrt{\kappa}(\Theta_{in,1}+\Theta_{in,2})- \frac{\varkappa}{2}\sin\Theta_{tot}, \tag{11}\] here \(\Theta_{tot}\) is the total pulse area of all pulses excited inside the cavity. To calculate the total pulse area of all echoes inside and outside the cavity we just need to subtract the areas of the incoming pulses: \[\Theta_{e,\Sigma}= \Theta_{tot}-\Theta_{1}-\Theta_{2}, \tag{12}\] \[\Theta_{out,\Sigma}= \Theta_{out,tot}-\Theta_{out,1}-\Theta_{out,2}. \tag{13}\] This can be used as an estimation tool for the echo pulse area. However Eq. (8) allows to study each echo signal individually. ### Linear solution Before studying the particular echo signals let us analyse some general properties of basic Eq. (8). An echo signal is irradiated without presence of external light pulse (\(\Theta_{in,e}=0\)): \[\Theta_{e}=\xi\Big{[}2v_{0}\cos^{2}\frac{\Theta_{e}}{2}+w_{0}\sin\Theta_{e} \Big{]}, \tag{14}\] where \(\xi=\varkappa/\kappa_{S}\). We can find from (14) that there is a particular solution: \(\Theta_{e}=2\pi\) at \(v_{0}\xi=\pi\), that is not longer available in its immediate vicinity and is therefore practically unrealizable. The solutions \(\Theta_{e}=\pi,3\pi\) are also impossible, so realistically we have \(0\leq\Theta_{e}<\pi\) and \(\pi<\Theta_{e}<3\pi\). It is also convenient to rewrite Eq. (14) in the form \[\Theta_{e}=2\xi\sqrt{v_{0}^{2}+w_{0}^{2}}\cos\frac{\Theta_{e}}{2}\sin\frac{( \varphi-\Theta_{e})}{2}, \tag{15}\] where \(\varphi=2\arctan\{-\frac{v_{0}}{w_{0}}\}\) and from Eq. (15) we find that \(\Theta_{e}<\varphi\). Let us first consider the case of a small signal pulse \(\Theta_{1}\), which is typical for optical quantum storage. It is particularly interesting since in this regime the echo pulse shape can be identical to the incoming (first) pulse shape, which means that the pulse area also characterises the energy and efficiency of the outgoing pulse. Consider at the beginning the limit of a weak signal pulse \(\Theta_{1}\) and an echo pulse \(v_{0}\ll 1\) and also that \(\Theta_{e}<1\). Then we can obtain the linear solution by keeping only the terms of the first order \(O(v_{0})\) in (14): \[\Theta_{e}=\frac{2\xi}{1-\xi w_{0}}v_{0}, \tag{16}\] Let us consider two interesting cases. The first is the formation of a two-pulse (primary) echo, and the second is the restoration of a suppressed echo (so called ROSE-protocol). We assume that the control exciting pulses are launched orthogonally to the resonator axis and have specified pulse areas \(\Theta_{2},\Theta_{3}\). 
By taking into account the relaxation of the atomic coherence for the time of the primary echo emission \(t_{e1}=2\tau\), where \(\tau\) is the time interval between the pulses, we have \(v_{0,pe}=\Gamma_{\tau}\sin\Theta_{1}\sin^{2}\frac{\Theta_{2}}{2}\), where \(\Gamma_{\tau}=e^{-\gamma t_{e1}}\) is the decoherence term and \(w_{0,pe}=-\cos\Theta_{1}\cos\Theta_{2}\)[24; 25]. By assuming weak pulse area in first signal pulse (\(\Theta_{1}\ll 1\)) for the primary echo we have: \(v_{0,pe}\cong\Gamma_{\tau}\Theta_{1}\sin^{2}(\Theta_{2}/2)\), \(w_{0,pe}=-\cos\Theta_{1}\cos\Theta_{2}\cong-\cos\Theta_{2}\) and for ROSE-echo [33]: \(v_{0,rose}=\Gamma_{\tau}\sin\Theta_{1}\sin^{2}(\Theta_{2}/2)\sin^{2}(\Theta_{ 3}/2)\cong\Gamma_{2\tau}\Theta_{1}\sin^{2}(\Theta_{2}/2)\sin^{2}(\Theta_{3}/2)\), \(w_{0,rose}=-\cos\Theta_{1}\cos\Theta_{2}\cos\Theta_{3}\)\(\cong\)\(-\cos\Theta_{2}\cos\Theta_{3}\). Using these formulas and assuming \(\Theta_{2}=\Theta_{3}=\pi\) we get: \(v_{0,pe}=v_{0,rose}=\Theta_{1}\) and \(w_{0,pe}=1\), \(w_{0,rose}=-1\). Substituting these values in Eq. (16) we obtain for the primary echo: \[\Theta_{pe}=\frac{2\xi\Gamma_{\tau}}{1-\xi}\Theta_{1}=\frac{2 \frac{\varkappa}{k}\xi_{im}\Gamma_{\tau}}{(1+2\frac{k_{in}}{k})\xi_{im}-1}\Theta _{1}=\\ =\frac{\varkappa}{k_{in}}\Gamma_{\tau}\Theta_{1}=\frac{\varkappa} {\varkappa+k_{in}}\frac{k}{k_{in}}\Gamma_{\tau}\Theta_{1}, \tag{17}\] and for the ROSE echo: \[\Theta_{rose}= \frac{2\xi\Gamma_{2\tau}}{1+\xi}\Theta_{1}=\frac{\varkappa}{ \varkappa+k_{in}}\frac{2\Gamma_{2\tau}}{1+\xi_{im}}\Theta_{1} \tag{18}\] \[= \frac{\varkappa\Gamma_{2\tau}}{\varkappa+k_{in}}\Theta_{1},\] where we have taken into account the impedance matching condition \(\xi_{im}=1\). Thus, as follows from Eq.(17), the pulse area of the primary echo \(\Theta_{pe}\) increases with decreasing resonator losses \(k_{in}\) and large values may be larger than the pulse area of the signal pulse \(\Theta_{1}\) when \(k\gg k_{in}\), \(\varkappa\sim k_{in}\). This behavior indicates the amplification of the primary echo signal in an inverted medium where it will already be necessary to solve the nonlinear Eq.(14). As it is seen in Eq.(18), unlike \(\Theta_{pe}\), the pulse area of the ROSE signal \(\Theta_{rose}\) is always smaller than the pulse area of the signal pulse \(\Theta_{1}\) and only maximally approaches its value with a decrease of resonator losses. This behavior corresponds to the fulfillment of the matching condition in the signal field absorption describes the complete recover of the signal pulse in the ROSE signal which is used in the impedance matched photon echo QM schemes [19; 33; 34]. Thus, pulse area approach provides a simple tool for studies of some basic properties of such QM schemes, especially, depended on nonlinear properties of coherent light-atom interactions. ### Nonlinear approximate solution To obtain a more general analytical solution of Eq.(16) we decompose Eq.(14) in series about the point \(\Theta_{e}=0,v_{0}=0\). But this time we keep the terms up to the third order of magnitude (\(O(v_{0}^{3})\) or \(O(\Theta_{e}^{3})\)). This way we arrive to the cubic equation: \[\Theta_{e}^{3}+\frac{3v_{0}}{2w_{0}}\Theta_{e}^{2}+\frac{6(1-w_{0}\xi)}{w_{0} \xi}\Theta_{e}-\frac{12v_{0}}{w_{0}}=0. \tag{19}\] Introducing \(\zeta=(1-w_{0}\xi)/w_{0}\xi\), we find the discriminant \[\Delta=-108\Big{\{}8\zeta^{3}+\frac{v_{0}^{2}}{w_{0}^{2}}\left[-\frac{3}{4} \zeta^{2}+18\zeta+36\right]+o(v_{0}^{3})\Big{\}}. 
\tag{20}\] In the quantum memory case (small incoming pulse and strong rephasing pulse) \(\Delta<0\), which means that Eq.(19) has a single real root: \[\begin{split}\Theta_{e}&=-\frac{v_{0}}{2w_{0}}+ \sqrt[3]{-\frac{\Delta_{1}}{2}+\sqrt{\Delta_{0}}}+\sqrt[3]{-\frac{\Delta_{1} }{2}-\sqrt{\Delta_{0}}},\\ \Delta_{1}&=-\frac{v_{0}}{w_{0}}\left[12+3\zeta- \frac{1}{4}\left(\frac{v_{0}}{w_{0}}\right)^{2}\right].\end{split} \tag{21}\] where we introduced \(\Delta_{0}=-\Delta/108\). This formula is valid for \(\Theta_{in,1}\ll 1\), increasing the signal area will eventually change the sign of \(\Delta\), at this point Eq. (19) starts having three real roots. ### Primary echo Now we take into account that all light pulses excite resonator mode. In our previous paper [25] we described the algorithm of finding resonant components of initial polarization and inversion \(v_{0}\) and \(w_{0}\) for each echo pulse. These results can be readily applied here. Substituting the expression for phasing polarization \(v_{0,pe}\) and atomic inversion \(w_{0,pe}\) for the primary echo signal (see above) in Eq. (14) we have the nonlinear equation for the primary echo pulse area for the sample placed inside the single mode ring cavity: \[\begin{split}\Theta_{e1}=\xi\Big{[}2\Gamma_{\tau}& \sin\Theta_{1}\sin^{2}\frac{\Theta_{2}}{2}\cos^{2}\frac{\Theta_{e1}}{2}\\ &-\cos\Theta_{1}\cos\Theta_{2}\sin\Theta_{e1}\Big{]},\\ \Theta_{out,e1}&=\sqrt{\kappa}\Theta_{e1}.\end{split} \tag{22}\] Note that Eq. (22) contains the difference of two terms and when these two terms become equal the echo area becomes zero. Now we can compare the numerical solution of Eq.(22) with approximate solutions obtained in the previous section. To test the accuracy of the approximate solution we study the echo pulse area \(\Theta_{out,e1}\) while varying the first pulse's incoming area \(\Theta_{in,1}\). The second pulse area we keep constant \((2/\sqrt{\kappa})\Theta_{in,2}=0.9\pi\). The term \(2/\sqrt{\kappa}\) accounts for the fact that pulse area inside the cavity \(\Theta\) differs from the incoming pulse area \(\Theta_{in}\). To find the values of \(\Theta_{1},\Theta_{2}\) used to calculate \(v_{0}\) and \(w_{0}\) we numerically solve Eqs. (9) and (10). Figure 2 shows the comparison between the linear approximation Eq.(16), cubic approximation Eq.(21) and numerical solution of Eq.(22). To plot the cubic solution we solved Eq.(19), which has a single real root for small \(\Theta_{in,1}\). For larger values of \((2/\sqrt{\kappa})\Theta_{in,1}>0.75\pi\) Eq. (19) has three real roots from which we choose the root that maintains the continuity of the solution. Figure 2 shows that the linear approximation works very well in the region \(\Theta_{e1}<0.2\pi\), but deviates rapidly after this point. The cubic approximation works well for the primary echo across all range of the incoming first pulse areas \(\Theta_{1}\in[0,2\pi)\), the relative error does not exceed \(11\%\). Note that the introduction of decoherence in the form of \(\Gamma_{\tau}=0.5\) term in Eq.(22) brings the cubic solution and numerical solution even closer. Decoherence makes the echo signal smaller and consequently the difference between numerical and approximate solutions becomes smaller too (see the caption for the Fig. 2). The primary echo experiences a sharp drop and a change of sign near the point \((2/\sqrt{\kappa})\Theta_{in,1}=\pi\). This is a consequence of the same nonlinear nature that causes the sharp rise of the transmitted pulse area of a single pulse in Fig. 1. 
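The nonlinear relation (22) can also be solved with a few lines of code. The sketch below uses the linear estimate (16) as the initial guess for a root finder; parameters are illustrative, the areas Theta_1, Theta_2 are taken directly as cavity areas (starting from Theta_in one would first solve Eqs. (9)-(10)), and with kappa_in = 0 the linear estimate can overshoot strongly because the denominator 1 - xi*w_0 becomes small in the inverted medium, which is precisely the amplification regime discussed above.

```python
# Primary-echo pulse area: linear estimate (16) versus a numerical solution of
# equation (22). Illustrative, impedance-matched parameters.
import numpy as np
from scipy.optimize import fsolve

kappa, kappa_in, varkappa = 1.0, 0.0, 1.0
xi = varkappa / (kappa + kappa_in)
Gamma_tau = 0.5                                 # decoherence factor exp(-gamma * 2 tau)

def primary_echo_area(theta1, theta2):
    v0 = Gamma_tau * np.sin(theta1) * np.sin(theta2 / 2.0) ** 2
    w0 = -np.cos(theta1) * np.cos(theta2)
    theta_lin = 2.0 * xi * v0 / (1.0 - xi * w0)                       # equation (16)
    f = lambda th: th - xi * (2.0 * v0 * np.cos(th / 2.0) ** 2 + w0 * np.sin(th))
    return theta_lin, fsolve(f, theta_lin)[0]                         # equation (22)

if __name__ == "__main__":
    theta2 = 0.9 * np.pi                        # cavity area of the rephasing pulse
    for theta1 in (0.1 * np.pi, 0.25 * np.pi, 0.5 * np.pi):
        lin, full = primary_echo_area(theta1, theta2)
        print(f"Theta_1 = {theta1/np.pi:.2f} pi: linear = {lin/np.pi:.3f} pi, "
              f"nonlinear = {full/np.pi:.3f} pi, "
              f"Theta_out,e1 = {np.sqrt(kappa)*full/np.pi:.3f} pi")
```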
We note that this drop is sharper in the case of the impedance-matched ring cavity than in the case of the Fabry-Perot cavity [27]. The sharpness is higher in the case of the ring cavity due to the uniform excitation of the sample. In the Fabry-Perot cavity the field forms a standing wave, causing additional phase mismatch between different regions of the sample [27]. The peak value of the echo pulse area depends on the coupling constants ratio \(\xi=\varkappa/\kappa_{S}\). The echo pulse area is limited and does not exceed \(\pi\) for the impedance matched cavity. Figure 3 shows the dependence of the primary echo area efficiency \(\Theta_{out,e1}/\Theta_{in,1}\) versus the incoming pulse area of the second pulse \(\Theta_{in,2}\) for the impedance-matched cavity (\(\xi=1\)) with negligibly low resonator losses \(\kappa_{in}\ll\kappa\). The incoming signal pulse area is constant, \((2/\sqrt{\kappa})\Theta_{in,1}=\pi/10\). Figure 3 shows that the dependence is periodic and peaks near the characteristic points \((2/\sqrt{\kappa})\Theta_{in,2}=\pi\cdot n\) where the efficiency exceeds \(1\), meaning that the echo pulse is amplified, similar to the free-space echoes [25; 35]. The observed pulse area amplification is also confirmed by the echo pulse energy investigations [36; 37; 38]. The periodicity is broken in the Fabry-Perot cavity, where the field forms a standing wave inside the cavity, leading to a more complicated dependence [27]. ### Secondary echoes After the primary echo pulses there may be secondary echo pulses generated in the medium under certain conditions. To calculate the pulse area of these echoes we need to find the corresponding phasing resonant components of \(v_{0}\) and \(w_{0}\) at the time of the pulse emission. For the second echo at the time \(3\tau\) we have [25]: \[\begin{split} v_{0,e2}=&\Gamma_{\tau}^{2}\cos\Theta_{1}\sin\Theta_{2}\sin^{2}\frac{\Theta_{e1}}{2}\\ &+\frac{1}{2}\Gamma_{\tau}^{2}\sin\Theta_{1}\sin\Theta_{2}\sin\Theta_{e1},\\ w_{0,e2}=&-\cos\Theta_{1}\cos\Theta_{2}\cos\Theta_{e1}\\ &-\Gamma_{\tau}^{2}\sin\Theta_{1}\sin^{2}\frac{\Theta_{2}}{2}\sin\Theta_{e1}.\end{split} \tag{24}\] We substitute these expressions into Eq. (8) and solve numerically for \(\Theta_{e2}\) and \(\Theta_{out,e2}\). Similar but more complicated formulas are derived for the third echo pulse [25]. Those can be found in the Appendix. Figure 4 shows the dependence of the first three echoes' pulse areas versus the relative coupling strength \(\varkappa/\kappa\). The incoming pulse areas are \((2/\sqrt{\kappa})\Theta_{in,1}=\pi/2\), \((2/\sqrt{\kappa})\Theta_{in,2}=0.9\pi\), close to the classic \((\pi/2,\pi)\) sequence of photon/spin echo [30; 2]. The echo area dependencies have two characteristic regions. In the small coupling regime (\(\varkappa/\kappa<0.5\)) the primary photon echo dominates the process and the majority of the energy is emitted into the first echo pulse. In the strong coupling regime (\(\varkappa/\kappa>1\)) all three of the considered echo signals become comparable. The region (\(0.5<\varkappa/\kappa<1\)) is an intermediate regime, in which the multi-pulse echoes are still small and each consequent echo is smaller than the previous one: \(\Theta_{e,n+1}<\Theta_{e,n}\). The coupling strength "chooses" whether the system is in the single-pulse or the multi-pulse echo generation regime. This feature has been confirmed experimentally in the recent works [39; 21]. In the strong coupling regime there are multiple echo pulses generated inside the cavity.
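To illustrate how the cascade is chained in practice, the sketch below solves Eq. (14) for the primary echo and then feeds Theta_e1 into the phasing amplitudes (24) to obtain the second echo; decoherence factors are set to one and the driving areas are taken directly as cavity areas, so the example demonstrates the procedure rather than reproducing Fig. 4 quantitatively (in the figure the cavity areas of the driving pulses themselves depend on the coupling through Eqs. (9)-(10)).

```python
# Chained solution of equation (14) for the primary and second echoes, using the
# phasing amplitudes quoted above equation (22) and in equation (24).
import numpy as np
from scipy.optimize import fsolve

def solve_area(v0, w0, xi):
    """Solve Theta = xi * (2 v0 cos^2(Theta/2) + w0 sin(Theta)), equation (14)."""
    f = lambda th: th - xi * (2.0 * v0 * np.cos(th / 2.0) ** 2 + w0 * np.sin(th))
    guess = 2.0 * xi * v0 / (1.0 - xi * w0)      # linear estimate (16)
    return fsolve(f, guess)[0]

def first_two_echoes(theta1, theta2, xi):
    v1 = np.sin(theta1) * np.sin(theta2 / 2.0) ** 2            # primary-echo phasing terms
    w1 = -np.cos(theta1) * np.cos(theta2)
    te1 = solve_area(v1, w1, xi)
    v2 = (np.cos(theta1) * np.sin(theta2) * np.sin(te1 / 2.0) ** 2
          + 0.5 * np.sin(theta1) * np.sin(theta2) * np.sin(te1))   # equation (24)
    w2 = (-np.cos(theta1) * np.cos(theta2) * np.cos(te1)
          - np.sin(theta1) * np.sin(theta2 / 2.0) ** 2 * np.sin(te1))
    return te1, solve_area(v2, w2, xi)

if __name__ == "__main__":
    theta1, theta2 = 0.5 * np.pi, 0.9 * np.pi                  # near the classic (pi/2, pi) pair
    for ratio in (0.3, 0.7, 1.0, 1.5):                         # varkappa / kappa, as in figure 4
        te1, te2 = first_two_echoes(theta1, theta2, xi=ratio)
        print(f"varkappa/kappa = {ratio:3.1f}: Theta_e1 = {te1/np.pi:.3f} pi, "
              f"Theta_e2 = {te2/np.pi:.3f} pi")
```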
Although we calculate only the first three echo signals, we can show that more secondary echo signals are actually generated by plotting the difference between the total echo pulse area of all echo pulses generated inside the medium and the sum of the first three echo areas. To do so we use Eq. (13), where we have \(\Theta_{diff}=0.1(\Theta_{out,\Sigma}-\Theta_{out,e1}-\Theta_{out,e2}-\Theta_{out,e3})/\Theta_{in,1}\). The factor \(0.1\) is introduced to match the scales of the different curves in Fig. 4. \(\Theta_{diff}\) is negligible in the single-pulse regime, meaning that there are no additional echo signals excited in the medium. In the strong coupling regime it rises up to \(0.5\), meaning that the total pulse area of the consequent echoes is approximately \(5\) times larger than the signal pulse area \(\Theta_{in,1}\), which suggests the existence of multiple additional echo pulses. Figure 2: Comparison between the numerical solution (blue solid curves) of Eq. (14), the approximate linear solution Eq. (16) (green dot-dashed line) and the approximate cubic solution Eq. (21) (orange dashed lines); \((2/\sqrt{\kappa})\Theta_{in,2}=0.9\pi,\xi=\varkappa/\kappa=1\). The difference between the exact solution and the third order approximation does not exceed \(11\%\). The error is even less if we introduce the decoherence term \(\Gamma_{\tau}=0.5\) (the second set of solid blue and dashed orange curves, closer to the \(x\)-axis). Each echo signal in Fig. 4 has a single maximum, corresponding to a single impedance matching condition. Previously we derived the impedance matching condition \(\varkappa=\kappa\) for the signal pulse, but each echo pulse also has its own impedance matching condition depending on the incoming pulse areas. We show this by comparing the primary echo curves for two different signal pulse areas, \((2/\sqrt{\kappa})\Theta_{in,1}=\pi/10\) (red dotted curve) and \((2/\sqrt{\kappa})\Theta_{in,1}=\pi/2\) (blue solid curve). The curves peak at different coupling strengths, meaning that the primary echo pulse impedance matching condition depends on the incoming signal pulse area. In turn, the second and third echo signals have their own impedance matching conditions and peak at different coupling strengths (orange dashed and green dash-dotted curves in Fig. 4). This allows one to optimize the impedance matching condition for a specific generation regime. ## IV Conclusion We applied the photon echo area theorem to the two-level atomic ensemble inside an optical ring cavity to find the pulse areas of the incoming pulses and photon echo pulses up to the third echo pulse. The approach successfully reproduces the nonlinear properties of the echo generation process. The echo strongly depends on the incoming pulse areas as well as on the coupling constant between the sample and the cavity mode. We see that the pulse area theorem successfully captures the main nonlinear features of the photon echo process: primary echo amplification in the inverted medium and multi-pulse echo signal excitation. The strong point of the approach is that it remains valid for arbitrary input pulse areas. It can also be useful in the analytical study of nonlinear patterns of photon echo in multi-resonator systems [40; 41], which is the subject of subsequent research. This work constitutes an important step towards establishing the photon echo area theorem approach as a general and reliable tool to analyze various photon echo schemes inside optical cavities and an analytical alternative to computer simulations.
This research was supported by the Ministry of Science and Higher Education of the Russian Federation (Reg. number NIOKTR 121020400113-1).
2310.19297
On Measuring Fairness in Generative Models
Recently, there has been increased interest in fair generative models. In this work, we conduct, for the first time, an in-depth study on fairness measurement, a critical component in gauging progress on fair generative models. We make three contributions. First, we conduct a study that reveals that the existing fairness measurement framework has considerable measurement errors, even when highly accurate sensitive attribute (SA) classifiers are used. These findings cast doubts on previously reported fairness improvements. Second, to address this issue, we propose CLassifier Error-Aware Measurement (CLEAM), a new framework which uses a statistical model to account for inaccuracies in SA classifiers. Our proposed CLEAM reduces measurement errors significantly, e.g., 4.98% $\rightarrow$ 0.62% for StyleGAN2 w.r.t. Gender. Additionally, CLEAM achieves this with minimal additional overhead. Third, we utilize CLEAM to measure fairness in important text-to-image generator and GANs, revealing considerable biases in these models that raise concerns about their applications. Code and more resources: https://sutd-visual-computing-group.github.io/CLEAM/.
Christopher T. H. Teo, Milad Abdollahzadeh, Ngai-Man Cheung
2023-10-30T06:33:48Z
http://arxiv.org/abs/2310.19297v1
# On Measuring Fairness in Generative Models ###### Abstract Recently, there has been increased interest in fair generative models. In this work, we conduct, for the first time, an in-depth study on **fairness measurement**, a critical component in gauging progress on fair generative models. We make three contributions. First, we conduct a study that reveals that the existing fairness measurement framework has considerable measurement errors, even when highly accurate sensitive attribute (SA) classifiers are used. These findings cast doubts on previously reported fairness improvements. Second, to address this issue, we propose CLassifier Error-Aware Measurement (CLEAM), a new framework which uses a statistical model to account for inaccuracies in SA classifiers. Our proposed CLEAM reduces measurement errors significantly, e.g., **4.98% \(\rightarrow\)0.62%** for StyleGAN2 _w.r.t._**Gender**. Additionally, CLEAM achieves this with minimal additional overhead. Third, we utilize CLEAM to measure fairness in important text-to-image generator and GANs, revealing considerable biases in these models that raise concerns about their applications. **Code and more resources:**[https://sutd-visual-computing-group.github.io/CLEAM/](https://sutd-visual-computing-group.github.io/CLEAM/). ## 1 Introduction Fair generative models have been attracting significant attention recently [1; 2; 7; 8; 9; 10; 11; 12; 13]. In generative models [14; 15; 16; 17; 18], fairness is commonly defined as equal generative quality [11] or equal representation [1; 2; 7; 9; 12; 19; 20]_w.r.t._ some _Sensitive Attributes_ (SA). In this work, we focus on the more widely utilized definition - _equal representation_. In this definition, as an example, a generative model is regarded as fair _w.r.t._**Gender**, if it generates Male and Female samples with equal probability. This is an important research topic as such biases in generative models could impact their application efficacy, e.g., by introducing racial bias in face generation of suspects [21] or reducing accuracy when supplementing data for disease diagnosis [22]. **Fairness measurement for generative models.** Recognizing the importance of fair generative models, several methods have been proposed to mitigate biases in generative models [1; 2; 7; 9; 12]. However, _in our work, we focus mainly on the accurate fairness measurement of deep generative models i.e. assessing and quantifying the bias of generative models._ This is a critical topic, as accurate measurements are essential to reliably gauge the progress of bias mitigation techniques. The general fairness measurement framework is shown in Fig. 1 (See Sec. 2 for details). This framework is utilized in existing works to assess their proposed fair generators. Central to the fairness measurement framework is a _SA classifier_, which classifies the generated samples _w.r.t._ a SA, in order to estimate the bias of the generator. For example, if eight out of ten generated face images are classified as Male by the SA classifier, then the generator is deemed biased at \(0.8\) towards Male (further discussion in Sec. 2). We follow previous works [1, 2, 12] and focus on binary SA due to dataset limitations. **Research gap.** In this paper, we study a critical research gap in fairness measurement. Existing works assume that when SA classifiers are highly accurate, measurement errors should be insignificant. As a result, the effect of errors in SA classifiers has not been studied. 
However, our study reveals that _even with highly accurate SA classifiers, considerable fairness measurement errors could still occur_. This finding raises concerns about potential errors in previous works' results, which are measured using existing framework. Note that the SA classifier is _indispensable_ in fairness measurement as it enables automated measurement of generated samples. **Our contributions.** We make three contributions to fairness measurement for generative models. _As our first contribution_, we analyze the accuracy of fairness measurement on generated samples, which previous works [1, 2, 7, 9, 12] have been unable to carry out due to the unavailability of proper datasets. We overcome this challenge by proposing new datasets of _generated samples_ with manual labeling w.r.t._ various SAs. The datasets include generated samples from Stable Diffusion Model (SDM) [5] --a popular text-to-image generator-- as well as two State-of-The-Art (SOTA) GANs (StyleGAN2 [3] and StyleSwin [4]) _w.r.t._ different SAs. Our new datasets are then utilized in our work to evaluate the accuracy of the existing fairness measurement framework. Our results reveal that the accuracy of the existing fairness measurement framework is not adequate, due to the lack of consideration for the SA classifier inaccuracies. More importantly, we found that _even in setups where the accuracy of the SA classifier is high, the error in fairness measurement could still be significant_. Our finding raises concerns about the accuracy of previous works' results [1, 2, 12], especially since some of their reported improvements are smaller than the margin of measurement errors that we observe in our study when evaluated under the same setup; further discussion in Sec. 3. To address this issue, _as our second (major) contribution_, we propose CLassifier Error-Aware Measurement (CLEAM), a new more accurate fairness measurement framework based on our developed statistical model for SA classification (further details on the statistical model in Sec. 4.1). Figure 1: **General framework for measuring fairness in generative models.** Generated samples with unknown ground-truth (GT) probability \(\mathbf{p^{*}}\)_w.r.t._ sensitive attribute (SA) are fed into a SA classifier to obtain \(\mathbf{\hat{p}}\). Existing framework (Baseline) uses the classifier output \(\mathbf{\hat{p}}\) as estimation of \(\mathbf{p^{*}}\). In contrast, our proposed CLEAM includes an improved estimation that accounts for inaccuracies in the SA classifier (see Alg. 1). **Our statistical model for fairness measurement.** This model accounts for inaccuracies in the SA classifier and is the base of our proposed CLEAM (see Sec. 4.1). **© Improvements with CLEAM.** CLEAM improves upon Baseline [1, 2] by reducing the relative error in estimating the GT \(p_{0}^{*}\) for SOTA GANs: StyleGAN2 [3] and StyleSwin [4], and Stable Diffusion Model [5]. First two displays the Baseline and CLEAM estimates for each GAN, using ResNet-18 as the SA classifier for Gender and BlackHair. The Baseline incurs significant fairness measurement errors (_e.g._ 4.98%), even when utilizing a highly accurate ResNet-18 (\(\approx\)97% accuracy). Meanwhile, CLEAM reduces the error significantly in all setups, _e.g._ in the first panel, the error is reduced: 4.98% \(\rightarrow\) 0.62%. 
Similarly, in the second row, CLEAM reduces measurement error significantly in the Stable Diffusion Model [5], using CLIP [6] as the SA classifier for Gender, _e.g._ first panel: 9.14% \(\rightarrow\) 0.05% (Detailed evaluation in Tab. 1 and Tab. 2). **Best viewed in color.** Specifically, CLEAM utilizes this statistical model to account for the classifier's inaccuracies during SA classification and outputs a more accurate fairness measurement. We then evaluate the accuracy of CLEAM and validate its improvement over existing fairness measurement framework. We further conduct a series of different ablation studies to validate performance of CLEAM. We remark that CLEAM is not a new fairness metric, but an improved fairness measurement framework that could achieve better accuracy in bias estimation when used with various fairness metrics for generative models. _As our third contribution_, we apply CLEAM as an accurate framework to reliably measure biases in popular generative models. Our study reveals that SOTA GANs have considerable biases _w.r.t._ several SA. Furthermore, we observe an intriguing property in Stable Diffusion Model: slight differences in semantically similar prompts could result in markedly different biases for SDM. These results prompt careful consideration on the implication of biases in generative models. **Our contributions are:** * We conduct a study to reveal that even highly-accurate SA classifiers could still incur significant fairness measurement errors when using existing framework. * To enable evaluation of fairness measurement frameworks, we propose new datasets based on generated samples from StyleGAN, StyleSwin and SDM, with manual labeling _w.r.t._ SA. * We propose a statistically driven fairness measurement framework, CLEAM, which accounts for the SA classifier inaccuracies to output a more accurate bias estimate. * Using CLEAM, we reveal considerable biases in several important generative models, prompting careful consideration when applying them to different applications. ## 2 Fairness Measurement Framework Fig.1(a) illustrates the fairness measurement framework for generative models as in [1; 2; 7; 9; 12]. Assume that with some input _e.g._ noise vector for a GAN or text prompt for SDM, a generative model synthesizes a sample \(\mathbf{x}\). Generally, as the generator does not label synthesized samples, the ground truth (GT) class probability of these samples _w.r.t._ a SA (denoted by \(\mathbf{p}^{\star}\)) is unknown. Thus, an SA classifier \(C_{\mathbf{u}}\) is utilized to estimate \(\mathbf{p}^{\star}\). Specifically, for each sample \(\mathbf{x}\in\{\mathbf{x}\}\), \(C_{\mathbf{u}}(\mathbf{x})\) is the argmax classification for the respective SA. In existing works, the expected value of the SA classifier output over a batch of samples, \(\mathbf{\hat{p}}=\mathbb{E}_{\mathbf{x}}[C_{\mathbf{u}}(\mathbf{x})]\) (or the average of \(\mathbf{\hat{p}}\) over multiple batches of samples), is used as an estimation of \(\mathbf{p}^{\star}\). This estimate may then be used in some fairness metric \(f\) to report the fairness value for the generator, _e.g._ fairness discrepancy metric between \(\mathbf{\hat{p}}\) and a uniform distribution \(\mathbf{\bar{p}}\)[1; 20] (see Supp A.3 for details on how to calculate \(f\)). Note that _the general assumption behind the existing framework is that with a reasonably accurate SA classifier_, \(\mathbf{\hat{p}}\)_could be an accurate estimation of \(\mathbf{p}^{\star}\)[1; 9]. 
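To make the baseline framework concrete, the following minimal sketch (our own illustration in Python, not code released by the cited works) estimates \(\mathbf{\hat{p}}\) for a binary SA by averaging the argmax decisions of an SA classifier over a batch of generated samples, and scores it against a uniform reference \(\mathbf{\bar{p}}\) with a simple L2 discrepancy; `generator.sample` and `sa_classifier` are hypothetical placeholders, and the exact metric \(f\) used in [1; 20] is the one detailed in Supp A.3.

```python
import numpy as np

def baseline_p_hat(samples, sa_classifier):
    """Baseline estimate of p*: the fraction of generated samples assigned
    to each class by the (possibly imperfect) binary SA classifier."""
    labels = np.asarray([sa_classifier(x) for x in samples])  # argmax labels in {0, 1}
    p1 = labels.mean()                       # fraction classified as class 1
    return np.array([1.0 - p1, p1])          # p_hat = [p_hat_0, p_hat_1]

def fairness_discrepancy(p_hat, p_bar=None):
    """Simple L2 discrepancy between the estimate and a uniform reference;
    one possible choice of fairness metric f (see Supp A.3 for the exact form)."""
    if p_bar is None:
        p_bar = np.full_like(p_hat, 1.0 / len(p_hat))
    return float(np.linalg.norm(p_hat - p_bar))

# Hypothetical usage with a batch of n generated images:
# samples = generator.sample(n=400)
# p_hat = baseline_p_hat(samples, sa_classifier)
# f = fairness_discrepancy(p_hat)
```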
In the next section, we will present a deeper analysis on the effects of an inaccurate SA classifier on fairness measurement. Our findings suggest that there could be a large discrepancy between \(\mathbf{\hat{p}}\) and \(\mathbf{p}^{\star}\), even for highly accurate SA classifiers, indicative of significant fairness measurement errors in the current measurement framework. One may argue that conditional GANs (cGANs) [23; 24] may be used to generate samples conditioned on the SA, thereby eliminating the need for an SA classifier. However, cGANs are not considered in previous works due to several limitations. These include the limited availability of large _labeled_ training datasets, the unreliability of sample quality and labels [25], and the exponentially increasing conditional terms, per SA. Similarly, for SDM, Bianchi _et al._[26] found that utilizing well-crafted prompts to mitigate biases is ineffective due to the presence of existing biases in its training dataset. Furthermore in Sec. 6, utilizing CLEAM, we will discuss that even subtle prompt changes (while maintaining the semantics) result in drastically different SA biases. See Supp G for further comparison between [26] and our findings. ## 3 A Closer Look at Fairness Measurement In this section, we take a closer look at the existing fairness measurement framework. In particular, we examine its performance in estimating \(\mathbf{p}^{\star}\) of the samples generated by SOTA GANs and SDM, a task previously unstudied due to the lack of a labeled generated dataset. We do so by designing an experiment to demonstrate these errors while evaluating biases in popular image generators. Following previous works, our main focus is on binary SA which takes values in \(\{0,1\}\). Note that, we assume that the accuracy of the SA classifier \(C_{\mathbf{u}}\) is known and is characterized by \(\mathbf{\alpha}=\{\alpha_{0},\alpha_{1}\}\), where \(\alpha_{i}\) is the probability of correctly classifying label \(i\). For example, for Gender attribute, \(\alpha_{0}\) and \(\alpha_{1}\) are the probability of correctly classifying Female, and Male classes, respectively. In practice, \(C_{\mathbf{u}}\) is trained on standard training procedures (more details in the Supp F) and \(\mathbf{\alpha}\) can be measured during the validation stage of \(C_{\mathbf{u}}\) and be considered a constant when the validation dataset is large enough. Additionally, \(\mathbf{p}^{\star}\) can be assumed to be a constant vector, given that the samples generated can be considered to come from an infinite population, as theoretically there is no limit to the number of samples from a generative model like GAN or SDM. **New dataset by labeling generators output.** The major limitation of evaluating the existing fairness measurement framework is the unavailability of \(\mathbf{p}^{\star}\). _To pave the way for an accurate evaluation, we create a new dataset by manually labeling the samples generated by GANs and SDM_. More specifically, we utilize the official publicly released pre-trained StyleGAN2 [3] and StyleSwin [4] on CelebA-HQ [27] for sample generation. Then, we randomly sample from these GANs and utilize Amazon Mechanical Turks to hand-label the samples _w.r.t._Gender and BlackHair, resulting in \(\approx\)9K samples for each GAN; see Supp H for more details and examples. Next, we follow a similar labeling process _w.r.t._Gender, but with a SDM [5] pre-trained on LAION-5B[28]. 
Here, we input prompts using best practices [26, 29, 30, 31], beginning with a scene description ("A photo with the face of"), followed by four indefinite (gender-neutral) pronouns or nouns [32, 33] - ("an individual", "a human being", "one person", "a person") to collect \(\approx\)2k high-quality samples. We refer to this new dataset as Generated Dataset (**GenData**), which includes generated images from three models with corresponding SA labels: GenData-StyleGAN2, GenData-StyleSwin, GenData-SDM. We remark that these labeled datasets only provide a strong approximation of \(\mathbf{p}^{\star}\) for each generator, however as the datasets are reasonably large, we find this approximation sufficient and simply refer to it as the GT \(\mathbf{p}^{\star}\). Then utilizing this GT \(\mathbf{p}^{\star}\), we compare it against the estimated baseline (\(\hat{\mathbf{p}}\)). One interesting observation revealed by GenData is that all three generators exhibit a considerable amount of bias (see Tab.1 and 2); more detail in Sec. 6. Note that for a fair generator we have \(p_{0}^{*}=p_{1}^{*}=0.5\), and measuring the \(p_{0}^{*}\) and \(p_{1}^{*}\) is a good proxy for measuring fairness. **Experimental setup.** Here, we follow Choi _et al._[1] as the _Baseline_ for measuring fairness. In particular, to calculate each \(\hat{\mathbf{p}}\) value for a generator, a corresponding batch of \(n=400\) samples is randomly drawn from GenData and passed into \(C_{\mathbf{u}}\) for SA classification. We repeat this for \(s=30\) batches and report the mean results denoted by \(\mu_{\texttt{Base}}\) and the 95% confidence interval denoted by \(\rho_{\texttt{Base}}\). For a comprehensive analysis of the GANs, we repeat the experiment using four different SA classifiers: Resnet-18, ResNet-34 [34], MobileNet2 [35], and VGG-16 [36]. Then, to evaluate the SDM, we utilize CLIP [6] to explore the utilization of pre-trained models for zero-shot SA classification; more details on the CLIP SA classifier in Supp. E. As CLIP does not have a validation dataset, to measure \(\mathbf{\alpha}\) for CLIP, we utilize CelebA-HQ, a dataset with a similar domain to our application. We found this to be a very accurate approximation; see Supp D.7 for validation results. Note that for SDM, a separate \(\hat{\mathbf{p}}\) is measured for each text prompt as SDM's output images are conditioned on the input text prompt. As seen in Tab. 1 and 2, all classifiers demonstrate reasonably high average accuracy \(\in[84\%,98.7\%]\). Note that as we focus on binary SA (_e.g._Gender:{Male, Female}), both \(\mathbf{p}^{\star}\) and \(\hat{\mathbf{p}}\) have two components _i.e._\(\mathbf{p}^{\star}=\{p_{0}^{*},p_{1}^{*}\}\), and \(\hat{\mathbf{p}}=\{\hat{p}_{0},\hat{p}_{1}\}\). After computing the \(\mu_{\texttt{Base}}\) and \(\rho_{\texttt{Base}}\), we calculate _normalized \(L_{1}\) point error \(e_{\mu}\)_, and _interval max error \(e_{\rho}\) w.r.t._ the \(p_{0}^{*}\) (GT) to evaluate the measurement accuracy of the baseline method: \[e_{\mu_{\texttt{Base}}}=\tfrac{1}{p_{0}^{*}}|p_{0}^{*}-\mu_{\texttt{Base}}| \quad;\quad e_{\rho_{\texttt{Base}}}=\tfrac{1}{p_{0}^{*}}\max\{|\min(\rho_{ \texttt{Base}})-p_{0}^{*}|,|\max(\rho_{\texttt{Base}})-p_{0}^{*}|\} \tag{1}\] **Based on our results in Tab. 1**, for GANs, we observe that despite the use of reasonably accurate SA classifiers, there are significant estimation errors in the existing fairness measurement framework, _i.e._\(e_{\mu_{\texttt{Base}}}\in[4.98\%,17.13\%]\). 
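For reference, the two error measures of Eqn. 1 can be computed with small helpers such as the following sketch (our own; \(p_0^*\) is the ground truth obtained from the GenData labels, and the interval estimate is passed in as its two endpoints):

```python
def point_error(p0_star, mu_est):
    """Normalized L1 point error e_mu (Eqn. 1)."""
    return abs(p0_star - mu_est) / p0_star

def interval_max_error(p0_star, interval):
    """Interval max error e_rho (Eqn. 1): the largest deviation of the
    interval estimate's endpoints from the ground truth, normalized by p0*."""
    lo, hi = min(interval), max(interval)
    return max(abs(lo - p0_star), abs(hi - p0_star)) / p0_star

# e_mu  = point_error(p0_star, mu_base)
# e_rho = interval_max_error(p0_star, (rho_lo, rho_hi))
```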
In particular, looking at the SA classifier with the highest average accuracy of \(\approx 97\%\) (ResNet-18 on Gender), we observe significant discrepancies between GT \(p_{0}^{*}\) and \(\mu_{\texttt{Base}}\), with \(e_{\mu_{\texttt{Base}}}=4.98\%\). These errors generally worsen as accuracy marginally degrades, _e.g._ MobileNetv2 with accuracy \(\approx 96\%\) results in \(e_{\mu_{\texttt{Base}}}=5.45\%\). These considerably large errors contradict prior assumptions - that for a reasonably accurate SA classifier, we can assume \(e_{\mu_{\texttt{Base}}}\) to be fairly negligible. Similarly, our results in Tab. 2 for the SDM, show large \(e_{\mu_{\texttt{Base}}}\)\(\in[1.49\%,9.14\%]\), even though the classifier is very accurate. We discuss the reason for this in more detail in Sec. 5.1. _Overall, these results are concerning as they cast doubt on the accuracy of prior reported results._ For example, imp-weighting [1] which uses the same ResNet-18 source code as our experiment, reports a 2.35% relative improvement in fairness against its baseline _w.r.t._Gender, which falls within the range of our experiments smallest relative error, \(e_{\mu_{\text{\tiny{max}}}}\)=4.98%. Similarly, Teo _et al._[2] and Um _et al._[12] report a relative improvement in fairness of 0.32% and 0.75%, compared to imp-weighting [1]. These findings suggest that some prior results may be affected due to oversight of SA classifier's inaccuracies; see Supp. A.4 for more details on how to calculate these measurements. **Remark:** In this section, we provide the keystone for the evaluation of measurement accuracy in the current framework by introducing a labeled dataset based on generated samples. These evaluation results raise concerns about the accuracy of existing framework as considerable error rates were observed even when using accurate SA classifiers, an issue previously seen to be negligible. ## 4 Mitigating Error in Fairness Measurements The previous section exposes the inaccuracies in the existing fairness measurement framework. Following that, in this section, we first develop a statistical model for the erroneous output of the SA classifier, \(\hat{\mathbf{p}}\), to help draw a more systematic relationship between the inaccuracy of the SA classifier and error in fairness estimation. Then, with this statistical model, we propose CLEAM - a new measurement framework that reduces error in the measured \(\hat{\mathbf{p}}\) by accounting for the SA classifier inaccuracies to output a more accurate statistical approximation of \(\mathbf{p}^{\star}\). ### Proposed Statistical Model for Fairness Measurements As shown in Fig.1(a), to measure the fairness of the generator, we feed \(n\) generated samples to the SA classifier \(C_{\mathbf{u}}\). The output of the SA classifier (\(\hat{\mathbf{p}}\)) is in fact a random variable that aims to approximate the \(\mathbf{p}^{\star}\). Here, we propose a statistical model to derive the distribution of \(\hat{\mathbf{p}}\). As Fig.1(b) demonstrates in our running example of a binary SA, each generated sample is from _class 0_ with probability \(p_{0}^{\star}\), or from _class 1_ with probability \(p_{1}^{\star}\). Then, generated sample from _class \(i\)_ where \(i\in\{0,1\}\), will be classified correctly with the probability of \(\alpha_{i}\), and wrongly with the probability of \(\alpha_{i}^{\prime}=1-\alpha_{i}\). 
Thus, for each sample, there are four mutually exclusive possible events denoted by \(\mathbf{c}\), with the corresponding probability vector \(\mathbf{p}\): \[\mathbf{c}^{T}=\begin{bmatrix}c_{0|0}&c_{1|0}&c_{1|1}&c_{0|1}\end{bmatrix}\quad, \quad\mathbf{p}^{T}=[p_{0}^{\star}\alpha_{0}\quad p_{0}^{\star}\alpha_{0}^{ \prime}\quad p_{1}^{\star}\alpha_{1}\quad p_{1}^{\star}\alpha_{1}^{\prime}] \tag{2}\] where \(c_{i|j}\) denotes the event of assigning label \(i\) to a sample with GT label \(j\). Given that this process is performed independently for each of the \(n\) generated images, the probability of the counts for each output \(\mathbf{c}^{T}\) in Eqn. 2 (denoted by \(\mathbf{N}_{\mathbf{c}}\)) can be modeled by a multinomial distribution, _i.e._\(\mathbf{N}_{\mathbf{c}}\sim Multi(n,\mathbf{p})\)[37; 38; 39]. Note that \(\mathbf{N}_{\mathbf{c}}\) models the _joint probability distribution_ of these outputs, _i.e._\(\mathbf{N}_{\mathbf{c}}\sim\mathbb{P}(N_{c_{0|0}},N_{c_{1|0}},N_{c_{1|1}},N_{c_{ 0|1}})\) where, \(N_{c_{i|j}}\) is the random variable of the count for event \(c_{i|j}\) after classifying \(n\) generated images. Since \(\mathbf{p}\) is not near the boundary of the parameter space, and as we utilize a large \(n\), based on the central limit theorem, \(Multi(n,\mathbf{p})\) can be approximated by a multivariate Gaussian distribution, \(\mathbf{N}_{\mathbf{c}}\sim\mathbf{\mathcal{N}}(\mathbf{\mu},\mathbf{\Sigma})\), with \(\mathbf{\mu}=n\mathbf{p}\) and \(\mathbf{\Sigma}=n\mathbf{M}\)[40; 39], where \(\mathbf{M}\) is defined as: \[\mathbf{M}=diag(\mathbf{p})-\mathbf{p}\mathbf{p}^{T} \tag{3}\] \(diag(\mathbf{p})\) denotes a square diagonal matrix corresponding to vector \(\mathbf{p}\) (see Supp A.1 for expanded form). The _marginal distribution_ of this multivariate Gaussian distribution gives us a univariate (one-dimensional) Gaussian distribution for the count of each output \(\mathbf{c}^{T}\) in Eqn. 2. For example, the distribution of the count for event \(c_{0|0}\), denoted by \(N_{c_{0|0}}\), can be modeled as \(N_{c_{0|0}}\sim\mathcal{N}(\mathbf{\mu}_{1},\mathbf{\Sigma}_{11})\). Lastly, we find the total percentage of data points labeled as class \(i\) when labeling \(n\) generated images using the normalized sum of the related random variables, _i.e._\(\hat{p}_{i}=\frac{1}{n}\sum_{j}N_{c_{ij}}\). For our binary example, \(\hat{p}_{i}\) can be calculated by summing random variables with Gaussian distribution, which results in another Gaussian distribution [41], _i.e._, \(\hat{p}_{0}\sim\mathcal{N}(\tilde{\mu}_{\hat{p}_{0}},\tilde{\sigma}_{\tilde{p} _{0}}^{2})\), where: \[\tilde{\mu}_{\hat{p}_{0}}= \frac{1}{n}(\mathbf{\mu}_{1}+\mathbf{\mu}_{4})=p_{0}^{\star}\alpha_{0}+p_ {1}^{\star}\alpha_{1}^{\prime} \tag{4}\] \[\tilde{\sigma}_{\hat{p}_{0}}^{2}= \frac{1}{n^{2}}(\mathbf{\Sigma}_{11}+\mathbf{\Sigma}_{44}+2\mathbf{\Sigma}_{1 4})=\frac{1}{n}[(p_{0}^{\star}\alpha_{0}-(p_{0}^{\star}\alpha_{0})^{2})+(p_{ 1}^{\star}\alpha_{1}^{\prime}-(p_{1}^{\star}\alpha_{1}^{\prime})^{2})]+\frac{2} {n}p_{0}^{\star}p_{1}^{\star}\alpha_{0}\alpha_{1}^{\prime} \tag{5}\] Similarly \(\hat{p}_{1}\sim\mathcal{N}(\tilde{\mu}_{\hat{p}_{1}},\tilde{\sigma}_{\tilde{p} _{1}}^{2})\) with \(\tilde{\mu}_{\hat{p}_{1}}=(\mathbf{\mu}_{2}+\mathbf{\mu}_{3})/n\), and \(\tilde{\sigma}_{\tilde{p}_{1}}^{2}=(\mathbf{\Sigma}_{22}+\mathbf{\Sigma}_{33}+2\mathbf{ \Sigma}_{23})/n^{2}\) which is aligned with the fact that \(\hat{p}_{1}=1-\hat{p}_{0}\). 
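As a quick numerical sanity check of this model (our own illustration, not part of the paper's pipeline), one can simulate the classification process of Fig. 1(b) over many batches and compare the empirical mean and variance of \(\hat{p}_{0}\) with the closed-form values; the bias and accuracy numbers below are placeholders.

```python
import numpy as np

def simulate_p_hat0(p0_star, alpha0, alpha1, n=400, n_batches=10_000, seed=0):
    """Monte Carlo simulation: each of n samples per batch belongs to class 0
    with probability p0*, and is then (mis)classified according to the
    per-class accuracies alpha0 / alpha1. Returns p_hat_0 for every batch."""
    rng = np.random.default_rng(seed)
    is_class0 = rng.random((n_batches, n)) < p0_star
    correct = np.where(is_class0,
                       rng.random((n_batches, n)) < alpha0,
                       rng.random((n_batches, n)) < alpha1)
    labeled_as_0 = np.where(is_class0, correct, ~correct)
    return labeled_as_0.mean(axis=1)

def model_mean_var(p0_star, alpha0, alpha1, n=400):
    """Closed-form moments of p_hat_0: the mean follows Eqn. 4; since the
    total count labeled as class 0 is Binomial(n, mu), Var(p_hat_0) = mu(1-mu)/n."""
    mu = p0_star * alpha0 + (1.0 - p0_star) * (1.0 - alpha1)
    return mu, mu * (1.0 - mu) / n

# p_hats = simulate_p_hat0(p0_star=0.7, alpha0=0.97, alpha1=0.96)
# print(p_hats.mean(), p_hats.var(), model_mean_var(0.7, 0.97, 0.96))
```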
**Remark:** In this section, considering the probability tree diagram in Fig.1(b), we propose a joint distribution for the possible events of classification (\(N_{c_{|ij}}\)), and use it to compute the marginal distribution of each event, and finally the distribution of the SA classifier outputs (\(\hat{p}_{0}\), and \(\hat{p}_{1}\)). Note that considering Eqn. 4, 5, only with a perfect classifier (\(\alpha_{i}=1\), _i.e._ acc\(=100\%\)) the \(\tilde{\mu}_{\hat{p}_{0}}\) converges to \(p_{0}^{*}\). However, training a perfect SA classifier is not practical _e.g._ due to the lack of an appropriate dataset and task hardness [42; 43]. As a result, in the following, we will propose CLEAM which instead utilizes this statistical model to mitigate the error of the SA classifier. ### CLEAM for Accurate Fairness Measurement In this section, we propose a new estimation method in fairness measurement that considers the inaccuracy of the SA classifier. For this, we use the statistical model, introduced in Sec 4.1, to compute a more accurate estimation of \(\mathbf{p}^{*}\). Specifically, we first propose a Point Estimate (PE) by approximating the _maximum likelihood value_ of \(\mathbf{p}^{*}\). Then, we use the _confidence interval_ for the observed data (\(\mathbf{\hat{p}}\)) to propose an Interval Estimate (IE) for \(\mathbf{p}^{*}\). **Point Estimate (PE) for \(\mathbf{p}^{*}\)**. Suppose that we have access to \(s\) samples of \(\mathbf{\hat{p}}\) denoted by \(\{\mathbf{\hat{p}}^{1},\ldots,\mathbf{\hat{p}}^{s}\}\), _i.e._ SA classification results on \(s\) batches of generated data. We can then use the proposed statistical model to approximate the \(\mathbf{p}^{*}\). In the previous section, we demonstrate that we can model \(\hat{p}_{j}^{*}\) using a Gaussian distribution. Considering this, first, we use the available samples to calculate sample-based statistics including the mean and variance of the \(\hat{p}_{j}\) samples: \[\tilde{\mu}_{\hat{p}_{j}} =\tfrac{1}{s}\sum_{i=1}^{s}\hat{p}_{j}^{i} \tag{6}\] \[\tilde{\sigma}_{\hat{p}_{j}}^{2} =\tfrac{1}{s-1}\sum_{i=1}^{s}(\hat{p}_{j}^{i}-\tilde{\mu}_{\hat{ p}_{j}})^{2} \tag{7}\] For a Gaussian distribution, the Maximum Likelihood Estimate (MLE) of the population mean is its sample mean \(\tilde{\mu}_{\hat{p}_{j}}\)[44]. Given that \(s\) is large enough (_e.g._\(s>30\)), we can assume that \(\tilde{\mu}_{\hat{p}_{j}}\) is a good approximation of the population mean [45], and equate it to the statistical population mean \(\tilde{\mu}_{\hat{p}_{j}}\) in Eqn. 4 (see Supp A.2 for derivation). With that, we get the _maximum likelihood approximation of \(\mathbf{p}^{*}\)_, _which we call the CLEAM's point estimate, \(\mu_{\texttt{CLEAM}}\)_: \[\mu_{\texttt{CLEAM}}(p_{0}^{*})=(\tilde{\mu}_{\hat{p}_{0}}-\alpha_{1}^{ \prime})/(\alpha_{0}-\alpha_{1}^{\prime})\quad,\quad\mu_{\texttt{CLEAM}}(p_{1 }^{*})=1-\mu_{\texttt{CLEAM}}(p_{0}^{*}) \tag{8}\] Notice that \(\mu_{\texttt{CLEAM}}\) accounts for the inaccuracy of the SA classifier. **Interval Estimate (IE) for \(\mathbf{p}^{*}\).** In the previous part, we propose a PE for \(\mathbf{p}^{*}\) using the statistical model, and sample-based mean \(\tilde{\mu}_{\hat{p}_{0}}\). However, as we use only \(s\) samples of \(\mathbf{\hat{p}}\), \(\tilde{\mu}_{\hat{p}_{0}}\) may not capture the exact value of the population mean. This adds some degree of inaccuracy into \(\mu_{\texttt{CLEAM}}\). In fact, in our framework, \(\tilde{\mu}_{\hat{p}_{0}}\) equals \(\tilde{\mu}_{\hat{p}_{0}}\) when \(s\to\infty\). 
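Before turning to the interval estimate, the point estimate of Eqn. 8 can be sketched as follows (our own minimal illustration of the PE step of Alg. 1; the clipping to \([0,1]\) is our safeguard and not part of the equation):

```python
import numpy as np

def cleam_point_estimate(p_hat_batches, alpha0, alpha1):
    """CLEAM point estimate of p0* (Eqn. 8) from s batch-level values of
    p_hat_0, given the SA classifier's per-class accuracies alpha0, alpha1."""
    p_hat0 = np.asarray(p_hat_batches, dtype=float)   # s samples of p_hat_0
    mu_bar = p_hat0.mean()                            # sample mean (Eqn. 6)
    alpha1_c = 1.0 - alpha1                           # misclassification rate of class 1
    p0 = (mu_bar - alpha1_c) / (alpha0 - alpha1_c)    # invert Eqn. 4
    p0 = float(np.clip(p0, 0.0, 1.0))                 # our safeguard: keep a valid probability
    return p0, 1.0 - p0

# e.g. cleam_point_estimate([0.71, 0.69, 0.70], alpha0=0.97, alpha1=0.96)
```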
However, increasing each unit of \(s\) significantly increases the computational complexity, as each \(\mathbf{\hat{p}}\) requires \(n\) generated samples. To address this, we recall that \(\hat{p}_{0}\) follows a Gaussian distribution and instead utilize frequentist statistics [41] to propose a 95% confidence interval (CI) for \(\mathbf{p}^{*}\). To do this, first we derive the CI for \(\tilde{\mu}_{\hat{p}_{0}}\): \[\tilde{\mu}_{\hat{p}_{0}}-1.96\tfrac{\tilde{\sigma}_{\hat{p}_{0}}}{\sqrt{s}}\leq\tilde{\mu}_{\hat{p}_{0}}\leq\tilde{\mu}_{\hat{p}_{0}}+1.96\tfrac{\tilde{\sigma}_{\hat{p}_{0}}}{\sqrt{s}} \tag{9}\] Then, applying Eqn. 4 to Eqn. 9 gives the lower and upper bounds of the approximated 95% CI for \(p_{0}^{*}\): \[\mathcal{L}(p_{0}^{*}),\mathcal{U}(p_{0}^{*})=(\tilde{\mu}_{\hat{p}_{0}}\mp 1.96(\tilde{\sigma}_{\hat{p}_{0}}/\sqrt{s})-\alpha_{1}^{\prime})/(\alpha_{0}-\alpha_{1}^{\prime}) \tag{10}\] This gives us the interval estimate of CLEAM, \(\rho_{\texttt{CLEAM}}=[\mathcal{L}(p_{0}^{*}),\mathcal{U}(p_{0}^{*})]\), a range of values that we can be approximately 95% confident contains \(p_{0}^{*}\). The range of possible values for \(p_{1}^{*}\) can be simply derived considering \(p_{1}^{*}=1-p_{0}^{*}\). The overall procedure of CLEAM is summarized in Alg. 1. Now, with the IE, we can provide statistical significance to the reported fairness improvements. ## 5 Experiments In this section, we first evaluate the fairness measurement accuracy of CLEAM on both GANs and SDM (Sec. 5.1) with our proposed GenData dataset. Then we evaluate CLEAM's robustness through some ablation studies (Sec. 5.2). To the best of our knowledge, there is no similar literature on improving fairness measurements in generative models. Therefore, we compare **CLEAM** with the two most related works: a) the **Baseline** used in previous works [1; 2; 7; 9; 12]; b) **Diversity** [46], which computes disparity within a dataset via an intra-dataset pairwise similarity algorithm. We remark that, as discussed by Keswani _et al._ [46], Diversity is model-specific, using VGG-16 [36]; see Supp. D.2 for more details. Finally, unless specified, we repeat the experiments with \(s=30\) batches of images from the generators with batch size \(n=400\). For a fair comparison, all three algorithms use the exact same inputs. However, while Baseline and Diversity ignore the SA classifier inaccuracies, CLEAM makes good use of them to rectify the measurement error. As mentioned in Sec. 4.2, for CLEAM, we utilize \(\mathbf{\alpha}\) measured on real samples, which we found to be a good approximation of the \(\mathbf{\alpha}\) measured on generated samples (see Supp. D.7 for results). We repeat each experiment 5 times and report the mean value for each test point for both PE and IE. See Supp D.1 for the standard deviation. ### Evaluating CLEAM's Performance **CLEAM for fairness measurement of SOTA GANs - StyleGAN2 and StyleSwin.** For a fair comparison, we first compute \(s\) samples of \(\hat{\mathbf{p}}\), one for each batch of \(n\) images. For Baseline, we use the mean \(\hat{\mathbf{p}}\) value as the PE (denoted by \(\mu_{\texttt{Base}}\)), and the \(95\%\) confidence interval as IE (\(\rho_{\texttt{Base}}\)). With the same \(s\) samples of \(\hat{\mathbf{p}}\), we apply Alg. 1 to obtain \(\mu_{\texttt{CLEAM}}\) and \(\rho_{\texttt{CLEAM}}\). For Diversity, following the original source code [46], a controlled dataset with fair representation is randomly selected from a held-out dataset of CelebA-HQ [27].
Then, we use a VGG-16 [36] feature extractor and compute Diversity, \(\delta\). With \(\delta\) we find \(\hat{p}_{0}=(\delta+1)/2\) and subsequently \(\mu_{\texttt{Div}}\) and \(\rho_{\texttt{Div}}\) from the mean and \(95\%\) CI (see Supp D.2 for more details on diversity). We then compute \(e_{\mu_{\texttt{CLEAM}}}\), \(e_{\mu_{\texttt{Div}}}\), \(e_{\rho_{\texttt{CLEAM}}}\) and \(e_{\rho_{\texttt{Div}}}\) with Eqn 1, by replacing the Baseline estimates with CLEAM and Diversity. As discussed, our results in Tab.1 show that the baseline experiences significantly large errors of \(4.98\%\leq e_{\mu_{\texttt{Base}}}\leq 17.13\%\), due to a lack of consideration for the inaccuracies of the SA classifier. We note that this problem is prevalent throughout the different SA classifier architectures, even with higher capacity classifiers _e.g._ ResNet-34. Diversity, a method similarly unaware of the inaccuracies of the SA classifier, presents a similar issue with \(8.98\%\leq e_{\mu_{\texttt{Dev}}}\leq 14.33\%\) In contrast, CLEAM dramatically reduces the error for all classifier architectures. Specifically, CLEAM reduces the average point estimate error from \(e_{\mu_{\texttt{Base}}}\geq 8.23\%\) to \(e_{\mu_{\texttt{CLEAM}}}\leq 1.24\%\), in both StyleGAN2 and StyleSwin. The IE presents similar results, where in most cases \(\rho_{\texttt{CLEAM}}\) bounds the GT value of \(\mathbf{p}^{\star}\). **CLEAM for fairness measurement of SDM.** We evaluate CLEAM in estimating the bias of the SDM _w.r.t._ Gender, based on the synonymous (gender-neutral) prompts introduced in Sec. 3. Recall that here we utilize CLIP as the zero-shot SA classifier. Our results in Tab 2, as discussed, show that utilizing the baseline results in considerable error (\(1.49\%\leq e_{\mu_{\texttt{Base}}}\leq 9.14\%\)) for all prompts, even though the SA classifier's average accuracy was high, \(\approx 98.7\%\) (visual results in Fig.2). A closer look at the theoretical model's Eqn. 4 reveals that this is due to the larger inaccuracies observed in the biased class (\(\alpha^{\prime}_{1}\)) coupled with the large bias seen in \(p^{\star}_{1}\), which results in \(\mu_{\texttt{Base}}\) deviating from \(p^{\star}_{0}\). In contrast, CLEAM accounts for these inaccuracies and significantly minimizes the error to \(e_{\mu_{\texttt{CLEAM}}}\leq 1.77\%\). Moreover, CLEAM's IE is able to consistently bound the GT value of \(p^{\star}_{0}\). ### Ablation Studies and Analysis Here, we perform the ablation studies and compare CLEAM with classifier correction methods. _We remark that detailed results of these experiments are provided in the Supp due to space limitations._ **CLEAM for measuring varying degrees of bias.** As we cannot control the bias in trained generative models, to simulate different degrees of bias, we evaluate CLEAM with a _pseudo-generator_. Our results show that CLEAM is effective at different biases (\(p^{\star}_{0}\in\) [0.5,0.9]) reducing the average error from \(2.80\%\leq e_{\mu_{\texttt{Base}}}\leq 6.93\%\) to \(e_{\mu_{\texttt{CLEAM}}}\leq 0.75\%\) on CelebA [47]_w.r.t._ {Gender,BlackHair}, and AFHQ [48]_w.r.t._ Cat/Dog. See Supp D.3 and D.4 for full experimental results. **CLEAM vs Classifier Correction Methods [49]**. CLEAM generally accounts for the classifier's inaccuracies, without targeting any particular cause of inaccuracies, for the purpose of rectifying the fairness measurements. 
This objective is unlike traditional classifier correction methods as it does not aim to improve the actual classifier's accuracy. However, considering that classifier correction methods may improve the fairness measurements by directly rectifying the classifier inaccuracies, we compare its performance against CLEAM. As an example, we utilize the Black Box Shift Estimation / Correction (BBSE / BBSC) [49] which considers the label shift problem and aims to correct the classifier output by detecting the distribution shift. Our results, based on Sec. 5.1 setup, show that while BBSE does improve on the fairness measurements of the baseline _i.e._ 4.20% \(\leq e_{\mu\text{name}}\leq\) 3.38%, these results are far inferior to CLEAM's results seen in Tab. 1. In contrast, BBSC demonstrates no improvements in fairness measurements. See Supp D.8 for full experimental results. We postulate that this is likely due to the strong assumption of label shift made by both methods. **Effect of batch-size.** Utilizing experimental setup in Sec. 5.1 for batch size \(n\in\)[100,600], our results in Fig. 9 show that \(n\)=400 is an ideal batch size, balancing computational cost and measurement accuracy. See Supp F for full experimental details and results. ## 6 Applying CLEAM: Bias in Current SOTA Generative Models In this section, we leverage the improved reliability of CLEAM to study biases in the popular generative models. Firstly, with the rise in popularity of text-to-image generators [50; 51; 52; 5], we revisit our results when passing different prompts, with synonymous neutral meanings to an SDM, and take a closer look at how subtle prompt changes can impact bias _w.r.t._\(\mathsf{Gender}\). Furthermore, we further investigate if similar results would occur in other SA, Smiling. Secondly, with the shift in popularity from convolution to transformer-based architectures [53; 54; 55], due to its better sample quality, we determine whether the learned bias would also change. For this, we compare StylesSwin (transformer) and StyleGAN2 (convolution), which are both based on the same architecture backbone. Our results, on SDM, demonstrate that the use of different synonymous neutral prompts [32; 33] results in different degrees of bias _w.r.t._ both \(\mathsf{Gender}\) and \(\mathsf{Smiling}\) attributes. For example in Fig. 2, a semantically insignificant prompt change from "one person" to "a person" results in a significant shift in \(\mathsf{Gender}\) bias. Then, in Fig. 4a, we observe that while the SDM _w.r.t._ our prompts appear to be heavily biased to not-\(\mathsf{Smiling}\), having "person" in the prompt appears to significantly reduce this bias. This suggests that for SDM, even semantically similar neutral prompts [32; 33] could result in different degrees of bias, thereby demonstrating certain instability in SDM. Next, our results in Fig. 4b compare the bias in StyleGAN2, StylesSwin, and the training CelebA-HQ dataset over an extended number of SAs. Overall, we found that while StyleSwin produces better quality samples [4], the same biases still remain statistically unchanged between the two architectures _i.e._ their IE overlap. Interestingly, our results also found that both the GANs were less biased than the training dataset itself. ## 7 Discussion **Conclusion.** In this work, we address the limitations of the existing fairness measurement framework. 
Since generated samples are typically unlabeled, we first introduce a new labeled dataset based on three state-of-the-art generative models for our studies. Our findings suggest that the existing framework, which ignores classification inaccuracies, suffers from significant measurement errors, even when the SA classifier is very accurate. To rectify this, we propose CLEAM, which considers these inaccuracies in its statistical model and outputs a more accurate fairness measurement. Overall, CLEAM demonstrates improved accuracy over extensive experimentation, including both real generators and controlled setups. Moreover, by applying CLEAM to popular generative models, we uncover significant biases that raise efficacy concerns about these models' real-world application. Figure 3: Comparing the point error \(e_{\mu}\) for Baseline and CLEAM when evaluating the bias of GenData-CelebA with ResNet-18, while varying sample size, \(n\). **Broader Impact.** Given that generative models are becoming more widely integrated into our everyday society through text-to-image generation, it is important that we have reliable means to measure fairness in generative models, thereby allowing us to prevent these biases from proliferating into new technologies. CLEAM provides a step in this direction by allowing for more accurate evaluation. We remark that our work _does not introduce any social harm_ but instead improves on the already existing measurement framework. **Limitations.** Despite the effectiveness of the proposed method across various generative models, our work addresses only one facet of the problems in the existing fairness measurement and there is still room for further improvement. For instance, it may be beneficial to consider the SA to be non-binary when hair color is not necessarily fully black (e.g. grey). Additionally, existing datasets used to train classifiers are commonly human-annotated, which may themselves contain certain notions of bias. See Supp. I for further discussion. ## Acknowledgements This research is supported by the National Research Foundation, Singapore under its AI Singapore Programmes (AISG Award No.: AISG2-TC-2022-007) and SUTD project PIE-SGP-AI-2018-01. This research work is also supported by the Agency for Science, Technology and Research (A*STAR) under its MTC Programmatic Funds (Grant No. M23L7b0021). This material is based on research/work supported in part by the Changi General Hospital and Singapore University of Technology and Design, under the HealthTech Innovation Fund (HTIF Award No. CGH-SUTD-2021-004). We thank anonymous reviewers for their insightful feedback and discussion. Figure 4: Applying CLEAM to further assess the bias in popular generative models.
2304.05721
Optimization of pencil beam scanning pattern for FLASH proton therapy
Purpose: The FLASH effect, which reduces the radiosensitivity of healthy tissue while maintaining tumor control at high dose rates, has shown potential for improving radiation therapy. Conformal FLASH proton therapy involves advanced beam-shaping technologies and specialized nozzle designs to confine the dose to the target volume. Optimizing the spot delivery pattern and range modulators can enhance the local dose rate, and genetic algorithms have been used to optimize scan patterns for stereotactic FLASH proton therapy of early-stage lung cancer and lung metastases. A fast and effective method based on graph theory is proposed to optimize the dose rate in specific regions of interest. Methods: We have created a graph-based algorithm to optimize the trajectory of proton spots to maximize the 100th percentile dose rate. Since this problem is NP-hard, we have employed an approximation algorithm that can solve this kind of Traveling Salesman Problem efficiently. Results: When compared to a conventional serpentine pattern, the optimized scanning trajectory led to a doubling of the median dose rate, but only a minor increase in DR95. Our approach is more efficient and requires fewer evaluations of the objective function and hyper-parameters compared to existing genetic algorithms. Conclusions: The optimized scanning trajectory led to a doubling of the median dose rate, but only a minor increase in DR95. The extent to which the dose rate can be increased depends on the size and shape of the region of interest. Future research could explore integrating FLASH objectives into treatment planning and incorporating the proposed method into plan optimization.
Sylvain Deffet, Edmond Sterpin
2023-04-12T09:24:30Z
http://arxiv.org/abs/2304.05721v1
# Optimization of pencil beam scanning pattern for FLASH proton therapy ###### Abstract **Background:** The FLASH effect, which reduces the radiosensitivity of healthy tissue while maintaining tumor control at high dose rates, has shown potential for improving radiation therapy. While the mechanisms behind the effect are not fully understood, it has been extensively studied with MeV electron beams and high-energy proton beams. However, to achieve FLASH proton therapy, changes equipment and delivery systems are needed. Conformal FLASH proton therapy involves advanced beam-shaping technologies and specialized nozzle designs to confine the dose to the target volume. Optimizing the spot delivery pattern and range modulators can enhance the local dose rate, and genetic algorithms have been used to optimize scan patterns for stereotactic FLASH proton therapy of early-stage lung cancer and lung metastases. **Purpose:** Maximize the dose rate within regions of interest through an efficient approach grounded in graph theory. **Methods:** We have created a graph-based algorithm to optimize the trajectory of proton spots to maximize the 100th percentile dose rate. Since this problem is NP-hard, we have employed an approximation algorithm that can solve this kind of Traveling Salesman Problem (TSP) efficiently. **Results:** When compared to a conventional serpentine pattern, the optimized scanning trajectory led to a doubling of the median dose rate, but only a minor increase in DR95. Our approach is more efficient and requires fewer evaluations of the objective function and hyper-parameters compared to existing genetic algorithms. **Conclusions:** The optimized scanning trajectory led to a doubling of the median dose rate, but only a minor increase in DR95. The extent to which the dose rate can be increased depends on the size and shape of the region of interest. Future research could explore integrating FLASH objectives into treatment planning and incorporating the proposed method into plan optimization. FLASH, Flash Proton Therapy, Dose Rate, PBS ## 1 Introduction The FLASH effect, which refers to a significant reduction of the radiosensitivity of healthy tissue while maintaining tumor control at ultrahigh dose rates, has garnered considerable interest in the radiation therapy community since it was reported in 2014[4]. Although FLASH has a huge potential to increasing the therapeutic bandwidth of radiation therapy, the mechanisms behind the effect are still not fully understood, but possible explanations focus on the role of oxygen[5], radiochemistry, and the immune system[14, 18, 12, 9, 8, 5]. The effect has been most extensively studied in radiobiological experiments with MeV electron beams, and a first patient has been successfully treated with FLASH electron beam therapy. Recent research has demonstrated that the FLASH effect is also present in high-energy proton beams[1], which may be particularly suited for deep-seated targets. One of the main challenges in developing FLASH proton therapy is the need to significantly increase the dose rate. Preclinical experiments are typically conducted using small fields covered by a passive scattering method, but the maximum achievable field size for a given dose rate is directly limited by the maximum current that the system can output. However, the pencil beam scanning (PBS) technique has the potential to locally achieve high dose rates due to the limited size of the spots that are delivered individually. 
To achieve a FLASH dose rate in PBS, a number of changes to the equipment and delivery system used in intensity modulated proton therapy (IMPT) are needed. In particular, the proton beam must be delivered at a much higher energy than is generally used in IMPT to ensure a high transmission efficiency[10]. Additionally, using a treatment plan that involves multiple energies incurs delays required by the system to switch from one energy to the next, negatively impacting the average dose rate. One approach to delivering a FLASH treatment involves shooting at maximum energy through the patient with so-called transmission beams, but this results in a significant amount of dose being delivered after the tumor, compromising the superior dosimetric potential of protons[15]. Conformal FLASH proton therapy, on the other hand, involves the use of advanced beam-shaping technologies and specialized nozzle designs to confine the dose to the target volume, similar to IMPT. A patient-specific range modulator, located between the nozzle and the patient, is used to tailor the range of the proton beam, enabling a conformal treatment plan with a single high-energy layer. Several methods have been proposed to optimize range modulators[16, 13, 20, 3]. In addition to utilizing high energy beams, optimizing the spot delivery pattern can enhance the local dose rate[11]. This optimization is a combinatorial problem that requires the use of approximation algorithms when the dimensionality of the problem increases. For instance, Jose Santo _et al.[11]_ applied genetic algorithms to optimize scan patterns for stereotactic FLASH proton therapy of early-stage lung cancer and lung metastases. These algorithms can easily adapt to complex cost functions such as maximizing the dose rate in specific organs at risk. However, there are costs associated with these methods, such as the need to tune hyper-parameters and the execution time required. We propose a fast and effective method to optimize the dose rate in specific regions of interest (ROIs) based on graph theory. In an _in silico_ study, the optimized pattern is then compared with the conventional serpentine pattern for a head and neck case in terms of computed dose rates. ## 2 Materials and Methods ### Dose rate definition In order to optimize the dose rate, an essential first step is to establish a clear definition. In PBS proton therapy, local variations in dose rate occur as each voxel receives dose contributions from nearby PBS spots. Therefore, it is essential to consider the irradiation time of the spots and the time taken for the pencil beam to move from one position to the next. These factors have been incorporated into the definition of the PBS dose rate which has been established by Folkerts _et al.[7, 6]_. This definition was later extended to explicitely include a dose threshold expressed as a percentage of the dose delivered to the voxel, resulting in the following percentile dose rate:[2]: \[\hat{D}_{i}^{P}=\frac{pD_{i}}{t_{1,i}-t_{0,i}} \tag{1}\] where \(t_{1,i}=t_{i}(\frac{(1-p)}{2}2D_{i})\), \(t_{0,i}=t_{i}(\frac{(1+p)}{2}D_{i})\), \(p\) is an arbitrary percentage, and \(D_{i}\) it the total accumulated dose received in voxel \(i\). The maximum percentile dose rate proposed by Deffet _et al.[2]_ extends the concept of percentile dose rate by considering all time windows in which the accumulated dose is at least \(pD_{i}\): \[\hat{D}_{i}^{p} =\ \max_{t_{0},t_{1}}\frac{J_{t_{0}}^{t_{1}}\,d_{i}(t)dt}{t_{1}-t_ {0}} \tag{2}\] \[s.t. 
\int_{t_{0}}^{t_{1}}d_{i}(t)dt\geq pD_{i}\] \[t_{1}>t_{0}\] where \(d_{i}(t)\) is the dose received in voxel \(i\) at time \(t\). In the present study, we focus on optimizing the 100-percentile dose rate for each voxel, which is the dose delivered to the voxel divided by the corresponding time interval over which it is delivered. Our selection of the 100-percentile dose rate as the optimization objective is the result of the graph representation that will be introduced later in the paper, and which will be thoroughly discussed. According to Eq. 1, the 100-percentile dose rate is: \[\hat{D}_{i}^{100}(\mathbf{S})=\frac{D_{i}}{t_{1}(\mathbf{S})-t_{0}(\mathbf{S})} =\frac{D_{i}}{T_{i}^{100}(\mathbf{S})} \tag{3}\] where \(\mathbf{S}\) is the spot sequence, \(t_{1}(\mathbf{S})=t_{i}(D_{i}^{-})\) and \(t_{0}(\mathbf{S})=t_{i}(0^{+})\) and where we introduce \(T_{i}^{100}(\mathbf{S})=t_{1}(\mathbf{S})-t_{0}(\mathbf{S})\) as the quantity will be used many times later. \(\hat{D}_{i}^{100}(\mathbf{S})\) is equivalent to the PBS and percentile dose rates with a dose threshold of \(0^{+}\ \mathrm{Gy}\). The use of a threshold of \(0^{+}\ \mathrm{Gy}\) instead of \(0\ \mathrm{Gy}\) is the mathematical expression that we do not want to count the time when no dose is given before the first dose contribution to the voxel and also after the last contribution to the voxel. ### Optimization problem The objective is to maximize the dose rate in some specific ROIs, ie. to solve: \[\arg\max_{\mathbf{S}}\sum_{i\in ROI}\hat{D}_{i}^{100}(\mathbf{S}) \tag{4}\] We relax this objective and rather consider the minimization of the time required to deliver the dose to each voxel: \[\arg\min_{\mathbf{S}}\sum_{i\in ROI}T_{i}^{100}(\mathbf{S}) \tag{5}\] In other words, we are going to determine the spot sequence that minimizes the averaged time required to deliver the dose to each voxel. One very important point is that for each voxel, we do not count the time when no dose is given before any dose contribution to the voxel and also after that the voxel has received its full dose. #### 2.2.1 Optimization of the delivery time of the whole field Our initial focus is on the fundamentals of graph theory as applied to the optimization of treatment field delivery time. While this is not our primary objective, this exercise is helpful in laying the groundwork for many of the definitions and concepts that will be utilized later on. All the possibles delivery timing can be represented by means of a fully connected graph, named \(G\). The corresponding adjacency matrix is called \(\mathbf{M}\). \(M_{s_{1},s_{2}}\) is the time, also named \(T_{s_{1}\to s_{2}}\), required to move from spot \(s_{1}\) to spot \(s_{2}\). \(M_{s_{1},s_{1}}\) is the time, also named \(T_{s_{1}}\), is the irradiation time associated to spot \(s_{1}\): \[\mathbf{M}=\begin{bmatrix}T_{s_{1}}&T_{s_{1}\to s_{2}}&T_{s_{1}\to s_{3}}& \ldots\\ T_{s_{2}\to s_{1}}&T_{s_{2}}&T_{s_{2}\to s_{3}}&\ldots\\ \vdots&\vdots&\ddots&\vdots\\ \ldots&\ldots&\ldots&\ldots\end{bmatrix}\] According to our simple delivery model, \(M\) is symmetrical, i.e. \(T_{s_{1}\to s_{2}}=T_{s_{2}\to s_{1}}\) for all \(s_{1},s_{2}\). In the context of optimizing the time required to deliver a treatment field, the eligible delivery sequences refer to all the possible routes that visit each spot exactly once. If a delivery sequence is feasible, the delivery time of the whole treatment field can be represented by the integrated distance along the sequence. 
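As a concrete illustration, a sketch of this construction is given below (our own Python illustration, assuming the simple delivery model adopted later in the in silico assessment: constant nozzle current and travel time proportional to the scanning distance; the default current and scanning speed are the values quoted there):

```python
import numpy as np

def build_timing_matrix(positions_mm, charges_nC, current_nA=500.0,
                        scan_speed_mm_s=8000.0):
    """Adjacency matrix M of the fully connected graph G:
    diagonal  -> spot irradiation times T_s   (charge / current, in seconds),
    off-diag  -> travel times T_{s1->s2}      (distance / scanning speed)."""
    pos = np.asarray(positions_mm, dtype=float)
    t_spot = np.asarray(charges_nC, dtype=float) / current_nA        # nC / nA = s
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    M = dist / scan_speed_mm_s                                       # travel times
    np.fill_diagonal(M, t_spot)
    return M

def field_delivery_time(sequence, M):
    """Integrated distance along a spot sequence on G(M): total beam-on time
    plus the travel times between consecutive spots."""
    seq = list(sequence)
    t = sum(M[s, s] for s in seq)                                    # irradiation
    t += sum(M[s1, s2] for s1, s2 in zip(seq[:-1], seq[1:]))         # scanning
    return t
```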
Thus, we can optimize the time required to deliver the whole field by finding the shortest path in \(G(\mathbf{M})\). This can be viewed as an instance of the well-known Travelling Salesman Problem (TSP), for which numerous algorithms exist in the literature, and which can be selected depending on the scale of the problem at hand. An equivalent way to build such a graph is not to place the delivery times \(T_{s}\) on the vertices but to add them on the edges. The corresponding adjacency matrix is thus: \[\mathbf{M}=\begin{bmatrix}0&T_{s_{1}}&T_{s_{2}}&\ldots\\ 0&0&T_{s_{1}\to s_{2}}+T_{s_{2}}&\ldots\\ 0&T_{s_{2}\to s_{1}}+T_{s_{1}}&0&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix} \tag{6}\] This matrix has zeros everywhere on the diagonal and is no longer symmetrical. It has one more row and column than the previous one. The first row ensures that the beam-on time of the first spot to be delivered is accounted for. The first column, set to zero everywhere, ensures that the first row can only be reached once. In this representation, a notable advantage is that considering the shortest path on the corresponding graph results in the consideration of all possible starting spots simultaneously, due to the addition of an extra, fictional spot \(s_{0}\). However, if a specific starting point is desired, the first row and column can be removed, and the matrix can be reorganized such that the starting spot occupies the first row and column. #### 2.2.2 Optimization of the local dose rate In light of the above method for optimizing the delivery timing of the entire treatment field, we can now focus on the specific task of optimizing the sum of \(T_{i}^{100}\) across all voxels within a designated region of interest (ROI). Considering a sequence of spots \(\mathbf{S}\) which is a path \(P\) in the graph, the sum of \(T_{i}^{100}\) can be obtained by computing, for each voxel, the minimum spanning subgraph in \(G\) which spans over the spots which contribute to \(i\): \[\sum_{i}T_{i}^{100}(P,\mathbf{M})=\sum_{i}P^{i}(\mathbf{M}) \tag{7}\] where \(P^{i}\) is the minimal subpath in \(P\) that delivers the full dose to voxel \(i\), and \(P^{i}(\mathbf{M})\) is the length of \(P^{i}\) computed on \(G(\mathbf{M})\). To determine the optimal sequence we could use brute force: 1. Compute every feasible path \(P\); 2. For each feasible path: (a) for each voxel \(i\), compute the set of spots which contribute to the dose of the voxel; (b) find the minimum spanning subpath \(P^{sub}\in P\) which spans over all the spots which contribute to \(i\) and compute its cumulative length \(T_{i}\); (c) add the length of this subpath to the cumulative sum \(\sum_{i}T_{i,p}^{100}\). The optimal path is the one that minimizes \(\sum_{i}T_{i,p}^{100}\) over all possible paths \(p\); a sketch of this per-path evaluation is given below. Given the infeasibility of using brute force to optimize problems with a significant number of delivery spots, we propose a modified approach to graph optimization. To prioritize routes that contribute to the largest number of voxels, which should decrease \(\sum_{i}T_{i}^{100}\), we adjust the edge weights of the original graph used for optimizing the entire treatment field.
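For a single candidate path, evaluating the objective of Eqn. 7 amounts to taking, for every ROI voxel, the elapsed time between the start of its first contributing spot and the end of its last contributing spot along the path. A minimal sketch (our own; `dose_influence` stands for the dose influence matrix \(D_{i,s}\) introduced below, restricted to ROI voxels, with spot indices consistent with `path` and \(\mathbf{M}\), and the path assumed to visit every spot exactly once):

```python
import numpy as np

def sum_T100(path, M, dose_influence, threshold=0.0):
    """Sum over ROI voxels of T_i^100 (Eqn. 7): for each voxel, the length on
    G(M) of the minimal sub-path of `path` spanning its contributing spots."""
    path = list(path)
    # cumulative time along the path, measured at the end of each spot's irradiation
    leg = [M[path[0], path[0]]]
    for s_prev, s in zip(path[:-1], path[1:]):
        leg.append(leg[-1] + M[s_prev, s] + M[s, s])
    leg = np.asarray(leg)
    start = leg - np.asarray([M[s, s] for s in path])   # time at which each spot starts

    order = {spot: k for k, spot in enumerate(path)}    # position of each spot in the path
    total = 0.0
    for d_i in dose_influence:                          # one row per ROI voxel
        contrib = np.nonzero(d_i > threshold)[0]        # spots contributing to this voxel
        if contrib.size == 0:
            continue
        idx = [order[s] for s in contrib]
        first, last = min(idx), max(idx)
        total += leg[last] - start[first]               # T_i^100 for this voxel
    return total
```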
Specifically, we construct an adjacency matrix \(\mathbf{E}\), where \(E_{s,s}=0\) for all \(s\), and \(E_{s_{1},s_{2}}\) represents the ratio of (1) the time required to deliver a dose from spot \(s_{2}\) when starting from spot \(s_{1}\), and (2) the number of voxels in the ROI that receive a dose contribution from spot \(s_{2}\). For the sake of conciseness, we define the number of voxels that receive contributions from spots \(s\) as follows. \[N_{s}=\sum_{i}\delta_{D_{i,s}>0} \tag{9}\] We now build the adjacency matrix \(\mathbf{E}\) similarly to \(\mathbf{M}\) but with \[E_{s_{1},s_{2}} =\frac{T_{s_{1}\to s_{2}}+T_{s_{2}}}{N_{s_{2}}} \tag{10}\] \[=\frac{M_{s_{1},s_{2}}}{N_{s_{2}}} \tag{11}\] The adjacency matrix \(\mathbf{E}\) is thus: \[\mathbf{E}=\mathbf{M}\left[\begin{array}{c|cccc}1&0&0&0&\ldots\\ \hline 0&\frac{1}{N_{s_{1}}}&0&0&\ldots\\ 0&0&\frac{1}{N_{s_{2}}}&0&\ldots\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&\ldots&\ldots&\ldots&\ldots\end{array}\right] \tag{13}\] The proposed approach aims to favor transitions based on the amount of voxels that receive dose at the end of the transition. The resulting algorithm for finding the optimal spot sequence is: 1. Select the subsets \(S\) of spots which contribute to the ROI 2. Compute \(T_{s_{i}}\forall s_{i}\in S\) 3. Compute \(T_{s_{i}\to s_{j}}\forall s_{i},s_{j}\in S\) 4. Compute dose influence matrix \(D_{i,s}\forall i\in ROI,s\in S\) 5. From \(D_{i,s}\), compute \(N_{s_{j}}\forall s_{j}\in S\). The utilization of a threshold may be implemented to eliminate any contributions to the dose that are deemed not significant enough to warrant inclusion in the calculation. 6. Compute \(\mathbf{E}\) from \(T_{s_{i}},T_{s_{i}\to s_{j}},N_{s_{j}}\) 7. Solve optimal_ordering = TSP(\(\mathbf{E}\)) ### _In silico assessment_ We applied spot pattern optimization on conformal FLASH treatment plans calculated using the methodology presented in our prior publication[3]. In this paper, a treatment plan was optimized on a head and neck case which is reused in the present publication. The PTV considered in the present study had a prescription of 54 Gy. However, dose rates must be computed per fraction. As FLASH treatments will most likely be hypofractionated[19], we considered that the dose per fraction would be around 8 Gy. The treatment plan and its associated range modulator were computed for an energy of 226 MeV, a spot size of (\(\sigma_{x}=4.5\;\mathrm{mm},\sigma_{y}=5\;\mathrm{mm})\). As the spot spacing has a direct impact on the scanning time and thus on the average dose rate, we considered two spot spacings: 4 mm which is close to that conventionnaly used, and 15 mm which is expected to yield higher dose rates. It is to be noted that because of the higher scattering introduced by the passive degradation of the beam, larger spot spacing may be used than in IMPT without introducing significant degradation of the dose[2]. To facilitate the comparison of the different dose rate formulas, we used a simple model where: 1. the nozzle output current was considered constant; 2. the time between 2 spots was proportional to the distance between the spots. In other words, 1. the irradiation time of a spot was the ratio between the charge of the spot and the current; 2. the time separating 2 spots was the ratio between the distance of the spots and an assumed scanning speed. The current was 500 nA (averaged on a pulse period) at the output of the nozzle and the scanning speed was 8000 mm/s. We considered that 1 MU corresponds to 152,880,000 protons. 
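Under this simple timing model, steps 5 to 7 of the spot-ordering procedure can be sketched end-to-end as follows (our own illustration, reusing the timing matrix \(\mathbf{M}\) from the earlier sketch; the method only requires an approximate TSP solver, so a nearest-neighbour heuristic is shown here as a placeholder, and the dose threshold is arbitrary):

```python
import numpy as np

def augment(M):
    """Augmented adjacency matrix of Eqn. 6: add the fictional start spot s0,
    move the beam-on times onto the edges and zero the diagonal."""
    t_spot = np.diag(M).copy()
    n = M.shape[0]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = t_spot                      # s0 -> s : irradiation time of s
    A[1:, 1:] = M + t_spot[None, :]        # s1 -> s2: travel time + irradiation of s2
    np.fill_diagonal(A, 0.0)
    return A

def build_E(M_aug, dose_influence, dose_threshold=0.0):
    """Edge weights of Eqn. 13: divide every column s of the augmented matrix
    by N_s, the number of ROI voxels receiving (significant) dose from spot s."""
    N_s = (dose_influence > dose_threshold).sum(axis=0).astype(float)
    scale = np.concatenate(([1.0], np.maximum(N_s, 1.0)))   # s0 unscaled; avoid /0
    return M_aug / scale[None, :]

def greedy_tsp(E):
    """Approximate open TSP on G(E): nearest-neighbour heuristic starting
    from the fictional spot s0 (index 0)."""
    order = [0]
    while len(order) < E.shape[0]:
        cost = E[order[-1]].copy()
        cost[order] = np.inf               # never revisit a spot
        order.append(int(np.argmin(cost)))
    return [s - 1 for s in order[1:]]      # drop s0, return real spot indices

# Hypothetical end-to-end usage:
# M = build_timing_matrix(spot_positions_mm, spot_charges_nC)
# sequence = greedy_tsp(build_E(augment(M), dose_influence_roi))
```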
A dose influence matrix was calculated using MCsquare [17] to determine the contributions of each spot to the dose of each voxel. In order to showcase the potential of the proposed approach, we applied it to optimize the spot pattern on both regions of interest (ROIs) corresponding to organs at risk and an extension of the PTV. The resulting dose rate maps were then compared to those obtained with the conventional serpentine pattern. ## 3 Results In our study, the submandibular (brown contour) and parotid (red contour) glands in the head and neck region were used as examples of regions of interest in which we could optimize the dose rate. The submandibular gland, being proximal to the target volume, receives a significant dose, while the smaller parotid gland is mostly within the target volume. First, we optimized the dose rate exclusively in the submandibular gland using our algorithm. Then, we optimized the dose rate exclusively in the parotid. Finally, we sought to maximize the dose rate in both the parotid and submandibular glands using the algorithm. For each spot scanning pattern, we computed the 95-percentile dose rates and the corresponding DRVHs. The results are presented in Fig. 2 and Fig. 1 for a spot spacing of 15 mm. The dashed DRVHs represent the serpentine pattern and the solid DRVHs correspond to the optimized spot delivery pattern. The yellow DRVH in the figure corresponds to the PTV. The results show a much higher DR50 with the optimized scanning pattern than with the conventional serpentine pattern. However, the DR95 is only slightly improved. In addition, we see that the improvement is much higher when only one organ at risk is considered. Overall, this means that the improvement of the dose rate values is limited by the size and the complexity of the volume in which we want to maximize it, which is not surprising. We conducted the same optimization procedure on a treatment plan with a spot spacing of 4 mm. Fig. 4 shows that the resulting dose rate was lower than with a spot spacing of 15 mm, which is due to the accumulated scanning time being larger, as can be seen by comparing the spot maps in Fig. 1 and Fig. 3. After optimizing the spot pattern, we found that a DR50 greater than 40 Gy/s was achievable in either the parotid or the submandibular gland, but not both simultaneously. These results demonstrate the crucial role that spot spacing plays in determining the maximum achievable dose rate. ## 4 Discussion In PBS proton therapy, local variations in dose rate occur as each voxel receives dose contributions from nearby PBS spots. In order to accurately calculate the dose rate, it is essential to consider the irradiation time of a spot and the time taken for the pencil beam to move from one position to the next. These factors have been incorporated into the definition of the PBS dose rate, which we refer to as the percentile dose rate when the dose threshold is expressed as a percentage of the dose delivered to the voxel. In our study, we have created a graph-based algorithm that optimizes the spot trajectory with the goal of maximizing the 100-percentile dose rate. It is widely acknowledged that solving such a graph-based problem is more efficient than utilizing genetic algorithms and involves fewer hyper-parameters to be fine-tuned. However, owing to the NP-hard characteristic of the TSP problem, approximation algorithms are typically necessary.
In addition, it should be noted that genetic algorithms possess certain advantages too, such as modularity and the capability to optimize various definitions of the dose rate. For instance, the genetic algorithm proposed by Jose Santos _et al._ could optimize both the 95- and 100-percentile dose rates. In contrast, it is not feasible to directly optimize the 95-percentile dose rate using our proposed method. Nonetheless, the 100-percentile dose rate can be considered the upper limit of the percentile dose rate, and optimizing it is expected to lead to an improvement in the 95-percentile dose rate too, as was observed in our _in silico_ study. In comparison to the serpentine pattern, our optimized scanning trajectory achieves a two-fold increase in the median dose rate. However, two important caveats should be noted. Firstly, the increase in DR95 is significantly lower than that of the median dose rate. Secondly, the degree to which the dose rate can be increased depends on the size and shape of the ROI being considered. It is unrealistic to expect that the simple optimization of the scanning trajectory can double the dose rate across the entire CTV and its extension. Nevertheless, our results are promising with respect to organs at risk, which we prioritize to benefit from the FLASH effect. Integrating FLASH objectives into treatment planning and incorporating our proposed method into the optimization of the plan are potential avenues for future research. Figure 1: (a) Unoptimized spot pattern and (b) spot pattern optimized to maximize the dose rate in both the parotid and submandibular glands with a spot spacing of 15 mm. Figure 2: For a spot spacing of 15 mm, (a) 95-percentile dose rate for the unoptimized (serpentine) spot pattern and (b) for the spot pattern optimized to maximize the dose rate in both the parotid and submandibular glands. (c) 95-percentile dose rate volume histograms for unoptimized and optimized spot patterns. ## 5 Conclusions A graph-based algorithm has been developed to optimize the spot trajectory to maximize the dose rate in ROIs. The optimized scanning trajectory achieved a two-fold increase in the median dose rate but only a limited increase in DR95. The degree to which the dose rate can be increased depends on the size and shape of the region of interest, and integrating FLASH objectives into treatment planning and incorporating the proposed method into plan optimization are potential future research avenues. ## 6 Acknowledgments This work was supported by the Walloon Region of Belgium through technology innovation partnership no. 8341 (EPT-1 - Emerging Proton Therapies Phase 1) co-led by MecaTech and BioWin clusters. ## 7 Conflict of Interest Statement We have no conflicts of interest to disclose.
2310.18189
Improving and extending non-Poissonian distributions for satellite galaxies sampling in HOD: applications to eBOSS ELGs
Halo Occupation Distribution (HOD) models help us to connect observations and theory, by assigning galaxies to dark matter haloes. In this work we study one of the components of HOD models: the probability distribution function (PDF), which is used to assign a discrete number of galaxies to a halo, given a mean number of galaxies. For satellite galaxies, the most commonly used PDF is a Poisson Distribution. PDFs with super-Poisson variances have also been studied, allowing for continuous values of variances. This has not been the case for sub-Poisson variances, for which only the Nearest Integer distribution, with a single variance, has been used in the past. In this work we propose a distribution based on the binomial one, which provides continuous sub-Poisson variances. We have generated mock galaxy catalogues from two dark-matter only simulations, UNIT and OUTERIM, with HOD models assuming different PDFs. We show that the variance of the PDF for satellite galaxies affects the one-halo term of the projected correlation function, and the Count-In-Cells (CIC) one point statistics. We fit the clustering of eBOSS Emission Line Galaxies, finding a preference for a sub-poissonian PDF, when we only vary the parameter controlling the PDF variance and the fraction of satellites. Using a mock catalogue as a reference, we have also included both the clustering and CIC to constrain the parameters of the HOD model. CIC can provide strong constraints to the PDF variance of satellite galaxies.
Bernhard Vos-Ginés, Santiago Avila, Violeta Gonzalez-Perez, Gustavo Yepes
2023-10-27T15:02:44Z
http://arxiv.org/abs/2310.18189v1
Improving and extending non-Poissonian distributions for satellite galaxies sampling in HOD: applications to eBOSS ELGs ###### Abstract Halo Occupation Distribution (HOD) models help us to connect observations and theory, by assigning galaxies to dark matter haloes. In this work we study one of the components of HOD models: the probability distribution function (PDF), which is used to assign a discrete number of galaxies to a halo, given a mean number of galaxies. For satellite galaxies, the most commonly used PDF is a Poisson Distribution. PDFs with super-Poisson variances have also been studied, allowing for continuous values of variances. This has not been the case for sub-Poisson variances, for which only the Nearest Integer distribution, with a single variance, has been used in the past. In this work we propose a distribution based on the binomial one, which provides continuous sub-Poisson variances. We have generated mock galaxy catalogues from two dark-matter only simulations, UNIT and OuterRim, with HOD models assuming different PDFs. We show that the variance of the PDF for satellite galaxies affects the one-halo term of the projected correlation function, and the Count-In-Cells (CIC) one point statistics. We fit the clustering of eBOSS Emission Line Galaxies, finding a preference for a sub-Poissonian PDF when we only vary the parameter controlling the PDF variance and the fraction of satellites. Using a mock catalogue as a reference, we have also included both the clustering and CIC to constrain the parameters of the HOD model. CIC can provide strong constraints to the PDF variance of satellite galaxies. keywords: (cosmology:) large-scale structure of Universe - Galaxy: halo ## 1 Introduction The nature of dark matter and dark energy are two of the greatest mysteries in cosmology. The large scale structure of the Universe is a powerful tool to investigate these two components, and it has gained significant attention in the last 20 years (Colless et al., 2001; Alam et al., 2021; Abbott et al., 2022). During this period of time, large scale surveys have significantly increased the volume of the Universe that has been mapped. Cosmological simulations must adapt to these growing volumes (Heitmann et al., 2016, 2019; Chuang et al., 2019; Ishiyama et al., 2021; Maksimova et al., 2021). In general, the largest cosmological simulations only include dark matter particle information and gravity-only evolution (e.g. Heitmann et al., 2019; Maksimova et al., 2021), whereas other smaller simulations may include gas particles with hydrodynamical evolution (gas cooling, AGN feedback, etc.) as well (e.g. Pillepich et al., 2018; Schaye et al., 2023). Thus, for the large dark-matter only simulations a connection between dark matter and tracers such as galaxies needs to be made (Somerville et al., 2001; Wechsler and Tinker, 2018). Several methods have been developed for this purpose, such as Semi-Analytical Models, which encapsulate the processes that we understand to be most important for the formation and evolution of galaxies into coupled differential equations (e.g. Baugh, 2006). A simpler model is Sub-halo Abundance Matching (SHAM), which links observed galaxies with simulated haloes, relying on a monotonic correspondence between halo mass functions and luminosity functions, with a certain level of scatter (e.g. Favole et al., 2016; Yu et al., 2023). Another relatively simple approach is Halo Occupation Distribution (HOD) modelling (e.g.
Benson et al., 2000; Seljak, 2000; Cooray and Sheth, 2002; Berlind et al., 2003; Zheng et al., 2005; Hearin et al., 2016). This can be applied to large simulations without the need to be complete in the subhalo mass function and without requiring the internal properties of the haloes to be resolved in much detail. The parameter range can be explored quickly, and HOD models are typically used for very large cosmological simulations (Zehavi et al., 2005; Manera et al., 2013; Carretero et al., 2015; Avila et al., 2020; Yuan et al., 2023; Rocher et al., 2023). One key ingredient of HOD models is the mean HOD, which controls the expected number of galaxies to be placed in a halo of a given mass. Another key ingredient is the probability distribution function (PDF) used to sample that mean HOD. The Poisson distribution is the most commonly employed approach in the literature to place satellite galaxies within haloes (e.g. Zehavi et al., 2005; Carretero et al., 2015; Avila et al., 2018; Rocher et al., 2023). However, PDFs with sub-Poisson and super-Poisson variances have also been used in the literature (Jimenez et al., 2019; Avila et al., 2020). In the case of super-Poisson variances, the negative binomial distribution has been used, which can parametrize the variance continuously. In the case of sub-Poisson variances, the Nearest Integer function gives a single variance, which is the smallest possible one. Nonetheless, there remains a range of variances between the Nearest Integer and Poisson distributions that has yet to be explored. In this work we propose and validate a full prescription for the HOD PDF covering the full range of variances. The minimum variance is given by the Nearest Integer distribution; we then cover the sub-Poissonian range with the binomial distribution and an extension that we propose in Section 4, we trivially include the Poisson distribution, and we cover the super-Poissonian range with an improved prescription of the Negative Binomial distribution. Ongoing galaxy surveys such as Euclid (Laureijs et al., 2011) or DESI (DESI Collaboration et al., 2016) will be able to constrain the satellite PDF for different tracers. On the one hand, these constraints can give a unique insight into the physics of galaxy formation that rules the galaxy-halo relation. On the other hand, we will show that the PDF variance can severely affect several clustering statistics (see also Avila et al., 2020). Hence, if this is not accounted for, it may bias our cosmological interpretation when studying galaxy clustering (see appendix B of Avila et al., 2020). In general, two-point correlation measurements are used to constrain HOD parameters, but interest has grown in the scientific community in alternative statistics to constrain the HOD, such as k-Nearest Neighbours (Yuan et al., 2023), which have been demonstrated to be at least as good as two-point statistics, providing complementary information and reducing possible degeneracies between parameters. In this work, we introduce Count-in-Cells (CIC) as an alternative statistic for constraining the HOD, and we will show that it has very promising constraining power. The dark matter simulations and observational data from eBOSS used in this work are described in Section 2. Section 3 describes the components of the HOD model. In Section 4, we introduce the binomial and extended binomial distributions and we evaluate their performance in Section 5.
In Section 6 we investigate the impact the probability distribution function (PDF) has on two-point statistics and we also fit the galaxy catalogues to eBOSS data in Subsection 6.2. In Section 7 we evaluate the constraining possibilities of Counts-in-Cells and we use it to reproduce a model catalogue of galaxies in Subsection 7.1. Finally, we summarize the results in Section 8. ## 2 Simulations and observations In this work we make use of two dark matter-only simulations, UNIT and OuterRim (§ 2.1). The cosmology and the properties of both simulations are summarized in Table 1. The clustering of model galaxies obtained from these simulations is later compared with SDSS-IV/eBOSS data (§ 2.2). ### Dark Matter Simulations The UNIT simulations (or UNITsim, Chuang et al., 2019) are full N-body dark matter-only simulations. Their initial conditions are set using the Zel'dovich Approximation (Zel'dovich, 1970) provided by the FastPM code (Feng et al., 2016). The UNITsims were implemented with the _fixed & paired_ technique, where the Fourier-mode amplitudes are fixed to the ensemble-averaged power spectrum (_fixed_) and two pairs of simulations were run with a \(\pi\) offset on the phases within each pair (_paired_) (Angulo and Pontzen, 2016). Both techniques reduce the wavemode variance considerably, raising the effective volume of the simulations up to \(V_{\rm eff}=150~{}(h^{-1}{\rm Gpc})^{3}\). Nevertheless, we only use one of the four UNIT simulations in this work. The evolution of particles up to redshift \(z=0\) is done by L-Gadget, a version of the gravity solver code Gadget2 (Springel, 2005). The dark matter haloes of UNIT are obtained using the ROCKSTAR halo finder (Behroozi et al., 2013), and halo masses are defined using the virial theorem. The other simulation we use in this work is OuterRim (Heitmann et al., 2019). It was run using the Hardware/Hybrid Accelerated Cosmology Code (HACC) (Habib et al., 2016). Haloes were obtained using the Friends-of-Friends (FoF) halo finder (Davis et al., 1985) with a linking length of b = 0.168. The FoF linking length defines the halo masses. Here we analyse single simulation snapshots. For UNIT we consider \(z=0.8594\), which is the closest snapshot to the eBOSS ELG effective redshift \(z_{\rm eff}=0.845\) (Raichoor et al., 2020). In the case of OuterRim, the closest snapshot corresponds to \(z=0.865\). For both simulations, we only consider haloes with at least 21 particles. #### 2.1.1 The bias function Following Avila et al. (2020), we first compute the bias and halo mass functions, which are used to set constraints on the HOD model as we will see in Subsection 3.1. For both functions we use halo mass bins that are narrow for small halo masses and become wider for higher masses to account for the lower number of haloes (see Figure 1). In Subsection 3.1 we relate those quantities to the HOD parameters. We calculate the halo bias function for OuterRim following the same procedure as in Avila et al. (2020), in order to allow a direct comparison. We find differences in the halo bias function below 1 per cent with respect to Avila et al. (2020). However, in the case of UNIT, we obtain the halo bias function in a slightly different way, by computing the power spectrum for each mass bin. We find this method provides more stable results than when considering the two-point correlation function used in Avila et al. (2020).
For obtaining the bias from the power spectrum, we use the following limits \(k_{\rm min}=2\pi/L_{\rm box}+dk/2\) and \(k_{\rm max}=2\pi N_{\rm grid}/L_{\rm box}+dk/2\), with linear spacing \(dk=2\pi/L_{\rm box}\) and \(N_{\rm grid}=512\). For each mass bin we obtain the corresponding bias by minimizing the \(\chi^{2}\) function: \[\chi_{i}^{2}(b)=\sum_{k=k_{\rm min}}^{k_{\rm max}}\frac{\left(b^{2}P_{\rm th}(k)-P_{h,i}(k)\right)^{2}}{\Delta P_{h,i}(k)^{2}}~{}~{}, \tag{1}\] where \(P_{\rm th}(k)\) is the linear matter power spectrum computed from UNIT dark matter particles, \(P_{h,i}(k)\) is the halo power spectrum for each mass bin \(i\) and \(b\) is the linear bias. We use \(k_{\rm cut}=0.1~{}{\rm Mpc}^{-1}h\), since the effect of non-linearities is non-negligible beyond this scale. \begin{table} \begin{tabular}{l c c} & UNIT & OuterRim \\ \hline \(\Omega_{\rm M}=1-\Omega_{\Lambda}\) & 0.3089 & 0.2648 \\ \(h=H_{0}/100~{}{\rm km}~{}{\rm s}^{-1}{\rm Mpc}^{-1}\) & 0.6774 & 0.7352 \\ \(\sigma_{8}\) & 0.8147 & 0.8 \\ \(n_{\rm s}\) & 0.9667 & 0.963 \\ \(V_{\rm box}=L_{\rm box}^{3}\) \(\left[{\rm Gpc}^{3}h^{-3}\right]\) & 1 & 27 \\ \(N_{\rm P}\) & \(4096^{3}\) & \(10240^{3}\) \\ \(m_{\rm P}\) \(\left[h^{-1}\,{\rm M}_{\odot}\right]\) & \(1.25\cdot 10^{9}\) & \(1.85\cdot 10^{9}\) \\ \end{tabular} \end{table} Table 1: UNIT and OuterRim simulation parameters. \(\Omega_{\rm M}\) is the dimensionless dark matter density, obtained by dividing the dark matter density by the critical density \(\rho_{\rm crit}\). Taking into account a negligible radiation density \(\Omega_{\rm r}\) and the assumption of a Euclidean Universe (\(\Omega_{k}=0\)), \(\Omega_{\rm M}\) is directly related to the dark energy density \(\Omega_{\Lambda}\) by \(\Omega_{\rm M}=1-\Omega_{\Lambda}\). \(\sigma_{8}\) is the amplitude of the density fluctuations; \(n_{\rm s}\) is the spectral index; \(V_{\rm box}\) is the volume of both cubic simulations; \(N_{\rm P}\) is the number of dark matter particles; and \(m_{\rm P}\) the mass of the dark matter particles. The error in the power spectrum \(\Delta P_{h,i}(k)^{2}\) has the following expression: \[\Delta P_{h,i}(k)^{2}=\frac{(2\pi)^{2}}{k^{2}\,dk\,V_{\rm box}}\left(P_{h,i}(k)+\frac{1}{n_{i}}\right)^{2}\ \, \tag{2}\] where the halo number density is \(n_{i}=N_{h,i}/V_{\rm box}\) \(\left[h^{-1}{\rm Mpc}\right]^{-3}\) at the mass bin \(i\), and \(N_{h,i}\) is the number of haloes in that mass bin. We finally obtain the linear bias for each halo mass bin considering \(\chi^{2}_{\rm min}\) with a \(1\sigma\) confidence interval given by \(\Delta\chi^{2}=1\). In Figure 1 we represent the halo mass and the halo bias functions for the OuterRim and UNIT simulations. Higher-mass haloes are less frequent and they also have a higher bias. For both simulations we fit the resulting bias function with a fifth order polynomial (represented by a red dashed line in Figure 1). The polynomial fit is able to encapsulate the simulation measurements within \(1\sigma\), except for the two most massive bins of the UNIT simulation. Nevertheless, we expect this effect to be negligible, as very few haloes are found within those two massive bins. ### Observational data In this work we generate mock catalogues with the number density and linear bias fixed to the Emission-Line Galaxies (ELG) from the extended Baryon Oscillation Spectroscopic Survey (eBOSS) from DR16 of the Sloan Digital Sky Survey (SDSS) IV (Dawson et al., 2016; Ross et al., 2020; Alam et al., 2021). These galaxies have redshifts in the range \(0.6<z<1.1\).
eBOSS ELGs are mostly star-forming galaxies with strong spectral emission lines that enable a fast and reliable determination of their redshift (Raichoor et al., 2020). We use HOD models to populate the UNIT and OuterRim simulations with galaxies; these simulations assume different cosmologies (see Table 1). Below we explain how we obtain the needed number density and linear bias from the observations. #### 2.2.1 Number density The number density, \(n_{\rm{gal}}\), is the total number of observed galaxies per observed volume. To take into account the incompleteness due to the photometric target selection, redshift failures and other effects such as fiber collision, a weight is associated with each galaxy (Raichoor et al., 2020; Ross et al., 2020). In order to compute the total number of observed galaxies we take into account those weights. Since the comoving survey volume depends on the assumed cosmology, \(n_{\rm{gal}}\) varies with cosmology. The survey volume is calculated as follows: \[V_{\rm{eBOSS}}=\frac{1}{3}\left(\chi(z_{\rm{max}})^{3}-\chi(z_{\rm{min}})^{3}\right)\cdot A_{\rm{eff}}\left(\frac{\pi}{180{\rm{deg}}}\right)^{2}\, \tag{3}\] where \(\chi(z)\) is the comoving distance, which also depends on cosmology. \(A_{\rm{eff}}=A_{N}+A_{S}\), where \(A_{N(S)}=369.451\,(357.546)\ {\rm{deg}}^{2}\) are the effective areas of the North (South) Galactic caps covered by eBOSS. Finally, \(z_{\rm{min}}=0.6\) and \(z_{\rm{max}}=1.1\) are the minimum and maximum redshifts of the observed ELGs considered in the LSS catalogues (Raichoor et al., 2020). We compute the volumes of the different galactic caps and the total volume of the eBOSS ELGs: \(V_{\rm{N}}=0.425(0.467)\ \left[h^{-1}{\rm{Gpc}}\right]^{3}\), \(V_{\rm{S}}=0.411(0.452)\ \left[h^{-1}{\rm{Gpc}}\right]^{3}\) and \(V_{\rm{eBOSS}}=0.836(0.919)\ \left[h^{-1}{\rm{Gpc}}\right]^{3}\) considering the UNIT (OuterRim) cosmologies. We obtain \(n_{\rm{gal}}=2.406\cdot 10^{-4}\left[{\rm{Mpc}}^{-1}h\right]^{3}\) for UNIT and \(n_{\rm{gal}}=2.187\cdot 10^{-4}\left[{\rm{Mpc}}^{-1}h\right]^{3}\) for OuterRim cosmologies (Table 1). #### 2.2.2 Linear Bias The linear bias, \(b_{\rm{gal}}\), can be defined as the ratio between the overdensity of galaxies and the overdensity of dark matter at large scales. We calculate the bias using the Kaiser factor (Kaiser, 1987) to relate the observed galaxies and the dark matter monopoles \(\xi_{0}\) while accounting for Redshift Space Distortions (RSD). \[\xi_{0}(s)=\left(b_{\rm{gal}}^{2}+\frac{2}{3}b_{\rm{gal}}f+\frac{1}{5}f^{2}\right)\cdot\xi_{\rm{th}}(s), \tag{4}\] where we use the approximation \(f(z)=(\Omega_{\rm{M}}(z))^{0.545}\) (Peebles, 1980; Linder, 2005) for the growth rate of structure, evaluated at \(z_{\rm{eff}}=0.845\), and \(\xi_{\rm{th}}\) is the monopole of the matter two-point correlation function. For both simulations we consider the range \(15<s<75\ h^{-1}{\rm{Mpc}}\), with linear binning \(\Delta s=5\ h^{-1}{\rm{Mpc}}\). For the UNIT simulation we calculate the monopole of the dark matter particles using 0.5 per cent of the total particles1. For the OuterRim simulation, we use the monopole provided by CAMB linear theory evaluated at the OuterRim cosmology, since we do not have enough information on the particles. The monopoles from CAMB linear theory and from the UNIT dark matter particles are compatible for scales \(s=15-75\ h^{-1}{\rm{Mpc}}\), with errors below 4 per cent.
Footnote 1: The differences between using 0.05 and 0.5 per cent of the total number of particles are below 3 per cent in this range Finally, for the UNIT simulation we need to include the simulation and survey volume ratio to correct the amplitude of the precision matrix in the \(\chi^{2}\) function. We compute the \(\chi^{2}\) as follows: \[\chi^{2}(b)=\sum_{s,s^{\prime}}\left(\xi_{0}(s)-\xi_{0,d}(s)\right)^{T}C^{-1}(s,s^{\prime})\left(\xi_{0}(s^{\prime})-\xi_{0,d}(s^{\prime})\right)\ \, \tag{5}\] with \(V_{\rm{box}}\) the volume of the simulation and \(C^{-1}(s,s^{\prime})\) the inverse of the covariance (precision) matrix. The covariance matrix is calculated using \(N_{\rm{EZ}}=1000\) Effective Zeldovich (EZ) Mocks (Zhao et al., 2021) for both simulations. The bias is found by minimising the \(\chi^{2}\), with a \(1\sigma\) confidence interval computed using \(\Delta\chi^{2}=1\). We obtain \(b_{\rm{gal}}=1.37\pm 0.03\) for UNIT and \(b_{\rm{gal}}=1.33\pm 0.02\) for OuterRim cosmologies. ## 3 Model galaxies Halo Occupation Distribution (HOD) models populate dark matter haloes at a given redshift with galaxies using analytical equations that relate the probability of finding a galaxy of a certain type with the mass of the host halo. We distinguish between two types of galaxies: centrals and satellites. Central galaxies are placed at the center of host haloes and share their velocity. Due to this definition, a particular halo will not host more than one central galaxy. On the other hand, satellite galaxies do not have to share the position and velocity of their host halo. They are associated with haloes following a given radial and velocity profile. For a generic HOD model, equations are chosen to describe the following properties: * Shape of the mean Halo Occupation Distribution (HOD) (§ 3.1). * Probability distribution function (PDF) for both the number of central and satellite galaxies within a halo (§ 3.2). We will extend the satellite PDF in Section 4. * Spatial and velocity distribution of satellites within haloes (§ 3.3). ### Mean Halo Occupation Distribution The shape of the mean HOD parametrises how many galaxies will be hosted, on average, by a halo of a given mass. Usually, two analytical expressions for the shape of the mean HOD distribution are used: one for satellites and another for centrals. Those expressions for the mean distribution of galaxies can be used to calculate the total number density and bias of the galaxy catalogue resulting from an HOD, given we know the halo mass function and halo bias functions (see Subsection 2.1): \[n_{\rm gal}=\int\frac{dn(M)}{d\log M}\left\{\left<N_{\rm cen}(M)\right>+\left<N_{\rm sat}(M)\right>\right\}d\log M \tag{6}\] \[b_{\rm gal}=\frac{1}{n_{\rm gal}}\int\frac{dn(M)}{d\log M}b(M)\left\{\left<N_{\rm cen}(M)\right>+\left<N_{\rm sat}(M)\right>\right\}d\log M \tag{7}\] We can also calculate the fraction of satellites as follows: \[f_{\rm sat}=\frac{1}{n_{\rm gal}}\int\frac{dn(M)}{d\log M}\left<N_{\rm sat}(M)\right>d\log M\ . \tag{8}\] We fix the number density of galaxies and the galaxy bias to the observed values of eBOSS ELG data calculated in Subsection 2.2. The fraction of satellites is set as a free parameter. In this work we follow the assumptions presented in Avila et al. (2020) for the modelling of satellite and central galaxies. Central galaxies follow an asymmetric Gaussian distribution. This description is motivated by semi-analytical models of galaxy formation and evolution (e.g. Gonzalez-Perez et al., 2018).
For central galaxies we assume the following shape: \[\left<N_{\rm cen}(M)\right>=\frac{A_{c}}{\sqrt{2\pi}\sigma}\times\left\{\begin{array}{ll}e^{-\frac{(\log M-\mu)^{2}}{2\sigma^{2}}}&\log M\leq\mu\\ \left(\frac{M}{10^{\mu}}\right)^{\gamma}&\log M\geq\mu\end{array}\right.\ \, \tag{9}\] where \(M\) is the halo mass and \(A_{c}\) determines the amplitude of the Gaussian, which has mean \(\mu\) and variance \(\sigma^{2}\). For \(\log M\geq\mu\), \(\gamma<0\) controls the sharpness of the decaying power law. For satellite galaxies, we assume they follow a power-law: \[\left<N_{\rm sat}(M)\right>=A_{s}\cdot\left(\frac{M-M_{0}}{M_{1}}\right)^{\alpha}\ \ \ \ M>M_{0}\ \, \tag{10}\] where \(A_{s}\) controls the fraction of satellites and \(\alpha>0\) sets how steep the power law is. The average number of satellites is zero if \(M\leq M_{0}\) and it increases as the halo mass increases starting from this point. If \(A_{s}=1\) and \(M_{0}\ll M_{1}\), \(M_{1}\) represents the mass of haloes in which we expect one satellite galaxy on average. In this work we fix the mass parameters \(M_{0}\) and \(M_{1}\) with a relation to \(\mu\) given by \(\log\left(\rm M_{0}\right)=\mu-0.05\) and \(\log\left(\rm M_{1}\right)=\mu+0.35\) (Gonzalez-Perez et al., 2018). We can see the shape of the average number of satellites per halo mass in the black solid line of Figure 2. HOD parameters \(\alpha\), \(\sigma\) and \(\gamma\) are fixed throughout this work to the values shown in Table 2. \(\mu,A_{c},A_{s}\) are determined by the number density \(n_{\rm gal}\), the linear bias \(b_{\rm gal}\) from eBOSS ELG observations and also the fraction of satellites \(f_{\rm sat}\) (a free parameter of our model), using Equation 6, Equation 7 and Equation 8. We define a default HOD in Table 2 that will be used several times in this work, in which we fix \(n_{\rm gal}=6\cdot n_{\rm eBOSS}\), \(b_{\rm gal}=1.37\) and \(f_{\rm sat}=0.3\). Figure 1: _Left_: at the top we represent the UNIT halo mass function, in the center the halo bias function (black points) with \(1\sigma\) confidence interval calculated from \(\Delta\chi^{2}(b)=1\) and a fifth order polynomial (red dashed line) fitting the halo bias function. At the bottom we show the ratio between the polynomial and the halo bias function. _Right_: same for the OuterRim simulation. We find around 1 per cent differences in the halo bias function, when comparing the OuterRim case with Figure 1 of Avila et al. (2020). ### Probability distribution function In order to place galaxies into haloes we need to use a discrete probability distribution function (PDF) that will determine how many galaxies of a given type, \(N\), we place given a mean number \(\left<N\right>\) determined by the mean HOD described above. For all discrete probability distribution functions considered in this work, the number of central or satellite galaxies each halo will host is determined by drawing a random number \(\theta\in[0,1)\). Then, using the cumulative probability distribution function \(P_{\mathrm{C}}(N)=\sum_{x=0}^{N}P(x)\), we determine the value \(\zeta\) such that \(P_{\mathrm{C}}(N=\zeta)<\theta\) and \(P_{\mathrm{C}}(N=\zeta+1)>\theta\). Then, the number of central or satellite galaxies hosted by the halo is \(N=\zeta\). In the case of galaxy centrals, the Nearest Integer Distribution is always used as haloes can host either one central galaxy or none (for mean values between 0 and 1, the Nearest Integer distribution is identical to a Bernoulli distribution).
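The sampling step just described amounts to inverting the cumulative distribution of whichever PDF is chosen. The following is a minimal Python sketch (the production code is not reproduced here, and the function names are illustrative); it uses the standard inverse-CDF convention of returning the first value whose cumulative probability exceeds the random draw, which may differ slightly in indexing from the \(\zeta\) convention above, and uses the Poisson PMF of Equation 12 as a stand-in example.

```python
import math
import numpy as np

def sample_occupation(pmf, mean, rng, n_max=200):
    """Draw a number of galaxies for one halo by inverting the cumulative PDF.
    pmf(n, mean) must return P(N = n) for the chosen discrete distribution."""
    theta = rng.random()                 # random number in [0, 1)
    cumulative = 0.0
    for n in range(n_max):
        cumulative += pmf(n, mean)
        if cumulative > theta:           # first n whose cumulative probability exceeds theta
            return n
    return n_max                         # safety cap, essentially never reached

def poisson_pmf(n, lam):
    # Stand-in PDF: Poisson with mean lam.
    return math.exp(-lam) * lam**n / math.factorial(n)

rng = np.random.default_rng(42)
draws = [sample_occupation(poisson_pmf, 0.7, rng) for _ in range(10000)]
print(np.mean(draws))   # should be close to the input mean, ~0.7
```

Swapping `poisson_pmf` for any of the PDFs discussed below leaves the sampling step unchanged.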
In the case of satellite galaxies, PDFs with different variances have been used in the literature: the Poisson distribution, the standard case for most HOD models (§ 3.2.1), super-Poisson through the Negative Binomial distribution (NB, § 3.2.2) and sub-Poisson. So far in the literature, for this last case a Nearest Integer (NI) Distribution (§ 3.2.3) has been used. However, this involves a single value for the sub-Poissonian variance. We will introduce in Section 4 two more PDFs that increase the flexibility when assigning satellites to dark matter haloes: the Binomial distribution (§ 4.1) and an extended Binomial distribution (§ 4.2). This is the central focus of this work. Some examples of non-Poisson PDFs in the literature are described here. Jimenez et al. (2019) use a super-Poisson PDF for satellites, considering a Negative Binomial distribution (NB). This was motivated by the modelling of star-forming galaxies. Although a Nearest Integer (NI) distribution is typically used only for centrals, Berlind et al. (2003) found NI to be in better agreement with galaxies selected from Smoothed Particle Hydrodynamics (SPH) and Semi-Analytic models, independently of whether they are centrals or satellites. Several best fits on eBOSS data found in Avila et al. (2020) also show a preference for NI and NB. However, there is a significant gap in the variance of the PDF from the Poisson distribution to the only sub-Poisson function considered (NI), motivating the modelling of a continuous distribution. In this work we quantify the deviations of the PDF variance with respect to that of a Poisson distribution by the parameter \(\omega_{\sigma}\): \[\sigma^{2}\equiv\lambda\left(1+\omega_{\sigma}\lambda\right) \tag{11}\] The parameter \(\omega_{\sigma}\) is defined differently for super and sub-Poisson variances, as detailed below and in Section 4. Equation 11 is adequate for a range of variances that can be continuous. Nevertheless, as detailed in Subsubsection 3.2.3, the Nearest Integer distribution is the only distribution considered in this work that has a single variance and its expression differs from the above equation. #### 3.2.1 Poisson Distribution The most commonly used probability distribution to determine how many satellite galaxies are placed in a particular halo of a given mass is the Poisson distribution. A Poisson PDF can be written as follows for a halo with an average number of satellite galaxies \(\lambda=\langle N_{\mathrm{sat}}\rangle\): \[P(N_{\mathrm{sat}};\lambda)=\frac{e^{-\lambda}\lambda^{N_{\mathrm{sat}}}}{N_{\mathrm{sat}}!} \tag{12}\] For this distribution, the variance is equal to the mean \(\sigma^{2}=\lambda=\langle N_{\mathrm{sat}}\rangle\). The shaded black region in Figure 2 represents the theoretical Poisson variance for a particular HOD model, and the green lines represent the observed variance after 20 realizations of galaxy catalogues, which are computed with a Poisson distribution. As expected, the black shaded region and green lines are in agreement. #### 3.2.2 Super-Poisson distribution: negative binomial The probability of getting an integer random variable, \(N_{\mathrm{sat}}\), for the negative binomial (NB) PDF can be written as follows: \[\mathrm{NB}(\mathrm{N_{sat}};\mathrm{p},\mathrm{q})=\frac{\Gamma(\mathrm{N_{sat}}+\mathrm{q})}{\Gamma(\mathrm{q})\Gamma(\mathrm{N_{sat}}+1)}\mathrm{p}^{\mathrm{q}}\left(1-\mathrm{p}\right)^{\mathrm{N_{sat}}}, \tag{13}\] where \(q\) traditionally describes the number of successes, hence it is defined as a natural number.
This quantity, \(q\), can be naturally extended to positive real numbers despite losing its original meaning: \(q\in\mathbb{R}^{+}\). \(0<p<1\) is the probability of success and \(N_{\mathrm{sat}}\) the number of failures, which is the random variable of this PDF. In that traditional interpretation \(N_{\mathrm{sat}}+q-1\) would represent the total number of trials, from which \(q\) can inherit the nature of a free parameter. In the context of HOD models, \(N_{\mathrm{sat}}\) is the number of satellite galaxies, and \(p\) and \(q\) are the PDF parameters that determine its mean, \[\lambda\equiv\langle N_{\mathrm{sat}}\rangle=\frac{pq}{1-p}\, \tag{14}\] and variance, \[\sigma^{2}=\lambda\left(1+\frac{\lambda}{q}\right)\equiv\lambda\left(1+\omega_{\sigma}\lambda\right). \tag{15}\] In this case, the parameter that controls the variance deviations with respect to Poisson, \(\omega_{\sigma}\), is defined as \(\omega_{\sigma}=\frac{1}{q}>0\). In the limit \(q\rightarrow\infty\) we recover the Poisson distribution, corresponding to \(\omega_{\sigma}=0\). Equation 11 also represents the variance of the binomial distribution (see Subsection 4.1) and the extended binomial distribution (see Subsection 4.2) for \(\omega_{\sigma}<0\). As we will see in Subsection 4.1, our code is implemented with a free parameter \(\omega\), which is meant to follow the behaviour of \(\omega_{\sigma}\) in Equation 11; however, in some parts of the parameter space of the sub-Poissonian distributions this is not possible. We represent the \(1\sigma\) contours obtained using the negative binomial distribution with \(\omega=0.5\)2 as blue lines in Figure 2, computed using 20 galaxy catalogue realizations. Footnote 2: Note that we use \(\omega\) instead of \(\omega_{\sigma}\), since we are referring to the input for the HOD model, which in principle follows the behaviour of \(\omega_{\sigma}\) except in a small region of the parameter space (see Subsubsection 4.1.1) The negative binomial distribution used in Jimenez et al. (2019) and Avila et al. (2020) had a slightly different parametrization of the variance, with \(\sigma^{2}=\lambda\left(1+\beta\right)\) where \(\beta=\omega_{\sigma}\lambda\). #### 3.2.3 Sub-Poisson distribution: Nearest Integer The probability of getting an integer random variable, \(N_{\mathrm{sat}}\), with \(\lambda=\langle N_{\mathrm{sat}}\rangle\), for the Nearest Integer (NI) PDF is: \[\mathrm{NI}(\mathrm{N_{sat}};\lambda)=\left\{\begin{array}{ccc}1-\left(\lambda-\mathrm{trunc}(\lambda)\right)&\mathrm{if}&\mathrm{N_{sat}}=\mathrm{trunc}(\lambda)\\ \lambda-\mathrm{trunc}(\lambda)&\mathrm{if}&\mathrm{N_{sat}}=\mathrm{trunc}(\lambda)+1\\ 0&\mathrm{otherwise}\end{array}\right., \tag{16}\] in which \(\mathrm{trunc}(\lambda)\) is the closest lower integer to \(\lambda\). The variance of this distribution is: \[\sigma^{2}=\lambda^{\prime}\left(1-\lambda^{\prime}\right)\, \tag{17}\] with \(\lambda^{\prime}=\lambda-\mathrm{trunc}(\lambda)\), which is the smallest possible variance. This PDF was the only sub-Poisson PDF considered in Avila et al. (2020), with the drawback that there is only one possible variance for each value of \(\lambda\). This implies that \(\sigma^{2}\) is not an independent parameter from \(\lambda\), in contrast with the Negative Binomial case, which was already used in Avila et al. (2020) as a super-Poisson PDF (Equation 15).
We represent the Nearest Integer \(1\sigma\) deviations by red solid lines in Figure 2, computed using 20 galaxy catalogue realizations. The Nearest Integer Distribution is also represented by a black dashed line in Figure 4. We note that in the case when \(0<\langle N\rangle<1\), the NI function is identical to the Bernoulli function, often quoted in HOD models. ### Spatial and velocity distributions of satellites In this work, positions of satellite galaxies within haloes are assigned following a Navarro, Frenk & White (NFW) radial profile (Navarro, Frenk & White, 1997). The velocity distribution of satellite galaxies is calculated considering the virial theorem, following Bryan & Norman (1998). The implementation of these two components is described in detail in Avila et al. (2020). ## 4 Extensions to the satellite PDF So far, in the literature, sub-Poisson variances for satellite galaxies have only been described with a Nearest Integer (NI) PDF (Berlind et al., 2003; Zheng et al., 2005; Jimenez et al., 2019; Avila et al., 2020). The NI PDF only admits one possible variance, which is the smallest possible one. This limits the parameter space that can be explored to find the best description of galaxies. In this section we introduce the extensions proposed in this work to sample sub-Poisson variances continuously. We also propose a solution to mitigate numerical errors affecting non-Poisson distributions in general, derived from \(\Gamma\) functions with large arguments. ### Binomial distribution The binomial distribution, B, provides a discrete range of possible sub-Poissonian variances. For this distribution, the probability of getting a certain number of satellite galaxies in a halo, \(N_{\rm sat}\), which is the random variable of the PDF, is given by: \[{\rm B}({\rm N_{sat}};{\rm q},{\rm p})=\frac{\Gamma\left({\rm q}+1\right)}{\Gamma\left({\rm N_{sat}}+1\right)\Gamma\left({\rm q}+1-{\rm N_{sat}}\right)}\,{\rm p}^{{\rm N_{sat}}}\left(1-{\rm p}\right)^{{\rm q}-{\rm N_{sat}}}\,, \tag{18}\] with the parameter \(q\in\mathbb{N}\) defined traditionally as the number of trials, which is the maximum number of satellites that can be placed with non-zero probability, and \(0<p<1\) the probability of success, i.e. of actually placing a satellite galaxy in a halo. The possible number of satellite galaxies cannot exceed the number of trials, \(N_{\rm sat}\leq q\). Unlike in the negative binomial distribution, the possible values of \(q\) cannot be extended to real numbers, since negative probabilities could arise. Given the parameters \(p\) and \(q\), the mean of the binomial distribution, \({\rm B}({\rm N_{sat}};{\rm q},{\rm p})\), is: \[\lambda\equiv\langle N_{\rm sat}\rangle=qp\,. \tag{19}\] As we can see in Figure 2 and Equation 10, in our model, the mean number of satellite galaxies \(\lambda=\langle N_{\rm sat}(M)\rangle\) increases with the halo mass, \(M\). The variance of the binomial distribution is: \[\sigma^{2}=\lambda\left(1-\frac{\lambda}{q}\right)\equiv\lambda\left(1+\omega_{\sigma}\lambda\right)\;; \tag{20}\] Here we define \(\omega_{\sigma}\equiv-\frac{1}{q}\) as the parameter to control the variances lower than Poisson 3, following what we have previously done for the negative binomial distribution, NB. Unlike the Nearest Integer distribution, the binomial distribution has sub-Poissonian variances that not only depend on the mean number of satellites, \(\langle N_{\rm sat}\rangle\), but also on \(\omega_{\sigma}\).
As for the negative binomial distribution, in the limit \(q\rightarrow\infty\) we recover the Poisson distribution (\(\omega_{\sigma}=0\)). Footnote 3: Avila et al. (2020) and Jiménez et al. (2019) use another parametrization of the variance: \(\sigma^{2}=\lambda\left(1+\beta\right)\) when they consider the negative binomial distribution. In the context of the binomial distribution, the possible values of \(\beta\) are limited to \(-\beta<\lambda(M)\), since there are no mathematically possible variances lower than the Nearest Integer variance: \(\sigma^{2}=\lambda\left(1-\lambda\right)\), in which \(\lambda\) is an increasing function of the halo mass. Then, \(\beta\) is not a proper parameter to parametrize binomial variances for the entire HOD catalogue with \(\langle N(M)\rangle\), since its values have a halo-mass dependent limitation. We use instead \(\omega_{\sigma}=\beta/\lambda\). Now, \(\omega_{\sigma}\) has a constant limit from below \((-\omega_{\sigma}<1)\) for all halo masses. The variance of the distribution has exactly the same expression as in Equation 11. However, since for the binomial distribution \(q\in\mathbb{N}\), the only possible input values of \(\omega_{\sigma}\) are discrete: \(\omega_{\sigma}\in\left[-1,-\frac{1}{2},-\frac{1}{3},\ldots,-\frac{1}{\infty}=0\right)\). We extend this range by introducing a new PDF in Subsection 4.2. \begin{table} \begin{tabular}{c c c c c c} \(\mu\) & \(A_{c}\) & \(A_{s}\) & \(\alpha\) & \(\sigma\) & \(\gamma\) \\ \hline 11.648 & 0.0368 & 0.03583 & 0.9 & 0.12 & \(-1.4\) \\ \end{tabular} \end{table} Table 2: Default mean HOD used in this work. The parameters \(\mu\), \(A_{c}\) and \(A_{s}\) are obtained considering \(f_{\rm sat}=0.3\), \(b_{\rm gal}=1.37\) and \(n_{\rm gal}=6\cdot n_{\rm eBOSS}\), in the context of the UNIT simulation. Figure 2: Mean and standard deviation of satellite galaxies as a function of halo mass \(M\). The solid black line shows the mean default HOD, and the black shade is the theoretical \(1\,\sigma\) contour considering a Poisson distribution (\(\sigma=\sigma_{\rm p}\)). Solid colored lines represent the measured \(1\sigma\) standard deviation around the mean, for different values of \(\omega\). The skewness of the binomial distribution has the following expression: \[k_{3}=\frac{\left(1-\frac{2\lambda}{q}\right)}{\sigma}\,. \tag{21}\] We have provided expressions for the first three moments of the binomial distribution: the mean (Equation 19), the variance (Equation 20), and the skewness (Equation 21). These first three moments are properly defined with the above equations for \(q\geq 1\), \(q\geq 2\) and \(q\geq 3\), respectively. That is, if \(q<k\), the \(k\)-th moment of the distribution may follow another expression. So far, we have introduced \(\omega_{\sigma}\) to parametrize super-Poisson (Subsection 3.2.2) and sub-Poisson (Subsection 4.1) variances. Now we need to introduce \(\omega\), which will be the parameter that is input to the HOD model to control the variance of the satellite PDF. This parameter follows the behaviour of \(\omega_{\sigma}\); however, for some regions of the parameter space \(\left(\lambda>1,\omega<-\frac{1}{\lambda}\right)\) negative probabilities arise from Equation 18 for certain \(N_{\text{sat}}\) (see red shaded area in Figure 3). We describe and address this limitation below.
#### 4.1.1 Extension to avoid unphysical negative probabilities Since the mean number of satellites increases with halo mass, massive enough haloes will be able to host several satellite galaxies, \(\left<N_{\text{sat}}\right>>1\). The exact number of haloes that fulfill \(\left<N_{\text{sat}}\right>>1\) also depends on other HOD parameters, such as the fraction of satellites \(f_{\text{sat}}\). If we consider our default HOD presented in Table 2, which corresponds to a fraction of satellites \(f_{\text{sat}}=0.3\), we have that \(3.4\) (\(11.2\)) per cent of all galaxies (satellite galaxies) are attached to haloes with \(\left<N_{\text{sat}}\right>>1\). For those haloes that contain, on average, one or more satellite galaxies, if \(\omega<-1/\lambda\) (we recall that \(\omega\) is related to the variance) the PDF becomes negative and the expression of the binomial variance described by Equation 20 will give negative values of the variance, \(\sigma^{2}<0\) (red area in Figure 3). To avoid unphysical cases with negative probabilities the HOD model must correct the input value of \(\omega\) for those haloes, enhancing the variance only by what is strictly necessary: \[\omega_{\sigma}(M)=\left\{\begin{array}{ll}-\frac{1}{\mathrm{trunc}(\left<N_{\mathrm{sat}}(M)\right>)+1}&\mathrm{if}\;\;\omega<-\frac{1}{\mathrm{trunc}(\left<N_{\mathrm{sat}}(M)\right>)+1}\\ \omega&\mathrm{otherwise}\end{array}\right. \tag{22}\] in which \(\omega\) is the input parameter for the HOD model used for the entire catalogue, and \(\omega_{\sigma}\) is the value that will be finally used for a given halo mass \(M\). We will use \(\omega_{\sigma}\) by default throughout this paper, except when referring to the input parameter of the code. In this way, it is guaranteed that we always recover the correct mean number of satellites and a positive variance when we populate haloes with satellites. We represent in Figure 2 the mean number of satellites \(\lambda=\lambda(M)\) as a black solid line. We also focus on the orange solid lines, which represent the \(\lambda\pm 1\sigma\) contours, in which \(\sigma\) follows Equation 20 for \(\omega=-0.5\). Those lines are computed using \(20\) galaxy catalogue realizations. In Figure 3, we represent the parameter space \(\left\{q=-\frac{1}{\omega_{\sigma}},\lambda\right\}\). We show the limit when the PDF starts to be negative (red line), together with the limitation we have set in the parameter space (green staggered line). Above the green staggered line we do not need corrections in the parameter space and have \(\omega_{\sigma}=\omega\) (green area and blue area, the latter representing \(\lambda<1\)). Below the red line we represent the region in which corrections have to be made: \(\omega_{\sigma}=-\frac{1}{\text{trunc}(\left<N_{\text{sat}}\right>)+1}\) (red area). Despite green and red regions being filled continuously, \(q\) is an integer. Therefore, the region between red and green lines is not populated by the binomial distribution. Figure 4 shows a set of binomial variances given by different input values of \(\omega\) and mean number of satellite galaxies, \(\lambda=\left<N_{\text{sat}}\right>\). They are compared with Poisson and Nearest Integer variances. As we can see, binomial variances are lower than the Poisson variance and higher than the Nearest Integer one: for low \(\lambda\) we see the inverted arc shape, which is the natural shape of Equation 20.
For high \(\lambda\) there is a serrated behaviour arising from the application of Equation 22 to avoid the unphysical negative values of the variance. This serrated behaviour is directly related to the green ladder in Figure 3. ### Extended Binomial distribution As we described in Subsection 4.1, \(\omega_{\sigma}\) is limited to discrete values in the binomial distribution, since \(\omega_{\sigma}=-1/q\) and the range of \(q\) is limited to natural numbers. Hence, the variance \(\sigma^{2}\) for a particular \(\left<N_{\text{sat}}\right>\), given by Equation 11, is also limited to discrete values. In this section our objective is to provide an extension to continuous values of \(\omega_{\sigma}\), allowing the variance to take continuous values for a given \(\left<N_{\text{sat}}\right>\). For this purpose, we define a new probability distribution function, the extended binomial distribution \(B_{\text{ext}}\): \[B_{\text{ext}}\left(N_{\text{sat}};\lambda,\omega_{\sigma}\right)=f_{\text{q}}\left(N_{\text{sat}};\lambda,\omega_{\sigma}\right)\cdot B\left(N_{\text{sat}};p,q\right)\;\;\;, \tag{23}\] where \(B\left(N_{\text{sat}};p,q\right)\) is the binomial distribution. \(f_{\text{q}}\left(N_{\text{sat}};\lambda,\omega_{\sigma}\right)\) is introduced to extend our range of sub-Poissonian variances since our input parameter \(\omega_{\sigma}\) can now take continuous values: \(-1\leq\omega_{\sigma}<0\). We have that \(q\equiv\text{ceil}\left(-1/\omega_{\sigma}\right)\), in which \(\text{ceil}(x)\) is the closest upper integer to \(x\) (we justify this relation between \(q\) and \(\omega_{\sigma}\) in appendix A1). Note that \(q\in\mathbb{N}\) as required. Figure 3: The parameter space \(\left(\lambda,q\equiv-1/\omega_{\sigma}\right)\) of the binomial distribution. Considering the default HOD model, 88.8 per cent of satellites reside in the blue region, in which the binomial distribution does not give unphysical negative variances. The points leading to unphysical negative variances cover the red region below the red line, including it. The red region above the red line is out of the parameter space, since \(q\) is an integer number. We use the ladder \(q=-1/\omega_{\sigma}=\text{trunc}\left(\left<N_{\text{sat}}\right>\right)+1\) as the preferred boundary for corrections, ensuring: 1) the minimum effective variance to be positive and 2) \(\left<N_{\text{sat}}\right>\) to be recovered. We show an example of correcting a point in the red region of the parameter space (\(\lambda=2.5\), \(\omega=-\frac{1}{2}\)) to the nearest point in the green region with the same mean \(\left<N_{\text{sat}}\right>\) (\(\lambda=2.5\), \(\omega_{\sigma}=-\frac{1}{3}\)).
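The correction of Equation 22 and the mapping from \(\omega_{\sigma}\) to the binomial parameters \((q,p)\) are simple enough to sketch directly. The following Python snippet is illustrative only (the function names, and the small tolerance used when taking the ceiling, are choices of this sketch, not part of the published code); it reproduces the Figure 3 example.

```python
import math

def correct_omega(omega, mean_nsat):
    """Equation 22: cap the input omega so the binomial variance stays positive.
    The cap depends on the mean number of satellites of the halo."""
    limit = -1.0 / (math.trunc(mean_nsat) + 1)
    return limit if omega < limit else omega

def binomial_parameters(omega_sigma, mean_nsat):
    """Map (omega_sigma, lambda) to the (q, p) of the binomial used by B_ext:
    q = ceil(-1/omega_sigma) and p = lambda/q (Equation 19).
    A tiny tolerance absorbs floating-point round-off when omega_sigma = -1/q."""
    q = math.ceil(-1.0 / omega_sigma - 1e-9)
    return q, mean_nsat / q

# Example: a halo with <N_sat> = 2.5 and an aggressive input omega = -0.5
# is corrected to omega_sigma = -1/3, as in the Figure 3 example.
omega_sigma = correct_omega(-0.5, 2.5)
print(omega_sigma)                            # -0.333...
print(binomial_parameters(omega_sigma, 2.5))  # (3, 0.833...)
```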
\(f_{\mathrm{q}}\left(N_{\mathrm{sat}};\lambda,\omega_{\sigma}\right)\) has to be calculated separately for each \(q\) solving \(q+1\) equations: a normalization equation (such that the PDF sums to unity), an equation for the mean number of satellites \(\lambda\) and \(q-1\) equations for the subsequent central moments. We solve for \(q=1\) (trivial case, \(\omega_{\sigma}=-1\)), \(q=2\left(\omega_{\sigma}\in\left(-1,-\frac{1}{2}\right]\right)\) and \(q=3\) (\(\omega_{\sigma}\in\left(-\frac{1}{2},-\frac{1}{3}\right]\)), by solving the corresponding equations. For \(q\geq 2\), we can choose the variance to have the expression of Equation 114. For \(q\geq 3\), we choose the skewness to match the expression given by Equation 215. For \(q\geq 4\), one would need to specify the following higher order central moments, becoming increasingly complicated. In appendix A we propose a generalisation of our \(q=2\) and \(q=3\) solutions for \(q\geq 4\). We find this generalisation to work well in most of the parameter space, but we also find a small part of the parameter space in which negative probabilities arise (\(f_{\mathrm{q}}<0\)). This has a very small impact in our parameterisation as we will see in Section 5. Footnote 4: The expression of the variance for \(q=1\), which cannot be imposed, coincides with Equation 11, see appendix A Footnote 5: The expression of the skewness for \(q=1\) and \(q=2\) cannot be imposed, and for \(q=2\), it does not coincide with Equation 21 (see gray region of Figure 6 and appendix A) Note that the binomial distribution is a particular case of \(B_{\mathrm{ext}}\), with \(\omega_{\sigma}=-1/q\). (See Subsection 4.1). In those cases \(f_{\mathrm{q}}=1\). ### Mitigating numerical limitations of the satellite PDF \(\Gamma\) functions are used in the negative binomial distribution, Equation 13, the binomial distribution, Equation 18, and its extension, Equation 23. Numerical errors for these \(\Gamma\) functions may arise for very large arguments: for \(\Gamma(y\geq y_{\mathrm{Tlin}})\) an overflow is produced. Then, the implementation of the probability distribution function fails in the task to place satellite galaxies in the haloes, obtaining a galaxy catalogue without satellites, thus decreasing the input number density of galaxies. \(\Gamma(y\geq y_{\mathrm{Tlin}})\) imposes a limitation on the parameter space \((q,N_{\mathrm{sat}})\): \[y=q+g(N_{\mathrm{sat}})<y_{\mathrm{max}}\,, \tag{24}\] where \(g(N_{\mathrm{sat}})\) is the remaining \(\Gamma\) argument. In this work, we compute \(\Gamma\) functions using tgamma from the \(math.h\) library in C. With this particularities, \(y_{\mathrm{max}}=171.7\). Since \(q\) is inversely proportional to \(\omega_{\sigma}\), Equation 24 sets a lower limit in \(\omega_{\sigma}\). As the \(\Gamma\) functions enter in both Equation 13 and Equation 18 as a division, we define the following function: \[Gm(N_{\mathrm{sat}},q)\equiv\frac{\Gamma(q+h(N_{\mathrm{sat}}))}{\Gamma(q+g(N _{\mathrm{sat}}))}\ \ ;\ \ h(N_{\mathrm{sat}})\geq g(N_{\mathrm{sat}})\ . \tag{25}\] The above function fails for \(\Gamma(y\geq y_{\mathrm{Tlin}})\) when Equation 24 is not satisfied due to numerical error. 
To avoid this we propose to use the product function, which is mathematically equivalent to Equation 25 and does not fail when Equation 24 is not satisfied: \[Pr(N_{\mathrm{sat}},q)\equiv\prod_{i=0}^{h(N_{\mathrm{sat}})-g(N_{\mathrm{sat}})}\left(i+q+g(N_{\mathrm{sat}})\right)=Gm(N_{\mathrm{sat}},q) \tag{26}\] With this substitution, we can rewrite the binomial and negative binomial with products as follows: \[\mathrm{NB}_{\Pi}(N_{\mathrm{sat}};\mathrm{p},\mathrm{q})=\frac{\prod_{i=0}^{N_{\mathrm{sat}}}\left(i+\mathrm{q}\right)}{\Gamma(N_{\mathrm{sat}}+1)}\mathrm{p}^{\mathrm{q}}\left(1-\mathrm{p}\right)^{N_{\mathrm{sat}}}\,, \tag{27}\] \[\mathrm{B}_{\Pi}(N_{\mathrm{sat}};\mathrm{q},\mathrm{p})=\frac{\prod_{i=0}^{N_{\mathrm{sat}}}\left(i+\mathrm{q}+1-N_{\mathrm{sat}}\right)}{\Gamma\left(N_{\mathrm{sat}}+1\right)}\mathrm{p}^{N_{\mathrm{sat}}}\left(1-\mathrm{p}\right)^{\mathrm{q}-N_{\mathrm{sat}}}\,. \tag{28}\] Except for the discussion introduced in this section, throughout this work we will always use Equation 27 and Equation 28 when negative binomial and binomial functions are needed, respectively. In this section, they are denoted as \(\mathrm{NB}_{\Pi}\) and \(\mathrm{B}_{\Pi}\), but in further sections we will use the notation NB and B for simplicity. In our model, this correction is applicable to both the negative binomial and the binomial distributions. The extended binomial distribution also suffers from \(\Gamma\) overflows in the \(f_{\mathrm{q}}\) function, but it cannot be corrected, since this correction is only applicable when two \(\Gamma\) functions appear on the two sides of a division. This is not the case for \(B_{\mathrm{ext}}\) (see Equation 23, Equation A26 and Equation A27). We now quantify the effect of this change on the PDFs. When \(\omega_{\sigma}>0\) we use the negative binomial distribution and when \(\omega_{\sigma}<0\), we use the binomial distribution. We present in Figure 5 a comparison between the input number density \(n_{\mathrm{gal}}\) and the HOD model output for this quantity, as a function of \(\omega\)6.
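As an aside, the overflow discussed above can also be sidestepped by evaluating the PMFs in log space with the log-gamma function. The sketch below is a standard alternative written in Python for illustration; it is not the product-form C implementation of Equations 27 and 28 used in this work, and the function names are ours.

```python
import math

def log_binomial_pmf(n_sat, q, p):
    """log B(n_sat; q, p): binomial coefficient built from lgamma, so no Gamma overflow."""
    if n_sat < 0 or n_sat > q or not (0.0 < p < 1.0):
        return float("-inf")
    log_coeff = math.lgamma(q + 1) - math.lgamma(n_sat + 1) - math.lgamma(q - n_sat + 1)
    return log_coeff + n_sat * math.log(p) + (q - n_sat) * math.log(1.0 - p)

def log_neg_binomial_pmf(n_sat, q, p):
    """log NB(n_sat; p, q), following the Gamma-function form of Equation 13."""
    if n_sat < 0 or not (0.0 < p < 1.0):
        return float("-inf")
    log_coeff = math.lgamma(n_sat + q) - math.lgamma(q) - math.lgamma(n_sat + 1)
    return log_coeff + q * math.log(p) + n_sat * math.log(1.0 - p)

# Arguments well beyond the tgamma limit (Gamma(172) already exceeds double precision)
# are handled without overflow.
print(math.exp(log_binomial_pmf(2, 500, 0.004)))       # close to a Poisson with lambda = 2
print(math.exp(log_neg_binomial_pmf(2, 1000.0, 0.5)))
```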
The variance can be expressed as \(\sigma^{2}=\lambda\left(1+\beta\right)\). We represent in Figure 5 the same comparison between the input number density and the HOD output, as a function of \(\beta\). We observe similar results as before: the use of negative binomial with products implies recovering the number density in the whole range (See blue dashed line), while in the case of using only \(\Gamma\) functions we observe a decay, which is smoother since \(q\) depends on \(\langle N_{\text{sat}}\rangle\propto M\). ## 5 Computational implementation of the satellite PDF Here we describe the algorithm we follow to choose a PDF for satellite galaxies given a target variance. This is quantified by an input parameter, \(\omega\), to our halo occupation distribution (HOD) model. This free parameter quantifies how far the variance of the satellite PDF is from that for a Poisson distribution. Given any input \(\omega\), unphysical negative probabilities can arise for some haloes containing more than one satellite galaxies, when \(\omega<-1/(\text{trunc}(\text{N}_{\text{sat}}))+1)\) (Subsubsection 4.1.1). To prevent this from happening, we modify the input parameter \(\omega\) when needed, following Equation 22. We let denote \(\omega_{\sigma}\) the input parameter with the possible correction. Depending on the value of \(\omega_{\sigma}\), the code for the HOD model will use different PDFs for satellite galaxies: * If \(\omega_{\sigma}=0\) we assume a Poisson PDF (Equation 12), with variance \(\sigma^{2}=\lambda\equiv\langle N_{\text{sat}}\rangle\). * If \(\omega_{\sigma}>0\) we assume a (corrected) negative binomial PDF (Equation 27) with super-Poisson variance, \(\sigma^{2}=\lambda\left(1+\omega_{\sigma}\lambda\right)\) (Equation 11). * If \(\omega_{\sigma}<0\) we have two options for the satellite PDF, with a sub-Poisson variance: * If \(\omega_{\sigma}<-1\) we assume a nearest integer PDF (Equation 16), which has the smallest possible variance. * Otherwise we assume the \(B_{\text{sub-P}}\) PDF (Section 4) with a continuous sub-Poisson variance, \(\sigma^{2}\equiv\lambda\left(1+\omega_{\sigma}\lambda\right)\) (Equation 20). The \(B_{\text{sub-P}}\) PDF is based on the (extended) binomial distributions introduced in Section 4. By default, we use the new PDF that we have introduced in Subsection 4.2 to allow for continuous values of \(\omega_{\sigma}\). This new PDF is the extended binomial, \(B_{\text{ext}}=f_{\text{q}}B\) (Equation 23). Note that \(B_{\text{ext}}\), is reduced to the binomial one, \(B\), when \(f_{\text{q}}=1\). This occurs when \(\omega_{\sigma}=1/\Psi_{\text{q}}\in\mathbb{N}^{+}\). We do not use the \(B_{\text{ext}}\) for the \(B_{\text{sub-P}}\) PDF, in one case. When \(\omega_{\sigma}>-1/\gamma_{\text{Tlin}}\) we use \(B\left(q=\text{ceil}\left(-1/\omega_{\sigma}\right)\right)\) (Equation 28). For this range of \(\omega_{\sigma}\), \(f_{\text{q}}\) cannot be evaluated due to the numerical limitations of using \(\Gamma\) functions (see Subsection 4.3 and Section A). And thus, \(B_{\text{ext}}=f_{\text{q}}B\) cannot be computed in this range. Below, we evaluate the numerical performance of \(B_{\text{sub-P}}\). ### Performance for a uniform distribution We evaluate the \(B_{\text{sub-P}}\) implementation introduced in Section 4 as a mathematical tool that can be applied when a continuous range of sub-Poisson variances are needed. 
For this purpose, we generate 3500 points distributed randomly with \(\omega\in[-1,0)\) and \(\lambda=[0,10]\) (this range is motivated in Subsection 5.2). This uniform distribution is shown in the top left panel of Figure 6. Such number of pairs \(\{\omega,\lambda\}\) can provide a reliable estimation of how \(B_{\text{sub-P}}\) behaves when \(f_{\text{q}}<0\) (red points in the top left of Figure 6). In the upper-left plot of Figure 6 we distinguish between four types of points, representing the three cases described above for \(B_{\text{sub-P}}\), that is, the use of \(B_{\text{ext}}\), green and red points in the top panels of Figure 6; \(B(q=\text{trunc}\left(-1/\omega_{\sigma}\right))\), orange points; and \(B(\omega_{\sigma})\), dark blue points. The red points in Figure 6, correspond to negative probabilities due to \(f_{\text{q}}\left(N_{\text{sat}};\lambda,\omega_{\sigma}\right)<0\). Points with \(f_{\text{q}}<0\), represent potentially problematic regions in the parameter space, where errors could arise when recovering the input mean, variance and higher-order moments of the distribution. This region is centered around \((\lambda>1,\omega_{\sigma}>-1/4)\). We evaluate the impact of these below. In the upper panels of Figure 6 we show the effect of correcting the input \(\omega\), dark blue points uniformly distributed on the left, when \(\omega<-1/(\text{trunc}(\left(N_{\text{sat}}\right)+1))\) (Subsubsection 4.1.1). The resulting \(\omega_{\sigma}\) parameter has a serrated behaviour, as shown by the dark blue points in the upper-right panel of Figure 6. We address the errors introduced by the corrections mentioned above by studying \(10^{6}\) realisations of \(B_{\text{sub-P}}\) for each pair \((\omega_{\sigma},\lambda)\) shown in the top right panel of Figure 6. We compute the first three central moments: the mean \(\lambda_{\text{recov}}\), the variance \(\sigma_{\text{recov}}\) and the skew Figure 5: Ratio between the recovered and the target number density multiplied by \(10^{3}\) for clarity, as a function of \(x>0\). The orange line is obtained using \(\text{NB}(\text{N}_{\text{sat}};q=1/\omega)\) Equation 13. When \(\Gamma(y>y_{\text{Tlin}})\) in Equation 13, numerical computation of this function falls and no satellite galaxies are assigned to dark matter haloes, and thus the recovered number density decreases abruptly. The red line is obtained considering \(\text{NB}(\text{N}_{\text{sat}};\text{q}=\lambda/\beta)\), which has a smoother decay when \(\beta\to 0\). Green and blue lines are obtained using \(\text{NB}(\text{N}_{\text{sat}};\text{q}=1/\omega)\) and \(\text{NB}_{\text{Tlin}}(\text{N}_{\text{sat}};\text{q}=\lambda/\beta)\), respectively. For both cases we recover correctly the number density in the whole range of \(\omega\). Similar results are found for \(\omega<0\) using \(\text{B}(\text{N}_{\text{sat}};\text{q}=-1/\omega)\), compared to \(\text{B}_{\text{Tlin}}(\text{N}_{\text{sat}};\text{q}=-1/\omega)\). We use the default HOD model to compute the galaxy catalogues (Table 2). ness \(k_{3,\mathrm{recov}}\) of those realisations (for the skewness we only consider \(\omega>-0.5\) since this is the range of applicability of Equation 21). We also compute \(\omega_{\sigma,\mathrm{recov}}\) from \(\lambda_{\mathrm{recov}},\sigma_{\mathrm{recov}}\) inverting Equation 11. Finally, we compare the results with the inputs \(\lambda\), \(\omega_{\sigma}\), \(\sigma\) and \(k_{3}\). 
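The moment-recovery check can be sketched as below. Since the extended-binomial sampler itself is not reproduced here, the example uses a plain binomial, which realises \(\omega_{\sigma}=-1/q\) exactly, as a stand-in, and the standardised skewness shown may follow a different normalisation than the \(k_{3}\) of Equation 21.

```python
import numpy as np

rng = np.random.default_rng(42)

def recovered_moments(sample):
    """Mean, variance and standardised skewness of a set of realisations."""
    lam_rec = sample.mean()
    var_rec = sample.var()
    skew_rec = ((sample - lam_rec) ** 3).mean() / var_rec ** 1.5
    return lam_rec, var_rec, skew_rec

# Stand-in example: a plain binomial realises omega_sigma = -1/q exactly.
lam, q = 3.0, 4                     # target mean and binomial parameter
p = lam / q                         # binomial mean q*p = lam, variance lam*(1 - lam/q)
sample = rng.binomial(q, p, size=10**6)

lam_rec, var_rec, skew_rec = recovered_moments(sample)
omega_rec = (var_rec / lam_rec - 1.0) / lam_rec      # invert Eq. 11
print(lam_rec, var_rec, omega_rec)                   # expect ~3.0, ~0.75, ~-0.25
```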
In the middle and lower panels of Figure 6, we represent for each one of the points the following quantity: \(|\theta_{\mathrm{recov}}/\theta-1|\), for \(\theta=\lambda,\omega_{\sigma},\sigma,k_{3}\), which represents the differences between recovered and input moments. As expected, high values of \(|\theta_{\mathrm{recov}}/\theta-1|\) are found in the region containing points with \(f_{\mathrm{q}}(N_{\mathrm{sat}};\lambda,\omega_{\sigma})<0\), in which \(\lambda>2\) and \(\omega_{\sigma}>-1/3\). We get the largest errors when we consider the skewness \(k_{3}\) and the lowest errors when we consider the mean \(\lambda\). Errors arise also in the \(\theta=\omega_{\sigma}\) subplot for \((\lambda,\omega_{\sigma}\to 0)\): This is found to be numerical noise, and the relative errors get smaller as we increase the number of realisations. Finally, we also see deviations in the \(\theta=k_{3}\) subplot, near to the \(k_{3}=0\) curve, again simply due to statistical limitations. To sum up, in those two last cases there is not an intrinsic bias, since errors simply decrease when we consider a larger number of realizations. Intrinsic bias in the computation of moments arise in the region of the parameter space corresponding to \(f_{\mathrm{q}}(N_{\mathrm{sat}};\lambda,\omega_{\sigma})<0\). Since our main interest is to recover correctly the mean number of satellites and the variance, in Figure 7 we show the differences between recovered and input values of \(\lambda\) and \(\sigma\). As we can see, only red points (\(f_{\mathrm{q}}<0\)) have appreciable errors in our analysis. As we can see for those points, less of them ends up with a higher error values and higher discrepancies between \(\lambda\) and \(\lambda_{\mathrm{recov}}\) involve also higher discrepancies between \(\sigma\) and \(\sigma_{\mathrm{recov}}\) (Note that both parameters are related by Equation 11). In addition, we find \(\lambda_{\mathrm{recov}}<\lambda\) while \(\sigma_{\mathrm{recov}}>\sigma\). We discussed so far where we can find in our parameter space differences between recovered and input values of our parameters \(\lambda,\omega_{\sigma},\sigma\) and \(k_{3}\). Now we can focus on determining how frequent those errors are. Considering \(\theta=\lambda(\theta=\sigma)\), a 2 (2.8) per cent of the points in the parameters range uniformly explored in Figure 6 have errors greater than 0.5 per cent. In contrast, if we consider \(\theta=k_{3}\), a 9.5 per cent of the points have errors greater than 0.5 per cent. ### Performance for galaxy catalogues In the previous subsection we evaluated how \(B_{\mathrm{sub-P}}\) performed in a uniform \(\{\omega,\lambda\}\) space. Now, we evaluate the performance of \(B_{\mathrm{sub-P}}\) for galaxy catalogue generation considering a reduced parameter space to isolate points in which \(f_{q}<0\). First, let us recall that we are considering a growing power law for \(\langle N_{\mathrm{sat}}(M)\rangle\) (Equation 10, Figure 2). At the same time, the halo mass function is a decreasing power law: we have much less massive haloes than lighter ones. This translates to a lesser number of haloes with a higher average number of satellites. If we consider Figure 6, the difference we make is to consider progressively less number of points with higher \(\langle N_{\mathrm{sat}}\rangle\). 
As a first step, we consider 1,000,000 points in the following parameter space: \(\left\{\omega\in\left(-\frac{1}{3},0\right),\lambda=\langle N_{\mathrm{sat}}(M)\rangle\right\}\), in which \(\lambda\) is obtained from 1,000,000 unit halo masses selected randomly, applying Equation 10 and considering \(f_{\mathrm{sat}}=0.65\). All the evaluated points lie at \(0<\lambda<10\), motivating the limits imposed on the parameter space considered in Subsection 5.1. Note that we skipped \(\omega\in\left(-1,-\frac{1}{3}\right)\), since in those cases \(f_{\mathrm{q}}\geq 0\)\(\forall N_{\mathrm{sat}}\). We motivate below further cuts to isolate the problematic region (\(f_{q}<0\)) found in the last subsection, to avoid a large number of unnecessary \(B_{\mathrm{sub-P}}\) evaluations. Footnote 7: We expect almost no variation in the result if OuterRim halo masses are used, since both simulations share similar halo mass functions. Footnote 8: For a particular halo mass, when the fraction of satellites is higher, \(\lambda\) is also higher. Since the effect we want to evaluate arises for high values of \(\lambda\), we consider a high value of the fraction of satellites as a worst-case scenario. We exclude points with \(\lambda\leq 1\), since it can be demonstrated mathematically that \(f_{\mathrm{q}}\left(N_{\mathrm{sat}};\lambda\leq 1,\omega\right)\geq 0\). This is also demonstrated numerically in our previous test (Subsection 5.1). Finally, we also exclude points with \(\omega<-1/(\mathrm{trunc}(\lambda)+1)\) (equivalent to blue points in Figure 6). For those points, we recall that Equation 22 is applied and the input value of \(\omega\) is substituted by \(\omega_{\sigma}=-1/(\mathrm{trunc}(\lambda)+1)\). All values of \(\omega_{\sigma}=-1/(\mathrm{trunc}(\lambda)+1)\) fall in the range of the binomial distribution, and then we have \(f_{\mathrm{q}}=1\) for all those points (that is, \(B_{\mathrm{ext}}=B\)). We stress that these selections are designed to isolate the problematic region of the parameter space in which \(f_{\mathrm{q}}<0\). In total, 851 points (out of the initial 1,000,000) pass the selection. We focus on those points to determine errors in the central moments. In particular, we compute, as in Subsection 5.1, \(10^{6}\) realizations of \(B_{\mathrm{sub-P}}\) for all selected points. We use the results to compare the recovered and fiducial values of \(\lambda\), \(\omega_{\sigma}\), \(\sigma\) and \(k_{3}\). In this test, we focus on quantifying how frequent the errors on \(\theta=\lambda\), \(\omega_{\sigma}\), \(\sigma\) and \(k_{3}\) are, and we are interested only in the errors arising when \(f_{\mathrm{q}}\big{(}N_{\mathrm{sat}};\lambda,\omega_{\sigma}\big{)}<0\). Thus, the statistics are obtained considering the number of points with errors greater than \(|\theta_{\mathrm{recov}}/\theta-1|\) (\(\theta=\lambda,\omega_{\sigma},\sigma,k_{3}\)) for the 851 selected points, but we normalize the result adding zero errors for the remaining points, in which we already knew that \(f_{\mathrm{q}}\geq 0\) (in total, 1,000,000 points). Note that, in order to obtain a reliable statistic for galaxy catalogue generation, we need to consider all the original points. The selection was done to compute the errors only in the isolated region of the parameter space in which \(f_{\mathrm{q}}<0\) was possible, thus avoiding a high computational cost.
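For reference, the cuts just described can be encoded as a simple mask; the toy \((\omega,\lambda)\) values drawn below stand in for the \(\langle N_{\rm sat}(M)\rangle\) values that, in our test, come from Equation 10 and the unit halo masses.

```python
import numpy as np

def fq_risk_selection(omega, lam):
    """Mask reproducing the cuts of this subsection (as we read them):
    keep only points where f_q < 0 is still possible, i.e. lam > 1,
    omega in (-1/3, 0), and omega not already corrected by Eq. 22."""
    omega = np.asarray(omega, dtype=float)
    lam = np.asarray(lam, dtype=float)
    keep = (lam > 1.0) & (omega > -1.0 / 3.0) & (omega < 0.0)
    keep &= omega >= -1.0 / (np.trunc(lam) + 1.0)   # Eq. 22 correction not triggered
    return keep

# Toy example with uniform random points instead of the <N_sat>(M) values of Eq. 10:
rng = np.random.default_rng(1)
omega = rng.uniform(-1.0 / 3.0, 0.0, 10**6)
lam = rng.uniform(0.0, 10.0, 10**6)
print(fq_risk_selection(omega, lam).sum(), "points kept for the moment test")
```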
In Figure 8 we present the number of \(B_{\mathrm{sub-P}}\) realizations that have an error greater than the \(|\theta_{\mathrm{recov}}/\theta-1|\) values on the x-axis: only 6 per million points have and error greater than 0.5 per cent recovering the mean. In the same way we find that only 17 per million points have an error greater than 0.5 per cent recovering \(\sigma\) and only 131 per million points recovering the skewness. The discrepancies between the fiducial and recovered values arise at high values of \(\lambda\), which correspond to very massive haloes, that are much less frequent in nature. To sum up, in the particular task of mock generation, numerical errors are not expected to have an impact in the resulting galaxy distribution of the catalogues. ## 6 2pcf and the satellite PDF We expect the satellite PDF to affect the pair-counts of galaxies within a halo, i.e., the 2-point correlation (2PCF) 1-halo term. On the other hand at large scales (in the 2-halo term), the 2PCF would be unaffected by the PDF itself. This intuition is indeed confirmed when we measure the 2PCF HOD catalogues generated with different PDF variances. In Figure 9 we can see how the variance of the probability distribution used to populate haloes with satellite galaxies affects the projected correlation function \(w_{\mathrm{p}}(r_{\mathrm{p}})\). In the upper panel we see the general shape of \(w_{\mathrm{p}}(r_{\mathrm{p}})\). As we can see, it follows the typical decaying power law as a function of the projected distance. In the middle panel we divide all \(w_{\rm p}(r_{\rm p})\) considered by the Poisson \(w_{\rm p}\),p(\(r_{\rm p}\)). We can see that small scales are affected as expected: clustering rises when the variance considered is larger. A larger variance favor more instances with more than one satellite in the same halo, increasing precisely the 1-halo term. Since the linear bias has been fixed, intermediate and large scales are mostly not affected since galaxy pairs comes from different haloes and now former differences are averaged out. In the lower panel we can see the difference between all \(w_{\rm p}\) and the Poisson \(w_{\rm p,p}\) divided by the error calculated using jackknife resampling. We detail the calculation of jackknife errors in the following subsection. We show that variations in the one-halo term of the projected correlation function induced by changes in the PDF are clearly significant, showing the constraining power that \(w_{\rm p}(r_{\rm p})\) has on the variance on small scales. We do not show it here but we find that the dependence of the monopole and quadrupole on the PDF variance is more modest, in line with the results from Avila et al. (2020). Since we use mostly the one-halo term of the projected correlation function when fitting galaxy catalogues with eBOSS data, we also use the linear scales of Figure 6: _Top left_: 3500 random points with \(\omega\in[-1,0)\) and \(\lambda\in(0,10)\). We distinguish green points: \(f_{\rm q}(N_{\rm sat};\lambda,\omega)>0\)\(\forall N_{\rm sat}\), red points, in which \(f_{\rm q}(N_{\rm sat};\lambda,\omega)<0\) for some \(N_{\rm sat}\). We also consider points with \(\omega<-1/(\rm trunc(\lambda)+1)\) and finally those with \(q>y_{\rm Ylim}\), represented in orange. _Top right_: correcting \(\omega\) in favor of \(\omega_{\sigma\sigma}\) for blue points. _Middle left_: we compare the mean recovered \(\lambda_{\rm rrecov}\) with its respective fiducial value \(\lambda\) for all points. 
_Middle right_: we make the same comparison for \(\omega_{\sigma\sigma}\). _Lower left_: same for \(\sigma\). _Lower right_: we do the same analysis for the skewness \(k_{3}\). We represent 3290 points, since we exclude points placed in the grey area and we also represent \(k_{3}=0\). \(\xi_{0}(s)\) and \(\xi_{2}(s)\) in our fits. In this regime, the monopole depends more strongly on the linear bias, and there is no appreciable dependence in the quadrupole, which is more affected by the velocity distribution. ### Jackknife resampling To estimate the covariance of the two-point functions \(\mathrm{y}=(w_{\mathrm{p}},\xi_{0},\xi_{2})\) introduced in this section, and also for the Count in Cells estimator introduced in Section 7 we use jackknife resampling. In the case of two-point functions using unit simulation, we construct \(N_{\mathrm{box}}=1000\) copies of a catalogue of galaxies, with \(n=n_{\mathrm{eBOSS}}\) and \((\omega,f_{\mathrm{sat}})=(\cdot 0.8,0.4)\). To each copy we extract a different sub-box of \(L_{\mathrm{cell}}=100\)\(h^{-1}\)Mpc. We compute \(y_{i}=(w_{\mathrm{p,i}},\xi_{0,i},\xi_{2,i})\) for each copy and we apply: \[C_{\mathrm{Jack}}=\frac{N_{\mathrm{box}}-1}{N_{\mathrm{box}}}\sum_{i=1}^{N_{ \mathrm{box}}}\sum_{j=1}^{N_{\mathrm{box}}}\left(y_{i}-y\right)\cdot\left(y_ {j}-y\right) \tag{29}\] We finally need to rescale the variance, comparing the volume of the simulation and the eBOSS ELG volume: \[C=C_{\mathrm{Jack}}\cdot\frac{V_{\mathrm{box}}}{V_{\mathrm{eBOSS}}} \tag{30}\] In the case of unit simulation, \(V_{\mathrm{box}}=1\)\(h^{-1}\)Gpc. Nevertheless, when considering two-point function using outer-rim simulation, we have that \(V_{\mathrm{box}}=3\)\(h^{-1}\)Gpc. In that case, we divide the latter box in 27 sub-boxes of 1 \(h^{-1}\)Gpc, we follow the steps outlined above for each one of the sub-boxes and we finally average the errors. Finally, we compute the jackknife covariance for Count-in-cells following the steps described above, but using \(N_{\mathrm{box}}=125\). ### Application to model eBOSS ELGs As an application we fit the eBOSS ELG data using HOD models with a range of satellite PDFs. This is done as an exercise and thus, we will only vary 2 parameters: the fraction of satellites \(f_{\mathrm{sat}}\) and the Figure 8: We represent the normalised number of \(B_{\mathrm{sub-P}}\) realization errors (multiplied by \(10^{4}\) due to axis clarity) higher or equal to \(|\theta_{\mathrm{recov}}/\theta-1|\) divided by the total number of points considered, \(N=10^{6}\), with colours as indicated by the legend Figure 7: Difference between the recovered and input values of \(\sigma\) and \(\lambda\), divided by its respective input values. We represent 3500 points, divided in four flags as in the upper plots of Figure 6. The most relevant errors which introduce a real bias in the results are found for \(f_{0}(N_{\mathrm{sat}};\lambda,\,\omega_{\sigma})<0\). The errors for points corresponding to the other flags are best shown in the zoom panel. Those are smaller and due to the limited number of realizations taken into account to compute \(\lambda\) and \(\sigma\). Figure 9: Projected correlation functions calculated from unit simulations \(w_{\mathrm{p}}\) considering a fraction of satellites \(f_{\mathrm{sat}}=0.3\). We consider Negative Binomial distributions (\(\omega>0\)), Poisson distribution (\(\omega=0\)), \(B_{\mathrm{sub-P}}=(-1\leq\omega<0)\) and Nearest Integer distribution. _Top:_ projected correlation function \(w_{\mathrm{p}}\). 
_Middle:_ ratio between all \(w_{\mathrm{p}}\) with Poisson \(w_{\mathrm{p,P}}\) (\(\omega=0\)). _Bottom:_ difference between all \(w_{\mathrm{p}}\) and Poisson \(w_{\mathrm{p,P}}\) divided by the error. PDF variance by setting free \(\omega\). We get independent fits for unit and outerrh simulations. We consider the following PDFs: Nearest Integer (NI), \(B_{\rm sub-P}\) with \(\omega\in[-1,-0.05]\), Poisson (\(\omega=0\)) and Negative binomial \(\omega\in[0.05,0.5]\). We consider a step \(\Delta\omega=0.05\) for the \(B_{\rm sub-P}\) and the NB PDFs. This parameter space is shared by the two simulations used in this work. We also vary the fraction of satellites. For unit, we consider \(f_{\rm sat}\in[0.05,0.75]\) with \(\Delta f_{\rm sat}=0.05\). We exclude \(f_{\rm sat}>0.75\) since in this case \(\mu<M_{\rm min}=10^{10.42}~{}M_{\odot}h^{-1}\), implying a model in which most of the central galaxies have to be placed at small-mass unresolved haloes. We point out that this issue is particular to the HOD considered in this work. Other HOD models may allow more freedom in the fraction of satellites range. We exclude as well \(f_{\rm sat}=0\), corresponding to a galaxy catalogue populated only with central galaxies. For outerrh, we consider \(f_{\rm sat}\in[0.05,0.6]\), since \(f_{\rm sat}>0.6\) is implying \(\mu<M_{\rm min}=10^{10.58}~{}M_{\odot}h^{-1}\). All remaining parameters are set to their default values (See Table 2). To increase the signal-to-noise ratio, the galaxy catalogues constructed have a number density \(n_{\rm gal}=6n_{\rm eBOSS}\). We compute the \(\chi^{2}\) to compare the reference data catalogue with each one of our galaxy catalogues. The expression of the \(\chi^{2}\) as follows: \[\chi^{2}\left(\theta\right)=\left(y_{\rm data}-y_{\rm sim}\left(\theta\right) \right)^{T}C^{-1}\left(y_{\rm data}-y_{\rm sim}\left(\theta\right)\right) \tag{31}\] in which \(y=\left(w_{\rm p},\xi_{0},\xi_{2}\right)\), \(\theta=\left(f_{\rm sat},\omega\right)\) and \(C\) is the covariance matrix computed in Subsection 6.1. Clustering information is computed following Avila et al. (2020): we compare the projected correlation function \(w_{\rm p}(r_{\rm p})\) evaluated at \(0.19<r_{\rm p}(h^{-1}{\rm Mpc})<4.5\), the monopole \(\xi_{0}(s)\) evaluated at \(20<s(h^{-1}{\rm Mpc})<45\) and the quadrupole \(\xi_{2}(s)\) evaluated at \(10<s(h^{-1}{\rm Mpc})<25\) (22 points in total). Therefore, we consider (22-2 = 20) degrees of freedom. Finally, we find the best fit galaxy catalogue extracting \((f_{\rm sat},\omega)\) values that correspond to the minimum \(\chi^{2},\chi^{2}_{\rm min}\). In the top panel of Figure 10 we show a comparison between our unit-based galaxy catalogues and eBOSS ELG data as our reference catalogue through the evaluation of \(\chi^{2}(f_{\rm sat},\omega)-\chi^{2}_{\rm min}\). As we can see, there is some degeneracy between the two parameters. If we increase the variance of the distribution \(\omega\) or the fraction of satellites \(f_{\rm sat}\) it also increases the one halo term of the projected correlation function. Both effects seem to compensate each other. Besides that, we find that the preferred galaxy catalogue has \((f_{\rm sat},\omega)=(0.35,-0.65)\), corresponding to a catalogue in which the 35 per cent of galaxies are satellites, and satellites are placed with a sub-Poisson variance, modelled with \(B_{\rm ext}\) (\(\omega=-0.65\)). 
In the bottom panel of Figure 10 we show \(\chi^{2}(f_{\rm sat},\omega)-\chi^{2}_{\rm min}\) for galaxy catalogues computed using outerrh simulation. The best fit prefers satellite galaxies to be distributed following a Nearest Integer distribution, with 40 per cent of the total number of galaxies as satellites \((f_{\rm sat}=0.4)\). We can see that outerrhim \(\chi^{2}(f_{\rm sat},\omega)\) function is smoother than the same quantity computed using unit simulation, since the volume of the simulation is 27 times bigger (See Table 1), unit fixed technique suppresses variance on large scales (Chuang et al., 2019), but since small scales have an important weight in our \(\chi^{2}\) fitting, it is not possible to benefit from this suppression. The results are not fully consistent between the two simulations. In particular, the difference of the best fits found for both simulations, considering outerrhim simulation is around \(\sqrt{\Delta\chi^{2}}\approx 3.5\sigma\). Nevertheless, this inconsistency is not surprising since both simulations are run with different cosmologies (See Table 1). The results obtained are just a first test aiming to give a first use to the \(B_{\rm sub-P}\) extension in a simple context of two free-parameter fitting \((f_{\rm sat},\omega)\) to eBOSS ELG data, using standard clustering measurements \((w_{\rm p}(r_{\rm p}),\xi_{0}(s),\xi_{2}(s))\), following Avila et al. (2020). We also want to see possible simulation-dependent differences in the results, as the use of different volumes and cosmologies on the \(\chi^{2}\) fitting results. The actual best fit values are not so important in this work: here the focus is on the probability distribution function, and thus we fixed most HOD parameters. ## 7 Count in cells In Avila et al. (2020) the authors explored the HOD parameter space using clustering statistics, and degeneracies appeared between some of the parameters. We find similar degeneracies in this work in Figure 10. Given that the CIC statistics is indeed evaluating the PDF of galaxy number counts in cubic cells, we expect it to have a strong constraining power on the PDF of satellites within a halo. The CIC estimator we consider, \(n_{\rm CIC}\left(N_{\rm gal}\right)\), is obtained following a similar approach as (Yang and Saslaw, 2011): the simulation box is divided in cubic cells of side \(L_{\rm cell}=5~{}h^{-1}{\rm Mpc}\), then we count how many galaxies have each cell and finally we find out how many cells have \(N_{\rm gal}\) galaxies. Finally, we normalize over the number of cells: \[n_{\rm CIC}\left(N_{\rm gal}\right)=N_{\rm cells}\left(N_{\rm gal}\right) \cdot\left(\frac{L_{\rm cell}}{L_{\rm box}}\right)^{3} \tag{32}\] \(n_{\rm CIC}(N_{\rm sat})\) is estimated using 100 realizations of galaxy catalogues computed from unit simulations, with \(f_{\rm sat}=0.3\) and \(n=n_{\rm eBOSS}\). We point out that Counts-in-Cells, unlike two-point statistics, depends Figure 10: _Top_: comparison between eBOSS ELG data and galaxy catalogues obtained form very similar with \(\omega\) and \(f_{\rm sat}\) as free parameters. Specific ranges of the projected correlation function, the monopole and the quadrupole are used for the comparison. Each point represents the \(\chi^{2}-\chi^{2}_{\rm min}\) for a particular point of \((\omega_{\rm s}\),\(f_{\rm sat})\) parameter space. \(\chi^{2}-\chi^{2}_{\rm min}=0\) represented by a red point is the best fit to the data, corresponding to \((\omega,f_{\rm sat})=(-0.65,0.35)\). 
Since \(\omega<0\), a sub-Poisson distribution is preferred, parametrised by the extended binomial distribution. _Bottom_: same comparison using outerrhim galaxy catalogues. The best fit prefers a Nearest Integer distribution and a fraction of satellites \(f_{\rm sat}=0.4\). on the number density of the galaxy catalogue taken into account. If the number density is higher, we will have more cells filled with a higher number of galaxies, and less cells with a lower number of galaxies. In Figure 11 we can see how counts in cells vary with respect to \(\omega\). We remind that \(\omega\) is related to the variance of the distribution (See Equation 11). In the upper panel we see the general trend of \(n_{\rm CIC}(N_{\rm gal})\): fewer cubic cells in the box are expected to contain more galaxies. This decay is roughly exponential. The middle panel of Figure 11 shows better the variation on the number of cells depending on the value of \(\omega\). All lines are divided by Poisson \(n_{\rm CIC,p}\) (\(\omega=0\)). For \(N_{\rm gal}/V_{\rm cell}\geq 2\) we start to see that for \(\omega>0\), more number of cells are filled with higher number of galaxies. The trend is opposite for \(\omega<0\). In the lower panel we can see the potential of CIC constraining \(\omega\). The highest constraining is achieved considering cells with 2 or 3 galaxies inside. We can see also that noise increases for cells filled with higher number of galaxies. ### Proof of concept: using CIC to reproduce a model catalogue of galaxies In this section we compare the constraining power of Count-in-Cells and two-point statistics, using unit simulation. We use as our reference catalogue a galaxy catalogue with \((\omega,f_{\rm sat})=(-0.8,0.4)\). We consider the same parameter space as in Subsection 6.2. For two-point statistics, the galaxy catalogues constructed have a number density \(n_{\rm gal}=6n_{\rm eBOSS}\), as in Subsection 6.2. In the case of CIC, since the results depend on the number density, we have constructed 10 realizations with number density \(n_{\rm gal}=n_{\rm eBOSS}\) for each point in the parameter space to achieve a similar resolution. We consider the same two-point statistics as before: \((w_{\rm p}(r_{\rm p}),\xi_{0}(s)\) and \(\xi_{2}(s)\)), but now we compare our galaxy catalogues to the reference galaxy catalogue generated with \(\omega=-0.8\), \(f_{\rm sat}=0.4\), instead of eBOSS data. For the Count-in-Cells information, we use \(n_{\rm CIC}(N_{\rm gal})\) evaluated at \(N_{\rm gal}/V_{\rm cell}\) from 0 to 5, since beyond some catalogues have no cells with \(N_{\rm gal}/V_{\rm cell}>5\). Finally, we also join all statistics. In the top panel of Figure 12 we show the \(\chi^{2}\) of the combined projected correlation function, monopole and quadrupole. As it can be seen, \(f_{\rm sat}\) and \(\omega\) are partially degenerated. That is, points far from each other in the parameter space may share similar \(\chi^{2}\) near to the minimum. In the middle panel we show the performance of the Count-in-Cells and in the bottom panel we sum CIC + 2PCFs. CIC turns to be promising when fitting the HOD parameters, since it has more constraining power on \((\omega,f_{\rm sat})\) than two-point functions. Also, it complements 2PCFs, providing a different orientation of the degeneracy contour, helping to break previous degeneracies between \(f_{\rm sat}\) and \(\omega\). 
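For completeness, the Count-in-Cells estimator of Equation 32 used throughout this section can be sketched as follows, assuming a periodic cubic box and positions given in box coordinates; the gridding choices and the uniform toy catalogue are illustrative only.

```python
import numpy as np

def counts_in_cells(pos, l_box, l_cell):
    """Counts-in-Cells estimator of Eq. 32 for a periodic cubic box.
    pos: (N, 3) galaxy positions in [0, l_box); returns (N_gal values, n_CIC)."""
    n_side = int(round(l_box / l_cell))
    idx = np.floor(pos / l_cell).astype(int) % n_side        # cell index per galaxy
    flat = np.ravel_multi_index(idx.T, (n_side, n_side, n_side))
    counts = np.bincount(flat, minlength=n_side**3)           # N_gal in every cell
    n_gal_values, n_cells = np.unique(counts, return_counts=True)
    return n_gal_values, n_cells * (l_cell / l_box) ** 3      # normalisation of Eq. 32

# Toy usage: a uniform random catalogue instead of an HOD catalogue.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1000.0, size=(200_000, 3))             # box of 1000 Mpc/h
n_gal_values, n_cic = counts_in_cells(pos, l_box=1000.0, l_cell=5.0)
print(dict(zip(n_gal_values.tolist(), n_cic.tolist())))
```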
We also expect other statistical methods based on counting galaxies in different volumes such as counts in cylinders and 2D k-Nearest Neighbours (Yuan et al., 2023) to be promising to break degeneracies in the parameter space. Nevertheless, despite CIC has potential as an statistic to describe the distribution of galaxies, more work needs to be done to properly account for data systematics such as the survey window, masks or redshift evolution of the number density \(n(z)\) and linear bias \(b(z)\), when fitting our galaxy catalogues to data surveys (Salvador et al., 2019). Moreover, the use of lightcone catalogues would also be more appropriate for this type of measurements. Figure 11: Count-in-Cells calculated from unit simulations considering a fraction of satellites \(f_{\rm sat}=0.3\). We consider the negative binomial distribution \((\omega>0)\), Poisson distribution \((\omega=0)\), \(B_{\rm sub-p}\) (\(-1\leq\omega<0\)) and Nearest Integer distribution. _Top_: Count-in-Cells estimator \(n_{\rm CIC}\) (\(N_{\rm gal}\)). _Middle_: ratio between all \(n_{\rm CIC}\) and Poisson \(n_{\rm CIC,p}\) (\(\omega=0\)). _Bottom_: difference between \(n_{\rm CIC}\) and Poisson \(n_{\rm CIC,p}\) divided by the Jackknife error \(\sigma\). Figure 12: _Top_: \(\chi^{2}(f_{\rm sat},\omega)\) obtained considering \(w_{\rm p}\), \(\xi_{0}\) and \(\xi_{2}\) clustering measurements. _Middle_: \(\chi^{2}(f_{\rm sat},\omega)\) obtained considering Count-in-Cells. _Bottom_: \(\chi^{2}\) joined from both contributions. Our reference galaxy catalogue has \((\omega,f_{\rm sat})=(-0.8,0.4)\), with \(\chi^{2}=0\). ## 8 Summary and conclusions We have generated catalogues with model galaxies using probability distribution functions (PDF) with a range of variances for deciding the number of satellite galaxies to be placed in a dark matter halo of a given mass. These catalogues are generated with HOD models with fixed number density and bias (SS 3). We have used dark matter haloes from the unit and outerrin at \(z=0.8594\) and \(0.865\), respectively. In this work we have expanded the range of possible variances of the PDF for placing satellite galaxies in dark matter haloes with halo occupation distribution (HOD) models. In particular, we have made possible to have continuous sub-Poissonian variances (\(\sigma^{2}<\lambda\equiv(N_{\rm sat})\)), introducing an extension to the binomial distribution, \(B_{\rm ext}(N_{\rm sat};\lambda,\omega)=f_{\rm d}B(N_{\rm sat};p,q)\) (SS 4). \(f_{\rm d}\) is computed such that the extended binomial matches the moments of the binomial distribution (the calculations are detailed in appendix A). We parameterize the PDF variance with \(\omega_{\sigma}\), \(\sigma^{2}=\lambda(1+\omega_{\sigma}\lambda)\) (Equation 11). This parameter quantifies the deviation of the variance with respect to one corresponding to a Poisson distribution, \(\sigma^{2}=\lambda\). The binomial distribution, \(B(N_{\rm sat};p,q)\), can have discrete values for the variance \(-1\leq\omega_{\sigma}=-\frac{1}{q=1,2,\ldots}<0\) and the extended binomial distribution covers it continuously-\(1\leq\omega_{\sigma}<0\). We find mathematical limitations for the binomial (and also extended binomial) distribution functions. We introduce \(\omega\) as the input parameter of \(\omega_{\sigma}\) in our code by the user. Then, the region of the parameter space delimited by (\(\omega<-1/\lambda,\lambda>1\)), this PDF has negative probabilities. 
To correct this issue, the HOD code uses \(\omega_{\sigma}=-1/(\rm trunc(\lambda)+1)\) instead of \(\omega\) (see Equation 22). This change allows us to maximally explore sub-Poissonian variances with a single parameter \(\omega\) for the entire catalogue, without obtaining unphysical results. As the mean number of satellites in a halo typically increases with halo mass, \(\lambda(M)\), this correction is only needed for high-mass haloes, which are a minority. Negative probabilities can also arise for \(B_{\rm ext}\) when considering a non-trivial subset of the following parameter space (\(\lambda>1,0>\omega>-\frac{1}{\lambda}\)). This is due to \(f_{\rm q}\), the factor that allows the variance of \(B_{\rm ext}\) to be continuous, being negative. For the model catalogues we produce, this problem only affects 6 per million haloes with an error in the mean number of satellites greater than 0.5 per cent, and 17 per million haloes with an error in \(\sigma\) greater than 0.5 per cent (see Subsection 5.2). For all the PDFs for the number of satellite galaxies, we have mitigated a computational limitation that appears when using \(\Gamma(x)\) for sufficiently large arguments. This has been done by swapping ratios of \(\Gamma(x)\) for products, as they are mathematically equivalent (§ 4.3). Varying the PDF for satellite galaxies impacts the small-scale clustering. PDFs with larger variances enhance the 1-halo projected two-point correlation function in redshift space on scales \(r_{\rm p}\lesssim 1\)\(h^{-1}\)Mpc, beyond the statistical error bars. The effect on the monopole and quadrupole of the two-point correlation function is negligible since we are using linear scales. Assuming different variances for the PDF of satellite galaxies also affects the Count-in-Cells (CIC). We have considered cubic cells of side \(5h^{-1}\)Mpc. This volume is small enough to see the PDF impact on CIC and big enough to make CIC calculations computationally feasible. The highest constraining power of CIC is found in the range \(N_{\rm gal}/V_{\rm cell}=2-3/\big{[}5h^{-1}\rm Mpc\big{]}^{3}\) considering \(n=n_{\rm eBOSS}\). We have applied our extension to find the best HOD model describing the clustering of eBOSS Emission-Line Galaxies (ELGs) at \(z=0.6-1.1\). For this exercise, only two parameters have been set free: \(\omega\), which controls the variance of the PDF for satellite galaxies; and \(f_{\rm sat}\), which controls the fraction of galaxies that are satellites (§ 6.2). We have fitted simultaneously the observed monopole, \(\xi_{0}(s)\), quadrupole, \(\xi_{2}(s)\), and the projected correlation function, \(w_{\rm p}(r_{\rm p})\). For the error bars we have used jackknife covariance matrices (§ 6.1). A sub-Poisson distribution together with a high fraction of satellites is preferred to reproduce the clustering of eBOSS ELGs. In particular, for unit we find \(\omega=-0.65\), \(f_{\rm sat}=0.35\); and for OuterRim, \(\omega<-1\) (minimal variance, corresponding to a nearest integer distribution), \(f_{\rm sat}=0.4\). As an exercise to show the potential of CIC to constrain the PDF variance, and using a mock catalogue as a reference, we have included both the clustering and CIC to constrain the HOD model parameters. Again, we have only set free \(\omega\) and \(f_{\rm sat}\). CIC has more constraining power over the variance of the PDF for satellite galaxies. This estimator can reduce the degeneracies found between \(\omega\) and \(f_{\rm sat}\) when fitting only the clustering (§ 7.1).
However, more work needs to be done to fit mock catalogues to observed CIC, as several other aspects need to be taken into account that were beyond the scope of this article. These include observational systematic errors, survey window function, the effect of density and bias evolution (\(n(z)\), \(b(z)\)), etc. In summary, we have developed a robust way to generate HOD galaxy catalogues with a range of variances for the PDF of satellite galaxies within dark matter haloes. We have shown that this variance has a large impact on one and two point summary statistics. In particular, CIC is a promising statistic to constrain the distribution of galaxies within dark matter haloes. Likewise, the PDF of satellites needs to be understood well, if the CIC statistics is to be used to constrain cosmology. Our extension to the HOD modelling and the proposal to use the observed CIC in the future give us the opportunity to gain insight of the galaxy-halo connection with existing and upcoming data. Reciprocally, it is fundamental to understand the impact of galaxy-halo connection ingredients on different galaxy clustering statistics, since some of these ingredients could be degenerated with cosmological parameters. These topics are of great relevance now that we have entered the stage-IV of cosmological surveys with the data acquisition by the Dark Energy Spectroscopic Instrument (DESI) and Euclid. ## 9 Data availability The data that support the findings of this study are available on request. ## Acknowledgements We thank Andrew Hearin, Sihan Yuan and Antonela Taverna for useful discussions. BVG, SA, VGP have been or are supported by the Atraccion de Talento Contract no. 2019-T1/TIC-12702 granted by the Comunidad de Madrid in Spain. This work has also been supported by Ministerio de Ciencia e Innovacion (MICINN) under the following research grants: PID2021-122603NB-C21 (BVG, VGP, GY), PID20201-123012NB-C41 (main support for SA), PID2021-123012NB-C43 (BVG) and PGC2018-094773-B-C32 (BVG, SA). IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. The UNIT simulations have been run in the Mare-Nostrum Supercomputer, hosted by the Barcelona Supercomputing Center, Spain, under the PRACE project number 2016163937
2310.11813
Determining the Origin of Very-high-energy Gamma Rays from Galactic Sources by Future Neutrino Observations
Recently, the Large High Altitude Air Shower Observatory (LHAASO) identified 12 $\gamma$-ray sources emitting gamma rays with energies above 100 TeV, making them potential PeV cosmic-ray accelerators (PeVatrons). Neutrino observations are crucial in determining whether the gamma-ray radiation process is of hadronic or leptonic origin. In this paper, we study three detected sources, LHAASO J1908+0621, LHAASO J2018+3651, and LHAASO J2032+4102, which are also the most promising galactic high-energy neutrino candidate sources with the lowest pre-trial p-value based on the stacking searches testing for excess neutrino emission by IceCube Neutrino Observatory. We study the lepto-hadronic scenario for the observed multiband spectra of these LHAASO sources considering the possible counterpart source of the LHAASO sources. The very-high-energy gamma rays are entirely attributed to the hadronic contribution, therefore the most optimistic neutrino flux can be derived. Then, we evaluate the statistical significance (p-value) as a function of the observation time of IceCube and the next-generation IceCube-Gen2 neutrino observatory respectively. Our results tend to disfavor that all gamma rays above $100\,\rm GeV$ from LHAASO J1908+0621 are of purely hadronic origin based on current IceCube observations, but the purely hadronic origin of gamma rays above $100\,\rm TeV$ is still possible. By IceCube-Gen2, the origin of gamma rays above $100\,\rm TeV$ from LHAASO J1908+0621 can be further determined at a $5\sigma$ significance level within a running time of $\sim 3$ years. For LHAASO J2018+3651 and LHAASO J2032+4102, the required running time of IceCube-Gen2 is $\sim 10$ years ($3\sigma$) and $\sim 10$ years ($5\sigma$), respectively. Future observations by the next-generation neutrino telescope will be crucial to understanding the particle acceleration and radiation processes inside the sources.
Bo-Heng Song, Tian-Qi Huang, Kai Wang
2023-10-18T09:09:25Z
http://arxiv.org/abs/2310.11813v2
Determine the Origin of Very-high-energy Gamma Rays from Galactic Sources by the Prospect of Observing Neutrinos

###### Abstract

Recently, the Large High Altitude Air Shower Observatory (LHAASO) identified 12 \(\gamma\)-ray sources emitting gamma rays with energies above 100 TeV, making them potential PeV cosmic-ray accelerators (PeVatrons). Neutrino observations are crucial in determining whether the gamma-ray radiation process is of hadronic or leptonic origin. In this paper, we study three detected sources, LHAASO J1908+0621, LHAASO J2018+3651, and LHAASO J2032+4102, which are also the most promising galactic high-energy neutrino candidate sources with the lowest pre-trial p-values based on the stacking searches testing for excess neutrino emission by the IceCube Neutrino Observatory. We study the lepto-hadronic scenario for the observed multiband spectra of these LHAASO sources, considering the possible counterpart sources of the LHAASO sources. The very-high-energy gamma rays are entirely attributed to the hadronic contribution, therefore the most optimistic neutrino flux can be derived. Then, we evaluate the statistical significance (p-value) as a function of the observation time of IceCube and of the next-generation IceCube-Gen2 neutrino observatory, respectively. We find that IceCube-Gen2 can determine, at a \(5\sigma\) significance level and within a running time of \(\sim 10\) months, whether the gamma rays from LHAASO J1908+0621 originate entirely from the hadronic process or at most partially from it. For LHAASO J2018+3651 and LHAASO J2032+4102, the required running time is \(\sim 10\) years (\(3\sigma\)) and \(\sim 4\) years (\(5\sigma\)), respectively. Future confirmation by the next-generation neutrino telescope will be crucial to understanding the particle acceleration and radiation processes inside the sources. High-energy astrophysics; Gamma-rays; Supernova remnants; Pulsars; Neutrino

## 1 Introduction

The origin of high-energy cosmic rays (CRs) has been a long-standing question in particle astrophysics. The cosmic-ray spectrum is typically described by a power law with an index of \(\sim 2.7\) up to the so-called "knee" at around 3 PeV, beyond which the spectrum softens (Abbasi et al., 2018). This suggests the existence of powerful astrophysical proton accelerators in our Galaxy, which can accelerate protons to energies up to a PeV, commonly referred to as "PeVatrons". Potential galactic PeVatrons can be identified through the detection of very-high-energy (VHE, \(>100\) GeV) gamma rays and have been explored by ground-based telescopes, such as H.E.S.S. (High Energy Stereoscopic System) (HESS Collaboration et al., 2016; Abdalla et al., 2018), MAGIC (Major Atmospheric Gamma Imaging Cherenkov) (Acciari et al., 2020), HAWC (High-Altitude Water Cherenkov) (Albert et al., 2020), and LHAASO (Large High Altitude Air Shower Observatory) (Cao et al., 2021, 2023). In particular, LHAASO can capture gamma rays with energies from hundreds of GeV to beyond a PeV (LHAASO collaboration et al., 2010), and its sources are strong candidates for galactic PeVatrons. However, the origin of VHE gamma rays is still under debate. The VHE gamma rays can be produced through the decay of pions which are generated by hadronic processes between the accelerated high-energy cosmic rays and the surrounding medium.
An alternative scenario is leptonic processes, such as inverse Compton scattering and bremsstrahlung of the accelerated high-energy electrons, which can also produce high-energy gamma rays. Therefore, confirming the gamma-ray origin is crucial to identifying the composition of accelerated particles and the radiation processes for PeVatrons. A significant probe to determine the VHE gamma-ray origin is the high-energy neutrino, which is concomitantly produced with gamma rays of hadronic origin. Therefore, the detection or non-detection of high-energy neutrinos can be a diagnosis of the hadronic or leptonic origin of VHE gamma rays. Very recently, neutrino emission from the galactic plane has been identified at the \(4.5\sigma\) level of significance by IceCube Neutrino Observatory (Abbasi et al., 2023), which implies that galactic sources can generate high-energy neutrinos. In addition, for the LHAASO sources, Abbasi et al. (2023) conducted stacking searches testing for excess neutrino emission from 12 LHAASO sources which are identified with emissions above 100 TeV (Cao et al., 2021) and thought of as PeVatron candidates. Although no significant neutrino emissions were found, three LHAASO sources, i.e., LHAASO J1908+0621, LHAASO J2018+3651, and LHAASO J2032+4102, present the lowest p-values, making them as the promising sources to identify the possible neutrino emission in the future and then judge the origin of VHE gamma rays. Due to the complex spatial morphology of three LHAASO sources, the gamma-ray counterparts for these sources are still uncertain, which can be supernova remnants (SNRs), pulsar wind nebulae (PWNe), or young massive star clusters (YMCs). SNRs are widely thought of as the primary galactic cosmic-ray sources, and the particles can be accelerated by diffusive shock acceleration in their forward shocks generated by the interaction of supernova ejecta with the interstellar medium (ISM). The production of gamma rays and high-energy neutrinos from interactions of accelerated CR protons and nuclei with ambient medium (Gabici & Aharonian, 2007). CR protons can also be accelerated and trapped in PWNe and YMCs and then produce gamma rays and neutrinos (Di Palma et al., 2017). In this paper, we collect the multiband spectra observed from the direction of these sources and consider the possible counterpart sources of the LHAASO sources. With the proposed theoretical scenario, the multiband spectral modeling is implemented and VHE gamma rays are mainly attributed to the hadronic process. With the most optimistic neutrino production in the sources, we evaluate the statistical significance as the observational time for three sources, i.e., LHAASO J1908+0621, LHAASO J2018+3651, and LHAASO J2032+4102, using the IceCube and next-generation IceCube-Gen2 neutrino observatory respectively. The remaining part of this paper is organized as follows. In section 2, we obtain the SED of sources through the lepto-hadronic scenario. In section 3, we calculate the corresponding neutrino SED from the hadronic interaction and compare the calculation with the sensitivity of the IceCube-Gen2 observatory. In section 4, we estimate the statistical significance of neutrino signals from LHAASO sources using both the IceCube and the proposed IceCube-Gen2. Finally, section 5 is the discussion and summary. 
## 2 Theoretical Modeling

### LHAASO J1908+0621

Although the nature of LHAASO J1908+0621 is still unknown, it is considered one of the most promising PeVatron candidates in the Galaxy due to its extended bright TeV emission (Cao et al., 2021). LHAASO J1908+0621 is spatially associated with a middle-aged and shell-like supernova remnant, SNR G40.5-0.5 (20-40 kyr; Downes et al., 1980), an energetic gamma-ray pulsar, PSR J1907+0602 (age of 19.5 kyr, distance of \(3.2\pm 0.6\) kpc, spin-down luminosity of \(\sim 2.8\times 10^{36}\) erg/s) (Abdo et al., 2010; Li et al., 2021), and an energetic radio pulsar, PSR J1907+0631 (age of 11 kyr, distance of 7.9 kpc, spin-down luminosity of \(\sim 5\times 10^{35}\) erg/s) (Lyne et al., 2017). Distance estimates place SNR G40.5-0.5 at 3.5 kpc using CO observations (Yang et al., 2006), or at a more distant position of 5.5-8.5 kpc (Downes et al., 1980) or 6.1 kpc (Case & Bhattacharya, 1998) using the \(\Sigma\)-D relation. Besides, an unidentified GeV source, 4FGL J1906.2+0631, is also spatially associated with LHAASO J1908+0621, as reported in Li et al. (2021). In addition, by analyzing the distribution of the CO gas, molecular clouds (MCs) spatially correlated with SNR G40.5-0.5 and with the gamma-ray emission have been identified (Duvidovich et al., 2020; Li et al., 2021). The origin of the gamma-ray emission from LHAASO J1908+0621 is still under debate due to its complex spatial morphology. In principle, the pulsar PSR J1907+0631 is unlikely to be responsible for the gamma-ray emission, since its location lacks gamma-ray emission and is significantly offset from the position of the gamma-ray emission (Duvidovich et al., 2020). The possible scenarios can be summarized as follows. A leptonic component from the PWN powered by PSR J1907+0602 was initially proposed as the origin of the VHE gamma rays (Abdo et al., 2010). The combination of leptonic and hadronic scenarios has been proposed as well for the origin of the gamma-ray emission in this region (Duvidovich et al., 2020; Crestan et al., 2021; Li et al., 2021; Albert et al., 2022; De Sarkar and Gupta, 2022). The hadronic component is usually related to SNR G40.5-0.5. As suggested in De Sarkar and Gupta (2022), we consider the \(pp\) interaction of escaped protons accelerated by the shock of SNR G40.5-0.5 with the materials of the MCs in the MC region, and a leptonic component from SNR G40.5-0.5, which is located at a distance of 2.37 kpc, similar to Albert et al. (2021) and Cao et al. (2023). The acceleration mechanisms of cosmic rays have been studied for a long time. One of the most popular mechanisms is diffusive shock acceleration related to supernova remnants (Fermi, 1949; Drury, 1983; Schure et al., 2012). Charged particles are accelerated by the shock waves produced by the supernova explosion. The shock wave from the supernova explosion expands and eventually reaches the MCs (Fujita et al., 2009). Subsequently, CRs escape the confinement region and begin to seep into the MCs when the escaping boundary contacts the surface of the MCs. Consequently, the hadronic component comprises the \(\gamma\)-rays produced from the interaction between escaped protons from SNR G40.5-0.5 and cold protons inside the associated MCs (Makino et al., 2019; De Sarkar and Gupta, 2022). We use the semi-analytical formulation by Kelner et al. (2006) to calculate the \(\gamma\)-ray spectra from interactions between injected CR protons and the MC materials that surround SNR G40.5-0.5.
CR protons with an exponential cutoff power law distribution are adopted, i.e., \(N_{p}\propto\gamma_{p}^{-\alpha_{p}}\mathrm{e}^{-\gamma_{p}/\gamma_{p,cut}}\), where \(N_{p}\) is the number of protons in a unite volume and in the energy interval (\(\gamma_{p},\gamma_{p}+d\gamma_{p}\)), \(\gamma_{p}\) is the proton Lorentz factor, \(\alpha_{p}\) is the spectral index and \(\gamma_{p,\mathrm{cut}}\) is the cutoff Lorentz factor. In this work, we invoke two scenarios to be responsible for the origin of VHE gamma-rays. The first scenario (Case 1) is that gamma rays above 100 GeV are totally attributed to the hadronic origin, and the second (Case 2) is that only gamma rays above 100 TeV are contributed by the hadronic process as a possible spectral hardening at higher energies may originate from the hadronic process (Albert et al., 2022) and Case 1 presents a slight spectral deviation from observations above 10 TeV (see next spectral fittings). In addition to the contributions of hadrons discussed above, we also consider the contribution from the leptonic emission of relativistic electrons in the SNR+MC system. We have considered different leptonic radiation mechanisms, such as synchrotron, bremsstrahlung and IC (Blumenthal and Gould, 1970; Baring et al., 1999). To calculate the IC contribution from MCs, we have considered the contribution from the interstellar radiation field (ISRF) model (Popescu et al., 2017) and Cosmic Microwave Background (CMB) (see the left panel of Fig. 1). The spectrum of the electron distribution is assumed to be a single power law with an exponential cutoff as for protons, i.e., \(N_{e}\propto\gamma_{e}^{-\alpha_{e}}\mathrm{e}^{-\gamma_{e}/\gamma_{e,cut}}\), where \(N_{e}\) is the number of electrons in a unite volume and in the energy interval (\(\gamma_{e},\gamma_{e}+d\gamma_{e}\)), \(\gamma_{e}\) is the electron Lorentz factor, \(\alpha_{e}\) is the index and \(\gamma_{e,\mathrm{cut}}\) is the electron cutoff Lorentz factor. The multi-wavelength spectral energy distribution (SED) of the source LHAASO J1908+0621 is shown in Fig. 2 including two different scenarios. We implement a spectral fitting and the adopted model parameters are summarized in Table 1. It is worth noting that in the energy range of 1-10 TeV, we give priority to the HAWC data rather than H.E.S.S. data during the spectral fitting. In terms of the additional component in the MeV-GeV range shown in Fig. 2, a consistent emission from the bremsstrahlung process is derived for Case 1, while for Case 2, the bremsstrahlung radiations move to higher energy band and therefore a direct fitting result from Li et al. (2021) is adopted to avoid involving the second population of electrons or protons. For two cases, the total energies of the injected protons required during spectral fittings are 1.1 \(\times\) 10\({}^{47}\) or 5 \(\times\) 10\({}^{46}\) erg, respectively (see Table 1). Both values are lower than the usual \(1-10\%\) of the kinetic energy released in SNRs (typically, E\({}_{\mathrm{SN}}=10^{51}\) erg) (Aharonian et al., 2004). This could be attributed to the SNR age or the strength of the magnetic field in the SNR+MCs system. The adopted magnetic field strength and the MC number density are the same as in Albert et al. (2022). ### Lhaaso j2018+3651 MGRO J2019+37, the MILAGRO counterpart of LHAASO J2018+3651, is one of the brightest sources in the sky at TeV energies. This source is suspected to be associated with the GeV pulsar J2021+3651 (Abdo et al., 2009). 
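Returning to the proton spectrum adopted above for LHAASO J1908+0621, the sketch below shows how an exponential-cutoff power-law distribution can be normalised to a quoted total energy \(W_{p}\) (Table 1). The use of the total energy \(\gamma m_{p}c^{2}\) per proton (rather than the kinetic energy) and the Case 1 numbers are illustrative assumptions.

```python
import numpy as np

M_P_C2_ERG = 1.503e-3            # proton rest energy ~938.272 MeV expressed in erg

def ecpl(gamma, alpha, gamma_cut):
    """Unnormalised N_p(gamma) ~ gamma^-alpha * exp(-gamma/gamma_cut)."""
    return gamma ** (-alpha) * np.exp(-gamma / gamma_cut)

def amplitude_for_wp(w_p_erg, alpha, gamma_cut, gamma_min, gamma_max=1e9):
    """Constant A such that A * integral( gamma * m_p c^2 * N_p(gamma) dgamma ) = W_p.
    The *total* energy gamma*m_p*c^2 per proton is used here; the modelling in the
    text may instead normalise to the kinetic energy."""
    gamma = np.logspace(np.log10(gamma_min), np.log10(gamma_max), 4000)
    energy_per_a = M_P_C2_ERG * np.trapz(gamma * ecpl(gamma, alpha, gamma_cut), gamma)
    return w_p_erg / energy_per_a

# Illustrative numbers from Table 1, Case 1: alpha_p = 1.4, E_cut = 126 TeV,
# E_min = 10 GeV, W_p = 1.1e47 erg.
gamma_cut = 126e12 / 938.272e6
gamma_min = 10e9 / 938.272e6
print(amplitude_for_wp(1.1e47, 1.4, gamma_cut, gamma_min))
```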
The estimated distance of PSR J2021+3651 ranges from 2 to 12 kpc (Hou et al., 2014). Here we adopt a distance of 1.8 kpc as the same as in Fang et al. (2020) and the radius of its PWN is 24.6 pc. The PWN G75.2+0.1 of the pulsar PSR J2021+3651 is treated as a source of radiations of LHAASO J2018+3651. In the PWN scenario, the leptonic origin for the multi-wavelength emissions of LHAASO J2018+3651 can be naturally expected. For the leptonic processes including synchrotron radiations and IC scatterings, the same electron distribution as used for LHAASO J1908+0621 is adopted, i.e., a single power law with an exponential cutoff. For the IC scatterings, as shown in the middle panel of Fig. 1, the contributions from the ISRF model and CMB are taken into account as well. In fact, apart from gamma rays produced by leptons, there may also be hadronic components involved. A portion of the pulsar's spin-down power can be converted into a stream of nuclei and nuclei can be accelerated in pulsar magnetospheres (Hoshino et al., 1992; Arons and Tavani, 1994; Gallant and Arons, 1994). Accelerated nuclei can undergo photodisintegration when they collide with low-energy photons generated in the nonthermal radiation fields of the pulsar's outer magnetosphere. This process results in the release of energetic neutrons that subsequently decay either within or outside the Nebula. When the protons resulting from neutron decay collide with the matter within the nebula, they generate gamma-rays and neutrinos (Bednarek and Protheroe, 1997; Liu and Wang, 2021). Therefore, in addition to the IC process, the hadronic components can contribute \(\gamma\)-rays in the PWN scenario. An exponential cutoff power Figure 1: Radiation field of each source. The red dashed line is the cosmic microwave background (CMB) energy density. The grey curve is the sum of CMB and interstellar radiation field (ISRF) at diverse galactocentric positions. Figure 2: Multi-wavelength SED and fitting for LHAASO J1908+0621. Datapoints obtained from different observations by LHAASO (red) (Cao et al., 2021), H.E.S.S. (green) (Aharonian et al., 2009), HAWC (black) (Abeysekara et al., 2020), _Fermi_-LAT (blue (Li et al., 2021) are shown in the figure. The XMM-Newton upper limit obtained from Pandel (2015) is shown in purple. The XMM-Newton upper limits obtained from Li et al. (2021) and Crestan et al. (2021) are shown in brownness and light blue respectively. (a) The synchrotron (Orange dashed), bremsstrahlung (purple dotted), and IC (brown dot-dashed) components are shown. Also the pink dotted line corresponds to the hadronic component from SNR G40.5-0.5 shown. The sum of these components is shown with a black solid line. (b) The gray dashed line shows an additional component of the MeV-GeV range (Li et al., 2021). The synchrotron (Orange dashed, covered by the solid black line), bremsstrahlung (purple dotted), IC (brown dot-dashed) components are shown. Also, the pink dotted line corresponds to the hadronic component from SNR G40.5-0.5 shown. The sum of these components is shown with a black solid line as well. The model parameters of the two panels are summarized in Table 1. law distribution of protons is employed as the same as used for LHAASO J1908+0621. The multi-wavelength SED of LHAASO J2018+3651 and the spectral modeling are shown in Fig. 3(a) and the adopted parameters are summarized in Table 2. 
Several studies have associated LHAASO J2018+3651, the counterpart of HAWC J2019+368 or MGRO J2019+37, with the PWN G75.2+0.1 powered by PSR J2021+3651 (Albert et al., 2021; Fang et al., 2020). They explain the multiband non-thermal emission via synchrotron radiation and inverse Compton scattering in the leptonic scenario (also see Woo et al. (2023)). Hou et al. (2014) respectively employed the separated leptonic model or hadronic model responsible for the VHE gamma rays. Here, we adopt a lepto-hadronic model to interpret multiband emission. The ambient proton density of the PWN is still unknown (Beacom and Kistler, 2007) and we adopt 1 cm\({}^{3}\), which is the same as the average density of the interstellar medium. The magnetic field is consistent with Albert et al. (2021). ### LHAASO J2032+4102 The extended TeV gamma-ray source ARGO J2031+4157 (or MGRO J2031+41), the counterpart of LHAASO J2032+4102, is positionally coincident with the Cygnus Cocoon. No significant changes in morphology or spectrum have been observed for this extensive region (Ackermann et al., 2011). However, the energy spectrum from 1 GeV to 10 TeV suggests that the Cygnus Cocoon might either be an unidentified SNR or that the particle acceleration within a superbubble is similar to that within an SNR (Bartoli et al., 2014). Although PSR J2032+4127 may contribute to the multiband emission of the source LHAASO J2032+4102, we choose an unknown SNR+MC system as the potential single source to power multiband observations from the LHAASO J2032+4102 region since the observed extended X-ray emission may be the counterpart of the TeV emission (Bartoli et al., 2014). The magnetic field \(B\approx 3\,\mu\)G is involved in order to be consistent with previous works (Horns et al., 2007). For the IC scatterings, the seed photons from the ISRF model and CMB are considered (see right panel of Fig. 1). Additionally, we attribute the observed \(\gamma\)-rays to the decay of \(\pi_{0}\) mesons generated through inelastic collisions between accelerated protons and target gas in an unidentified SNR+MC system as for LHAASO 1908+0621. Both the electron and proton distributions are assumed to be a power-law function with an exponential cutoff as above. The multiwavelength SED and spectral modelings of LHAASO J2032+4102 are shown in Figure 3(b) and the adopted parameters are concluded in Table 2. The MC number density is adopted as a value of 30 cm\({}^{-3}\). The total energy (\(\simeq W_{p}\)) is 3 \(\times\) 10\({}^{50}\) erg, which can be reasonably provided by one supernova, which typically releases \(\sim 10^{51}\) erg, and about 10% of which can be transferred to the accelerated particles. ## 3 Neutrino Flux Neutrinos are produced alongside \(\gamma\)-rays in hadronic \(pp\) interactions. 
Therefore, if there are \(\gamma\)-ray sources powered by hadronic interactions, it is also expected that neutrinos will be emitted from the same source region.

\begin{table} \begin{tabular}{c c c c} \hline \hline Component & Parameter & Fig.2 (case 1) & Fig.2 (case 2) \\ \hline Hadronic & Spectral index (\(\alpha_{p}\)) & 1.4 & 1.6 \\ & Minimum energy (E\({}_{p,\rm min}\)) & 10 GeV & 10 GeV \\ & Cutoff energy (E\({}_{p,\rm cut}\)) & 126 TeV & 631 TeV \\ & Total energy (W\({}_{p}\)) & 1.1 \(\times\) 10\({}^{47}\) erg & 5 \(\times\) 10\({}^{46}\) erg \\ & Magnetic field (B) & 30 \(\mu\)G & 3 \(\mu\)G \\ & Number density (n) & 60 cm\({}^{-3}\) & 60 cm\({}^{-3}\) \\ \hline Leptonic & Spectral index (\(\alpha_{e}\)) & 2.8 & 1.6 \\ & Minimum energy (E\({}_{e,\rm min}\)) & 511 MeV & 511 MeV \\ & Cutoff energy (E\({}_{e,\rm cut}\)) & 19.4 TeV & 20.5 TeV \\ & Total energy (W\({}_{e}\)) & 1.6 \(\times\) 10\({}^{48}\) erg & 3.5 \(\times\) 10\({}^{47}\) erg \\ & Magnetic field (B) & 30 \(\mu\)G & 3 \(\mu\)G \\ & Number density (n) & 60 cm\({}^{-3}\) & 60 cm\({}^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: Parameters used during the spectral fitting for LHAASO J1908+0621.

MGRO J1908+06, MGRO J2019+37 and MGRO J2031+41, which are the MILAGRO counterparts of LHAASO J1908+0621, LHAASO J2018+3651, and LHAASO J2032+4102, may be neutrino sources due to their extended nature and hard TeV \(\gamma\)-ray spectrum (Gonzalez-Garcia et al., 2009; Halzen et al., 2017). The IceCube neutrino telescope has conducted searches for point-like source emission in the vicinity of these sources, which further supports this possibility. To calculate the flux of the muon neutrinos reaching the Earth, we use the semi-analytical formulation developed in Kelner et al. (2006). The derived muon neutrino flux, after accounting for neutrino oscillations, is shown in Fig. 4. As shown in Fig. 4, the model predicts a neutrino flux that exceeds the sensitivity limit of the next-generation IceCube-Gen2. This indicates that if the total observed TeV/PeV gamma-ray emission from these LHAASO sources originates from hadronic processes, the accompanying neutrino flux can be identified by IceCube-Gen2. Since no accompanying neutrinos are expected if the VHE gamma rays are produced by leptonic processes, a future detection or non-detection by IceCube-Gen2 can provide further insight and help determine the origin of the VHE gamma rays from these LHAASO sources. ## 4 Future prospects In this section, we evaluate the detection of neutrinos from the three sources by IceCube and IceCube-Gen2. The number of signal events from a point-like source at declination \(\delta_{s}\) is expressed as \[n_{s}=t\int\mathrm{d}E_{\nu}\frac{\mathrm{d}N_{\nu}(E_{\nu})}{\mathrm{d}E_{\nu}}A_{\mathrm{eff}}(E_{\nu},\delta_{s}), \tag{1}\] where \(A_{\mathrm{eff}}\) is the effective area for through-going track events (IceCube Collaboration, 2021). The background events for the three sources are mainly induced by atmospheric neutrinos. Thus, the number of background events is expressed as \[n_{b}=t\int\mathrm{d}\Omega\int\mathrm{d}E_{\nu}I_{\nu,\mathrm{atm}}(E_{\nu},\theta_{z})A_{\mathrm{eff}}(E_{\nu},\theta_{z}), \tag{2}\] where \(I_{\nu,\mathrm{atm}}\) is the atmospheric neutrino flux calculated by MCEq (Fedynitch et al., 2015). Low-energy cuts are applied, selecting neutrino-induced muons with reconstructed energy \(E_{\mathrm{rec}}<50\) TeV (Omeliukh et al., 2021).
Figure 3: (a) Multi-wavelength SED and fitting for LHAASO J2018+3651. The synchrotron (light blue dashed) and IC (light blue dot-dashed) components are shown. The red solid line corresponds to the hadronic component from PWN G75.2+0.1. The sum of the IC and hadronic components is shown with a black solid line. VHE gamma-ray data points are obtained from eHWC J2019+368 (Abeysekara et al., 2020), VER J2019+368 (Abeysekara et al., 2018), TASG J2019+368 (Amenomori et al., 2021), and LHAASO (Cao et al., 2021). The X-ray observation is obtained by _Suzaku_ (Mizuno et al., 2017). (b) Multi-wavelength SED and fitting for LHAASO J2032+4102. The synchrotron (light blue dashed) and IC (light blue dot-dashed) components are shown, and the pink solid line corresponds to the hadronic component. Gamma-ray data points are obtained from 3HWC J2031+415 (Albert et al., 2020), TeV J2032+4130 (Aliu et al., 2014), MGRO J2031+41 (Abdo et al., 2007), ARGO J2031+4157 (Bartoli et al., 2014), TASG J2032+414 (Amenomori et al., 2021), LHAASO (Cao et al., 2021), and Fermi-LAT (Ackermann et al., 2011). The radio flux of a possible nonthermal extended radio source at \(\lambda=20\) cm (Paredes et al., 2006) is assumed to be an upper limit on the actual radio emission. The upper limits between 0.5–5 keV (CX, Chandra) and 20–40 keV (ISGRI, INTEGRAL) are taken from Butt et al. (2006). The model parameters of the two panels are summarized in Table 2.

The reconstructed energy distribution of neutrino events follows the smearing matrix released with the ten-year muon-track data (IceCube Collaboration et al., 2021). The effective area for IceCube-Gen2 is assumed to be 7.5 times that of IceCube (Schumacher et al., 2022). We estimate the statistical significance of an observation with a p-value analytically expressed as (The ATLAS Collaboration, 2011; Halzen et al., 2017) \[p_{\rm value}=\frac{1}{2}\left[1-\mathrm{erf}\left(\sqrt{q_{0}^{\rm obs}/2}\right)\right], \tag{3}\] where \[q_{0}^{\rm obs}=2\left[Y_{b}-N_{D}+N_{D}\ln\left(\frac{N_{D}}{Y_{b}}\right)\right], \tag{4}\] \(Y_{b}\) is the expected number of background events, and \(N_{D}\) is the median of the Poisson-distributed event count containing both signal and background. The event numbers are counted within the solid angle \(\Omega=1.6\sigma\), where \(\sigma\) is the angular resolution of the detector. This solid angle contains 72% of the signal from a point-like source (Alexandreas et al., 1993). The angular resolution \(\sigma\) of IceCube is assumed to be the median angular uncertainty of the muon-track events (\(E_{\rm rec}<50\,\mathrm{TeV}\)) observed within the declination band \(\delta_{s}\pm 1^{\circ}\). Omeliukh et al. (2021) report the angular resolutions for low-energy samples observed by IceCube-Gen2 (see Table 2 therein), but only three zenith bins are offered. Thus, we extrapolate the angular resolution to the source declination \(\delta_{s}\) as \(\sigma(\delta_{s})=\sigma_{0}(\delta_{s})+0.05^{\circ}\), where \(\sigma_{0}\) represents the angular resolution of IceCube-Gen2 for 10 TeV muons (see Figure 24 in Aartsen et al. (2021)). The results for the statistical significance are reported in Fig. 5. LHAASO J1908+0621 will be detected at the 5\(\sigma\) level in less than 10 months for Case 1, while for Case 2 it will be detected at the 5\(\sigma\) level in less than 4 years with the IceCube-Gen2 detector, considering the relevant parameters reported in Table 1.
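The event-count and significance estimates of Eqs. (1)–(4) can be reproduced schematically with a short numerical integration; in the sketch below the signal spectrum, effective area and background rate are toy placeholders rather than the published IceCube/Gen2 response functions or the MCEq atmospheric flux.

```python
# Hedged sketch of Eqs. (1)-(4): expected events and one-sided significance.
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import erf

def expected_signal(E, dNdE, A_eff, livetime_s):
    """Eq. (1): n_s = t * int dE dN/dE(E) A_eff(E), tabulated on the grid E.
    The background of Eq. (2) adds a solid-angle integral over the search cone."""
    return livetime_s * trapezoid(dNdE * A_eff, E)

def p_value(Y_b, n_s):
    """Eqs. (3)-(4) with N_D taken as the median count Y_b + n_s."""
    N_D = Y_b + n_s
    q0 = 2.0 * (Y_b - N_D + N_D * np.log(N_D / Y_b))
    return 0.5 * (1.0 - erf(np.sqrt(q0 / 2.0)))

# Toy spectrum and effective area (placeholders, not detector data)
E = np.logspace(0, 3, 200)                            # neutrino energy in TeV
dNdE_sig = 1e-12 * E**-2.0 * np.exp(-E / 300.0)       # per TeV cm^2 s
A_eff = 1e4 * np.sqrt(E)                              # cm^2
year = 3.15e7                                         # seconds per year
for t_yr in (1, 4, 10):
    n_s = expected_signal(E, dNdE_sig, A_eff, t_yr * year)
    print(f"{t_yr:>2d} yr: p = {p_value(Y_b=2.0 * t_yr, n_s=n_s):.2e}")
```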
The sources LHAASO J2018+3651 and LHAASO J2032+4102 could be detected at the 3\(\sigma\) level in \(\sim\)10 years and the 5\(\sigma\) level in \(\sim\)4 years, respectively, considering the relevant parameters reported in Table 2. However, no significant detection is expected with IceCube. ## 5 Discussion and Summary Galactic high-energy neutrinos have long been expected. Galactic sources detected by LHAASO are considered potential PeVatrons since gamma rays with energies above 100 TeV have been detected from them. However, it is still under debate whether the VHE gamma rays are produced by the leptonic process or the hadronic process. Neutrino observations can be an important probe to distinguish the origin of VHE gamma rays, since the hadronic process (\(pp\) collisions or the photomeson production process) will produce accompanying high-energy neutrinos. In this paper, we investigate the multiband spectra of three LHAASO sources, i.e., LHAASO J1908+0621, LHAASO J2018+3651, and LHAASO J2032+4102, which are the most promising galactic neutrino sources. We propose plausible lepto-hadronic scenarios for the multiband spectral modeling. Assuming the gamma rays are entirely hadronic, we calculate the most optimistic muon neutrino flux generated by the hadronic process. Furthermore, we estimate the statistical significance (p-value) as a function of time for the three sources using both the IceCube neutrino observatory and the proposed second-generation IceCube-Gen2. Our results indicate that LHAASO J1908+0621 can be detected at a 5\(\sigma\) level within 10 months for Case 1, and within 4 years for Case 2, by IceCube-Gen2. Similarly, high-energy neutrinos from LHAASO J2018+3651 and LHAASO J2032+4102 can be respectively detected at a 3\(\sigma\) level in \(\sim\)10 years and a 5\(\sigma\) level in \(\sim\)4 years if the VHE gamma rays are entirely hadronic. However, no significant detection is expected with the IceCube detector within the next 10 years. Future observations by IceCube-Gen2 or other more advanced next-generation neutrino telescopes at the positions of the three sources will be important to disentangle the exact nature of these enigmatic sources.

Figure 5: The statistical significance (p-value) as a function of observation time for the three sources. The blue and orange lines represent the p-values obtained using IceCube and IceCube-Gen2, respectively. The starting point of the blue line is the ten-year (2008-2018) neutrino source search by IceCube.

## Acknowledgments We acknowledge support from the National Natural Science Foundation of China under grant No. 12003007 and the Fundamental Research Funds for the Central Universities (No. 2020kfyXJJS039).
2306.12424
VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution
We introduce VisoGender, a novel dataset for benchmarking gender bias in vision-language models. We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas, where each image is associated with a caption containing a pronoun relationship of subjects and objects in the scene. VisoGender is balanced by gender representation in professional roles, supporting bias evaluation in two ways: i) resolution bias, where we evaluate the difference between pronoun resolution accuracies for image subjects with gender presentations perceived as masculine versus feminine by human annotators and ii) retrieval bias, where we compare ratios of professionals perceived to have masculine and feminine gender presentations retrieved for a gender-neutral search query. We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes. While the direction and magnitude of gender bias depends on the task and the model being evaluated, captioning models are generally less biased than Vision-Language Encoders. Dataset and code are available at https://github.com/oxai/visogender
Siobhan Mackenzie Hall, Fernanda Gonçalves Abrantes, Hanwen Zhu, Grace Sodunke, Aleksandar Shtedritski, Hannah Rose Kirk
2023-06-21T17:59:51Z
http://arxiv.org/abs/2306.12424v3
# VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution ###### Abstract We introduce VisoGender, a novel dataset for benchmarking gender bias in vision-language models. We focus on occupation-related gender biases, inspired by Winograd and Winogender schemas, where each image is associated with a caption containing a pronoun relationship of subjects and objects in the scene. VisoGender is balanced by gender representation in professional roles, supporting bias evaluation in two ways: i) _resolution bias_, where we evaluate the difference between gender resolution accuracies for men and women and ii) _retrieval bias_, where we compare ratios of male and female professionals retrieved for a gender-neutral search query. We benchmark several state-of-the-art vision-language models and find that they lack the reasoning abilities to correctly resolve gender in complex scenes. While the direction and magnitude of gender bias depends on the task and the model being evaluated, captioning models generally are more accurate and less biased than CLIP-like models. Dataset and code are available at [https://github.com/oxai/visogender](https://github.com/oxai/visogender) ## 1 Introduction Vision-language models (VLMs) are advancing rapidly and reaching ever-wider audience across numerous applications, such as classification and captioning, as well as text-to-image retrieval and generation. However, these models are pre-trained from uncurated image-text pairs scraped from the internet [1; 2] and so, their outputs can perpetuate or amplify societal biases [3; 4; 5; 6]. How the VLM is used determines the mechanisms of bias transfer from pre-training to risks of representational and/or allocational harm [7]. For example, a VLM used for image retrieval may skew towards returning more images of male doctors, thus entrenching societal perceptions on the association between gender and career success; or a VLM used for captioning may more frequently misgender women or non-binary image subjects, aligning with a capability (un)fairness [8]. Despite the growing body of work on evaluating and mitigating the bias of VLMs [9; 10; 11; 12; 13], there is a dearth of specifically-designed benchmark datasets to evaluate the presence of societal biases across downstream tasks - such as captioning or image retrieval, with the majority of work measuring biases with pre-existing image datasets such as FairFace [14] or COCO [15; 16], despite their limited real-world transferability and spurious correlations [12; 17]. In this paper, we introduce the VisoGender benchmark for evaluating bias in VLMs. The design of VisoGender is inspired by two prior bodies of works. Firstly, we apply stress-testing of vision-linguistic reasoning capabilities of VLMs as in the Winoground benchmark [18] but introduce the dimension of societal biases. Secondly, we adopt the templated structure to test gender bias in occupational pronoun resolution from NLP research, specifically the WinoGender [19] and WinoBias [20] frameworks, in turn inspired by Winograd Schema [21], but apply it to the vision-language domain. To our knowledge, VisoGender is the first dataset to combine both of these contributions by stress-testing _gender bias_ in visual-linguistic reasoning and coreference resolution capabilities. VisoGender contains images of a person in an occupation ("the doctor"), combined with either an object ("the stethescope") or a participant ("the patient"). 
Each image is annotated for the gender of the occupation and/or the participant, and the dataset is balanced across different genders occupying these roles. Using these components, we construct groundtruth natural language captions containing a possessive pronoun relationship ("the doctor and his/her patient"). We test bias in two tasks (see Fig. 1): pronoun resolution and image retrieval. In the resolution task, the model is provided with a single image (either of an occupation-object or occupation-participant scene) and ranks the likelihood of captions containing different gender pronouns. There are varying levels of difficulty in the resolution task - from single-person resolution in the occupation-object case, to two-person resolution in the occupation-participant case, where either both subjects are the same gender (easier) or opposite genders (harder). In the retrieval task, the model is provided with a single gender-neutral caption and must retrieve images from a set containing different genders of the occupation. We measure resolution bias using the gender accuracy gap in correct pronoun resolution (corresponding to capability fairness) and retrieval bias using commonly-used metrics such as Bias@K, MaxSkew and NDKL (corresponding to representational fairness). We present preliminary results for six state-of-the-art CLIP-like models [1; 22; 2; 23; 24; 25] and two state-of-the-art captioning models [26; 27]. We find that models still struggle to correctly infer the pronoun, especially when there are two people of different genders in the image, where performance is close to random. Our benchmark also reveals that models display substantial accuracy gaps between men and women subjects, indicating the presence of resolution bias, and predominantly rank images of men higher than women, indicating a retrieval bias. We compare these results to US labor force statistics (as in [28; 20; 19]) and find some correlations between model bias and societal occupational skew. Our findings demonstrate there is still substantial progress to be made in improving resolution capabilities, as well as in reducing the gender gap in resolution performance and retrieval outcomes. The pace at which VLMs are developed is only set to grow in coming years - VisoGender provides a much-needed benchmark to evaluate their potential downstream harms before large-scale deployment. ## 2 Related Works **Bias in coreferences in NLP** Coreference resolution aims to identify which mentions in a natural language document link to the same real-world entity [29]. In the past decade, significant progress has been made moving from rule-based systems and expert knowledge [30], to statistical models [31; 32] and deep neural networks [33; 34; 35; 36; 37]. Pronoun resolution involves linking a noun such as "doctor" to a pronoun in the sentence. Biases have been identified with respect to machine translation [38], non-binary pronouns [39], and favouring masculine entities when resolving gender-ambiguous cases [40]. Our work is most similar to gender pronoun resolution tasks based on Winograd schemas [21], like Winogender [19] and WinoBias [20], which investigate occupation-related biases.

Figure 1: **Resolution of gender pronouns and retrieval with a neutral query.** We resolve gender by (i) using zero-shot classification with CLIP-like models, and (ii) next-token prediction with captioning models, such as BLIP. We have an additional simpler task to resolve the gender of a single person, e.g. with a template "The doctor and her / his stethoscope".
Both of these works demarcate "hard" and "easy" cases based on (anti-)stereotypical gender-occupation associations as measured relative to US labour force statistics. We extend this work to the vision-language domain. In our resolution task, we modify the typical Winograd scheme because the correct resolution is unambiguous, i.e., there is a correct caption (and pronoun) for a corresponding image. However, our retrieval task is a closer vision-language analogy to [19; 20] because there is no groundtruth for a "correct" ranking of images given a gender-neutral search query. **Evaluating visual reasoning** There is an emerging body of work on visual reasoning tasks [41], such as VQA [42; 43; 44], visual word sense disambiguation [45], compositionality understanding [46; 47; 48], comprehension [49] or visual entity linking [50]. Most similar to our work, Winoground [18] evaluates vision-linguistic compositional reasoning by tasking a model to match two images with two captions containing the same set of words, only in a different order - such as "there is a mug in some grass" vs. "there is some grass in a mug". The task is challenging, with state-of-the-art VLMs rarely performing better than chance, though [51] demonstrate some of these failures may be due to atypical images in the dataset. Our vision-linguistic stress-tests are inspired by adapting Winoground to societal biases, but a key difference is that our caption-image pairs do not contain the exact same set of words - for example, matching "the doctor and her patient" versus "the doctor and his patient". **Measuring bias in vision-language models.** Measuring the societal bias of VLMs is a growing area of research. While early works measure misclassification rates into harmful categories [9; 52], more recent methods investigate face-to-text retrieval [11; 10; 53; 54] or captioning [55]. However, these approaches rely on off-the-shelf datasets, such as COCO [16], which have been shown to contain spurious correlations [17] and thus are not suitable for evaluating model bias [12]. Similar to [12], we balance our dataset by gender across different occupational settings, but instead use naturally-occurring images rather than synthetic edits. ## 3 The VisoGender Benchmark The VisoGender dataset contains 690 images of people in various occupational settings, where each image is annotated for the inferred groundtruth gender of the subject(s) in the image. We use these annotations to construct a templated caption of a correct pronoun relationship. The dataset covers 23 unique occupations in a hierarchical taxonomy. Each occupation appears in the dataset with two template forms - either as a single person in the image with a possessive pronoun to an object ("the doctor and his/her stethoscope"), or as one of two people in the image with a possessive pronoun to a participant ("the doctor and his/her patient") (see Sec. 3.1). A summary of the dataset is presented in Tab. 1. In the following subsections, we first present further details of the templates (Sec. 3.1). We then introduce the two types of VLMs which are compatible with VisoGender (Sec. 3.3), and finally, define the two tasks in which we measure model bias (Sec. 3.4).
### Templates Each templated caption contains three components, adapted from [19]: * Occupation: a person referred to by an occupational noun and definite article, "the doctor" * Pronoun: a pronoun corresponding to the ground truth gender of the occupation in the image, e.g., "her" or "his" * either Object: a noun corresponding to typical professional items, e.g., "the stethoscope" * or Participant: a second person in a typical professional relationship with the occupation, e.g., "the patient". For occupations, we use the list from [19], but remove (i) occupations without a clear possessive pronoun relationship between the occupation and participant, e.g., "the plumber and their houseowner" is not semantically correct, and (ii) occupations without sufficient open-domain images across genders (for both men and women occupying the occupation and participant roles). We classify the remaining occupations into a hierarchical taxonomy to permit aggregate group-wise analysis: **Sector** describes the general field, and includes _education_, _medical_, _office_, _retail_ and _service_; **Specialisation** describes subcategories within the sector, where for example _services_ includes _food services_, _fashion_, _animal_ or _household_; and finally **Occupations** are nested within specialisations, where for example _food services_ contains _waiter_, _bartender_, and _baker_. Similar to [19; 20; 28], we match US labour force statistics on the percentage of men working in each occupation to compare model biases to occupational societal skew. The full taxonomy and list of occupations are presented in the Supplementary Materials. We also source the list of participants from [19], but replace any references to children as participants and, in some cases, make modifications for a more natural possessive pronoun, e.g., "the lawyer and the witness" becomes "the lawyer and their client". For objects, we manually define a typical professional item for each occupation. Using these components, we construct three templates (subtasks) of increasing difficulty for coreference resolution (a minimal sketch of the caption construction follows this list): * **Single Subject:** The template of captions is "The {occupation} and {his/her} {object}", e.g. _the doctor and her stethoscope_. For each occupation, we collect 10 occupation-object images, 5 for each gender. Here, models only need to resolve the pronoun of one subject in the image, thus testing simple gender detection capabilities. * **Two Subjects of the Same Gender**: The template of the captions is "The {occupation} and {his/her} {participant}", e.g. _the doctor and her patient_. In this case, the gender of the occupation and the participant are the same (both men or both women). Per occupation, we collect 5 images for each of these two cases (M-M, W-W). Here, the model must resolve the gender of two subjects, but assigning which subject is the occupation and which is the participant does not affect the correct pronoun. * **Two Subjects of Different Gender**. Finally, we use the same occupation-participant template but now the participant and the occupation are of opposite genders (one man and one woman). Per occupation, we collect 5 images for each case (M-W, W-M). Here, the model must resolve the gender of the subjects, _and_ infer from image context which is the occupation and which is the participant, to identify the correct pronoun.
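A minimal sketch of the caption construction described in the list above (illustrative only; the released VisoGender code may assemble captions differently):

```python
# Build the templated captions from the annotation fields (sketch).
PRONOUN = {"masculine": "his", "feminine": "her", "neutral": "their"}

def object_caption(occupation: str, gender: str, obj: str) -> str:
    # Single-subject template, e.g. "The doctor and her stethoscope"
    return f"The {occupation} and {PRONOUN[gender]} {obj}"

def participant_caption(occupation: str, gender: str, participant: str) -> str:
    # Two-subject template, e.g. "The doctor and his patient"
    return f"The {occupation} and {PRONOUN[gender]} {participant}"

print(object_caption("doctor", "feminine", "stethoscope"))
print(participant_caption("doctor", "masculine", "patient"))
```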
### Dataset Collection The VisoGender dataset comprises image URLs with annotations for the occupation noun, the participant or object noun, and the inferred groundtruth gender of the occupation and participant. These annotations can be used to reconstruct the templated captions. Data collection was carried out by the authors of the paper from March to May 2023 on a variety of image databases and search providers, such as Pexels and Google Image Search. We followed a set of guidelines specifying exclusion and inclusion criteria, detailed in the Supplementary Materials. We ensure that there are no duplicate images nor invalid URLs across the dataset, i.e., no overlaps between occupations. In the early stages of data collection, we used the entire list of occupations from [19]. However, we only include those with at least 20 viable URLs (5 per gender pair) for occupation-participants and 10 viable URLs for occupation-objects (5 per gender). The image curation process (and availability of viable URLs) depends on the retrieval of different gendered roles across occupational search queries and is therefore compromised by inherent representational biases in these systems. We mitigate the effects of imbalance across genders by only including occupations with a full set of images (equal images across all gender pairs), but this may introduce a sample selection bias to the included occupations. Furthermore, inferring gender from an image depends on ingrained biases of the dataset curators. We discuss limitations and biases of data collection in Sec. 5.1, and possible expansions in the future with more resources, e.g., partnering with a stock photo company. The dataset is accompanied by a Datasheet for Datasets [56] in the Supplementary Materials.
\begin{table} [Tabular body lost in extraction; the summary groups columns under **Categories** (Sect., Spec., Occ., Gender Pairs) and **Number of images** (Images per Occ., Images per Gender Pair, Overall), with rows for **Single person** (occupation-object) and **Two-person** (occupation-participant) images.] \end{table} Table 1: **VisoGender dataset summary**, showing the hierarchy of included Sectors, Specialisations, and Occupations; the gender pairs per template type, and the counts of images within each split of the dataset.

### Two Supported Types of Vision-Language Models VisoGender is designed to accommodate two types of VLMs. Here we discuss their properties, and how bias can be measured in common use cases. **CLIP models** CLIP-like models, first introduced by [1], have separate vision and language encoders and are trained to jointly match images and text. Given an image \(i\in\mathcal{R}^{3\times H\times W}\) and text \(t\), a CLIP model outputs a score \(s(i,t)\) that expresses the degree of compatibility between the image and text. The first common use case of CLIP models is zero-shot classification of images [1, 57, 58]. This is done by providing a query image \(i_{q}\) and text prompts \(t_{n},n\in 1,\ldots,N\). For example, if we wish to zero-shot classify the gender of a doctor in an image using pronoun resolution, we can provide text prompts _"This is an image of a doctor and [his, her] notebook"_, and select the gender with the highest compatibility score to the image. Such a classifier can be considered biased if, for example, it more accurately predicts one gender in some occupation. The second common use case of CLIP models is text-to-image retrieval [1]. Given a text query \(t_{q}\), images \(i_{n},n\in 1,\ldots,N\) and a query size \(K\), we select the \(K\) images with the highest compatibility score to the text prompt. In this setting, the model can be biased if, for example, when searching for a given occupation, people from a given demographic are over- or under-represented in the top \(K\) retrieval results. **Captioning models** Captioning models are most commonly trained to autoregressively predict a text caption given an image. For an image \(i\) and, optionally, a partially completed caption with \(N\) tokens \(c=[t_{1},\ldots t_{N}]\), the model outputs the probability for the next token \(t_{N+1}\) as \(p_{\text{cap}}(t_{N+1}|i,t_{1},\ldots,t_{N})\). Similar to CLIP models, we can apply the captioning model to predict the gender of a subject in an image via pronoun resolution. We first supply a query image \(i_{q}\) (say, an image of a doctor) and a caption \(c_{q}\) like _"An image of a doctor and"_. We then inspect the probability distribution for the next token \(t_{n}\), denoted by \(p_{\text{cap}}(t_{n})=p_{\text{cap}}(t_{n}|i_{q},c_{q})\). We can now compare the probabilities \(p_{\text{cap}}(t_{n}=\text{"her"})\) and \(p_{\text{cap}}(t_{n}=\text{"his"})\), choosing the one with the higher score as the model's _prediction_. It has been demonstrated that comparing token probabilities is a more reliable measure of a generative language model's performance compared to free generation [59], and such templates have been successfully used to evaluate bias in LLMs [28]. ### Two Angles of Model Bias The VisoGender setup has the flexibility to measure model bias in two ways: **Resolution Task** _The resolution task considers a single image with a groundtruth gender label and matches it to multiple candidate captions containing different gender pronouns_. For example, we start with an image containing a female doctor, and specify the set of candidate captions as "the doctor and her/his patient". We define **resolution accuracy**, \(RA\), as the percentage of correctly resolved pronouns.
This can be calculated over all occupations, across main occupation categories, or per occupation. For a given occupation \(o\in O\) and a gender \(g\) (either male \(m\) or female \(f\)), we have: \[RA_{g}(o)=\frac{\text{Number of correctly resolved pronouns of gender $g$ in occupation $o$}}{\text{Total number of pronouns of gender $g$ in occupation $o$}}\] An unbiased outcome is one where the model resolves both gender pronouns equally, i.e., \(RA_{m}(o)=RA_{f}(o),\ \forall o\in O\). We now define **resolution bias** as the gender resolution accuracy gap \[\Delta(o)=RA_{m}(o)-RA_{f}(o), \tag{1}\] where a positive value of \(\Delta\) shows a model more accurately resolves men, and vice versa. Our definition of resolution bias measures a form of capability fairness, i.e., whether a system performs equally well across subgroups [8]. This task is applicable to both types of VLMs. **Retrieval Task** _The retrieval task considers a single gender-neutral caption for a given occupation and matches it to multiple images containing different gender subjects from the same occupation._ For example, we start with the caption "the doctor and their patient" and define the set of candidate images as containing 50% images of doctors who are men and 50% who are women. Given there is no groundtruth for a "correct" ranking of images for a gender-neutral caption, we cannot define a **retrieval accuracy** metric. For defining **retrieval bias**, we use three commonly used metrics - _Bias@K_ [60], _Skew@K_ [61, 11] and _NDKL_ [62, 61]. Bias@K measures the overrepresentation of men in the top K retrieval results. Skew@K measures the difference between the desired proportion of image attributes and the observed one, and MaxSkew@K is the maximum Skew among all attributes, or the "largest unfair advantage" [61] belonging to images of any gender. NDKL is a ranking measure of the distance from a fair distribution. For further definitions and discussions of these, please refer to the Supplementary Materials. Our definition of retrieval bias measures a form of representational fairness, i.e., with a gender-balanced set of images and a gender-neutral caption, whether professionals of each gender have equal chances of being retrieved. The retrieval task is only applicable to CLIP models. ## 4 Results For the resolution task, we evaluate six CLIP-like models - CLIP [1], OpenCLIP [22] (trained on LAION 2B and 400M [2]), SLIP [23], DeCLIP [24], FILIP [25] (the last three trained on YFCC-15M [63]); and two state-of-the-art captioning models - BLIP-2 [26] and GIT [27]. For two candidate models (CLIP and BLIP-2), we go into more detail by investigating their resolution capabilities and resolution biases, which are also compared to U.S. Labor Force Statistics (Sec. 4.1). We ablate the VisoGender setup by changing the order of templates and including a neutral caption. For the retrieval task, we benchmark the same six CLIP-like models as in the resolution task. Captioning models are not compatible with the retrieval task. We also compare retrieval bias metrics with US Labor Force Statistics. For all CLIP models, we use ViT-B/32 encoders, and for GIT we use the GIT-Large model. ### Resolution Task **Evaluating Resolution Capabilities** We present results in Tab. 2, split according to the different levels of difficulty. We report the mean resolution accuracy \(RA_{\text{avg}}\) for each difficulty level, together with the resolution bias or accuracy gap \(\Delta\).
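Concretely, the per-image scoring behind these numbers can be sketched with a CLIP-like model as below; this uses the Hugging Face transformers CLIP interface rather than the released VisoGender evaluation code, and a captioning model would instead compare the next-token probabilities of "his" and "her" as described in Sec. 3.3.

```python
# Hedged sketch: zero-shot pronoun resolution with CLIP and the gap of Eq. (1).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def resolve_pronoun(image: Image.Image, occupation: str, other: str) -> str:
    captions = [f"The {occupation} and his {other}",
                f"The {occupation} and her {other}"]
    inputs = processor(text=captions, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image[0]   # similarity to each caption
    return "his" if logits.argmax().item() == 0 else "her"

def accuracy_gap(samples):
    """samples: iterable of (image, occupation, other_noun, true_pronoun)."""
    hits = {"his": [], "her": []}
    for image, occupation, other, truth in samples:
        hits[truth].append(resolve_pronoun(image, occupation, other) == truth)
    RA_m = sum(hits["his"]) / max(len(hits["his"]), 1)
    RA_f = sum(hits["her"]) / max(len(hits["her"]), 1)
    return RA_m - RA_f                                 # Delta of Eq. (1)
```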
As expected, the resolution accuracy is highest when there is one person in the image, and lowest when there are two people of different gender in the image. The accuracies for the latter are consistently worse than random chance, pointing to the models' inability to reason about scenes with multiple people and the attributes associated with each of them. This confirms the findings of prior works that conclude that VLMs are not capable of visio-linguistic [18] or spatial [65] reasoning. Captioning models are better than or on par with CLIP models for all levels of difficulty. In Fig. 2 we see that BLIP-2 outperforms CLIP on all gender splits of the dataset. From Tab. 2 we also see that models with better zero-shot classification accuracy on Imagenet [64] tend to have a better overall resolution accuracy. **Evaluating Resolution Bias** From Tab. 3 we see that models tend to exhibit a larger resolution accuracy gap as we go to more "difficult" subtasks, such as two people with different genders, where there is higher variation and almost _random_ predictions across models. In Fig. 2 we compare the resolution bias, or accuracy gap, for CLIP and BLIP-2. We see that (i) CLIP shows a larger accuracy gap, and (ii) CLIP is more biased towards correctly resolving pronouns for women, whereas BLIP-2 correctly resolves pronouns for men more often. For further analysis and per-occupation results, see the Supplementary Materials. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multirow{2}{*}{Overall} & \multicolumn{2}{c}{Single Person} & \multicolumn{2}{c}{Two People} & \multicolumn{2}{c}{Same Gender} & \multicolumn{2}{c}{Different Gender} & \multirow{2}{*}{IN acc.} \\ \cline{3-10} & & \(\text{RA}_{\text{avg}}\) & \(\Delta\) & \(\text{RA}_{\text{avg}}\) & \(\Delta\) & \(\text{RA}_{\text{avg}}\) & \(\Delta\) & \(\text{RA}_{\text{avg}}\) & \(\Delta\) & \\ \hline CLIP [1] & 0.75 & 0.92 & -0.14 & 0.57 & -0.27 & 0.79 & -0.18 & 0.36 & -0.35 & 63.2 \\ OpenCLIP\({}_{2B}\) [22] & 0.78 & 0.96 & -0.07 & 0.60 & -0.37 & 0.77 & -0.42 & 0.44 & -0.32 & 66.2 \\ OpenCLIP\({}_{400M}\) [22] & 0.74 & 0.84 & -0.27 & 0.64 & -0.29 & 0.80 & -0.26 & 0.46 & -0.33 & 62.9 \\ SLIP [23] & 0.60 & 0.77 & 0.14 & 0.43 & 0.14 & 0.51 & 0.12 & 0.34 & 0.17 & 34.3 \\ DeCLIP [24] & 0.70 & 0.87 & 0.06 & 0.52 & -0.17 & 0.74 & -0.14 & 0.29 & -0.19 & 43.2 \\ FILIP [25] & 0.45 & 0.41 & 0.06 & 0.49 & 0.36 & 0.49 & 0.36 & 0.50 & 0.37 & 39.5 \\ \hline BLIP-2 [26] & 0.84 & 0.92 & -0.09 & 0.76 & 0.07 & 0.93 & 0.06 & 0.60 & 0.09 & — \\ GIT [27] & 0.84 & 0.96 & -0.07 & 0.72 & -0.27 & 0.97 & -0.07 & 0.48 & -0.47 & — \\ \hline \hline \end{tabular} \end{table} Table 2: **Resolution Bias.** We present the resolution accuracy averaged for male and female subjects, as well as the resolution accuracy gap \(\Delta\), as defined in eq. (1). "Same gender" and "Different gender" are images with two people of the same or different gender, respectively. A positive gap \(\Delta\) denotes better resolution accuracy for men. We also present the reported zero-shot classification accuracy on Imagenet [64] (IN acc.). To interpret the results in a real-world context, we compare U.S. Labor Force Statistics on the proportion of males and females in occupations with resolution bias in Fig. 3. We measure the correlation in the absolute values (with Pearson's R) and the correlation in ranked values (with Kendall-Tau), i.e., testing for the monotonicity of the relationship between model bias and societal skew. While we see no pattern for the bias of CLIP, the resolution accuracy gap of BLIP-2 correlates with the U.S.
proportions - for occupations with fewer women, such as "engineer", the model correctly resolves men more often than women, and vice versa. ### Ablations **Template Flipping** We change the subject of the prompt sentences for images with two people by reordering the _{participant}_ and _{occupation}_, e.g., "The doctor and his patient" becomes "The patient and her doctor". We compare both templates in Fig. 4. While we observe similar trends in the two settings, the resolution accuracy of CLIP is worse when the pronoun refers to the participant. **Neutral Pronoun Resolution** Here we attempt to move away from binary gender classification and introduce a third, neutral pronoun - "their", which is always grammatically correct. In Tab. 3 we see that while BLIP-2 almost never chooses the neutral pronoun, it is selected by CLIP in 17% of all images and 31% of images containing two people of different genders. We also see that for the more difficult settings, the neutral pronoun is selected more frequently, with 31% in the "two people, different gender" setting, which corresponds to almost random chance (33%). Finally, we see that images, where the ground truth pronoun is male, tend to be resolved as neutral more often. ### Retrieval bias We evaluate VLMs on retrieval bias in Tab. 4. We see that _all_ models have positive Bias@5 and Bias@10 values, which suggests that images of men in professional settings are retrieved more often than images of women, despite the candidate images always being gender-balanced. In Fig. 5, we see that most occupations (15 out of 23) are skewed towards men in both Bias@5 and Bias@10. However, we do not find a significant relationship between retrieval bias and the U.S. Labor Force Statistics. For further analysis, please refer to the Supplementary Materials. ## 5 Discussion From our preliminary benchmarking with VisoGender, we summarise several key findings: **Models struggle to resolve pronouns in the presence of both genders** We found that all CLIP-like models show close to random performance on gender resolution when there are people of both genders in the scene. This hints at insufficient visuo-linguistic capabilities in current VLMs. **Captioning models have a higher accuracy and smaller accuracy gap between genders** We find that captioning models outperform CLIP models on all subtasks. We attribute this to the way resolution is done in captioning models - the gender of the subject is extracted using the start of the template and next token prediction. Meanwhile, CLIP models need to rely on a global cls text feature, which seems to not capture the nuanced difference between entities in the sentence. **Resolution and retrieval bias are not in the same direction** Across models, there is not a consistent pattern of bias direction - CLIP models are more accurate at resolving feminine pronouns, while BLIP models are more accurate for masculine pronouns. In contrast, we find that all CLIP models are predominately biased towards retrieving images of men. This highlights a risk of representative harm in deploying CLIP models in image search systems. 
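As a reference for the numbers reported below, the retrieval-bias metrics can be sketched in the commonly used formulations from the fair-ranking literature; the exact definitions adopted here follow the cited references and the Supplementary Materials, so this is indicative rather than a reproduction of the benchmark code.

```python
# Sketch of Bias@K, MaxSkew@K and NDKL for a ranked list of perceived genders.
import numpy as np

def bias_at_k(genders, k):
    """genders: ranked 'm'/'f' labels, best match first. Positive => men over-retrieved."""
    top = genders[:k]
    return (top.count("m") - top.count("f")) / k

def max_skew_at_k(genders, k, desired={"m": 0.5, "f": 0.5}):
    top = genders[:k]
    return max(np.log((top.count(g) / k + 1e-9) / p) for g, p in desired.items())

def ndkl(genders, desired={"m": 0.5, "f": 0.5}):
    """Normalised discounted cumulative KL divergence from the desired gender mix."""
    total, norm = 0.0, 0.0
    for k in range(1, len(genders) + 1):
        top = genders[:k]
        kl = sum((top.count(g) / k + 1e-9) * np.log((top.count(g) / k + 1e-9) / p)
                 for g, p in desired.items())
        total += kl / np.log2(k + 1)
        norm += 1.0 / np.log2(k + 1)
    return total / norm

ranking = list("mmfmfmffmf")   # toy labels of the top retrieved images
print(bias_at_k(ranking, 5), max_skew_at_k(ranking, 5), ndkl(ranking))
```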
\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & Bias@5 & Bias@10 & MaxSkew@5 & MaxSkew@10 & NDKL \\ Model & Mean & \(\sigma\) & Mean & \(\sigma\) & Mean & \(\sigma\) & Mean & \(\sigma\) & Mean & \(\sigma\) \\ \hline CLIP [1] & 0.11 & 0.38 & 0.16 & 0.22 & 0.27 & 0.15 & 0.18 & 0.13 & 0.19 & 0.07 \\ OpenCLIP\({}_{2B}\)[22] & 0.10 & 0.44 & 0.08 & 0.23 & 0.29 & 0.17 & 0.18 & 0.11 & 0.18 & 0.07 \\ OpenCLIP\({}_{400M}\)[22] & 0.17 & 0.47 & 0.11 & 0.22 & 0.33 & 0.18 & 0.16 & 0.13 & 0.19 & 0.07 \\ SLIP [23] & 0.06 & 0.52 & 0.00 & 0.24 & 0.32 & 0.21 & 0.17 & 0.12 & 0.19 & 0.09 \\ DeCLIP [24] & 0.11 & 0.40 & 0.15 & 0.26 & 0.28 & 0.16 & 0.20 & 0.14 & 0.17 & 0.07 \\ FILIP [25] & 0.01 & 0.43 & 0.03 & 0.26 & 0.29 & 0.16 & 0.17 & 0.13 & 0.18 & 0.07 \\ \hline \hline \end{tabular} \end{table} Table 4: **Retrieval Bias.** We present mean and standard deviation across all occupations. Positive Bias@K shows more images of men were retrieved. Figure 4: **Template Flipping** where the subject of the template refers to either the occupation or the participant. Figure 5: **Retrieval bias by occupation for CLIP.** Occupations are ranked by the Bias@5 values. Positive Bias@K shows more images of men were retrieved for the given occupation. ### Limitations **Subjective assessment when creating datasets** The dataset was compiled by a diverse team (in terms of gender and heritage) to potentially mitigate stereotypical perceptions but still inherently relies on subjective perceptions of inferred binary gender. We acknowledge that the identification of gender is a complicated societal matter, in that visual presentations may not reflect self-identity and gender identities occupy a fluid spectrum, misspecified by binary distinctions. We advocate that this work is extended to include more genders and avoid erasure of non-binary individuals represented across occupations [66]. The codebase is designed to be flexible to include neopronouns in the future. **Stacking biases from the internet** We source images from a variety of search platforms (such as Google Image Search) and image hosting sites. While we balance included occupations across genders, those that we leave out are not "missing at random", that is due to biases that already exist in images on the internet. We could not find enough images for some occupations, e.g., there were not 5 images of a female plumber and a male client that met our criteria for data accessibility and format. **Dataset size** The dataset is small compared to those typically used in VLM training. However, this dataset is only intended for evaluation purposes. It was beyond our means to partner with a StockImage provider, such as Getty Images [67], but this could be an avenue in future work to expand dataset size and to counteract some of the aforementioned image availability biases. Future work could also augment the dataset with synthetic data from generative VLMs [12, 68]. ## 6 Conclusion We introduced VisoGender, a novel dataset for benchmarking societal biases in VLMs for both pronoun resolution and retrieval settings. On some parts of the benchmark, we demonstrate that current state-of-the-art models perform no better than random chance, and that they do not perform equally well for resolving gender of men and women, nor give equal retrieval likelihood to images of male or female professionals. 
There is significant headroom for improvement both in the reasoning abilities of VLMs and in the gender gap of their abilities when it comes to complex scenes with multiple humans. We hope this work encourages the benchmarking of future VLMs, so the risk of downstream harms and negative biases can be measured, compared and mitigated. ## 7 Acknowledgements The authors would like to extend a thank you to the following people for their feedback and insight, which were a valuable contribution to the development of VisoGender: Max Bain, Hugo Berg, Seb Wilkes, Juliana Mota, Daniel Kochin and Avishkar Bhoopchand. This work has been supported by the Oxford Artificial Intelligence student society, the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines & Systems [EP/S024050/1] (A.S.), the Economic and Social Research Council Grant for Digital Social Science [ES/P000649/1] (H.R.K.), and the Clarendon Fund in partnership with the St Cross College Scholarship, Oxford (F.G.A). For computing resources, the authors are grateful for support from Jonathan Caton, Google Cloud, and the CURE Programme under Google Brain Research.
2308.12188
Development and external validation of a lung cancer risk estimation tool using gradient-boosting
Lung cancer is a significant cause of mortality worldwide, emphasizing the importance of early detection for improved survival rates. In this study, we propose a machine learning (ML) tool trained on data from the PLCO Cancer Screening Trial and validated on the NLST to estimate the likelihood of lung cancer occurrence within five years. The study utilized two datasets, the PLCO (n=55,161) and NLST (n=48,595), consisting of comprehensive information on risk factors, clinical measurements, and outcomes related to lung cancer. Data preprocessing involved removing patients who were not current or former smokers and those who had died of causes unrelated to lung cancer. Additionally, a focus was placed on mitigating bias caused by censored data. Feature selection, hyper-parameter optimization, and model calibration were performed using XGBoost, an ensemble learning algorithm that combines gradient boosting and decision trees. The ML model was trained on the pre-processed PLCO dataset and tested on the NLST dataset. The model incorporated features such as age, gender, smoking history, medical diagnoses, and family history of lung cancer. The model was well-calibrated (Brier score=0.044). ROC-AUC was 82% on the PLCO dataset and 70% on the NLST dataset. PR-AUC was 29% and 11% respectively. When compared to the USPSTF guidelines for lung cancer screening, our model provided the same recall with a precision of 13.1% vs. 9.3% on the PLCO dataset and 3.2% vs. 3.1% on the NLST dataset. The developed ML tool provides a freely available web application for estimating the likelihood of developing lung cancer within five years. By utilizing risk factors and clinical data, individuals can assess their risk and make informed decisions regarding lung cancer screening. This research contributes to the efforts in early detection and prevention strategies, aiming to reduce lung cancer-related mortality rates.
Pierre-Louis Benveniste, Julie Alberge, Lei Xing, Jean-Emmanuel Bibault
2023-08-23T15:25:17Z
http://arxiv.org/abs/2308.12188v1
# Development and external validation of a lung cancer risk estimation tool using gradient-boosting ###### Abstract Introduction: Lung cancer is a significant cause of mortality worldwide, emphasizing the importance of early detection for improved survival rates. In this study, we propose a machine learning (ML) tool trained on data from the PLCO Cancer Screening Trial and validated on the NLST to estimate the likelihood of lung cancer occurrence within five years. Methods: The study utilized two datasets, the PLCO (n=55,161) and NLST (n=48,595), consisting of comprehensive information on risk factors, clinical measurements, and outcomes related to lung cancer. Data preprocessing involved removing patients who were not current or former smokers and those who had died of causes unrelated to lung cancer. Additionally, a focus was placed on mitigating bias caused by censored data. Feature selection, hyper-parameter optimization, and model calibration were performed using XGBoost, an ensemble learning algorithm that combines gradient boosting and decision trees. Results: The final ML model was trained on the pre-processed PLCO dataset and tested on the NLST dataset. The model incorporated features such as age, gender, smoking history, medical diagnoses, and family history of lung cancer. The model was well-calibrated (Brier score=0.044). ROC-AUC was 82% on the PLCO dataset and 70% on the NLST dataset. PR-AUC was 29% and 11% respectively. When compared to the USPSTF guidelines for lung cancer screening, our model provided the same recall with a precision of 13.1% vs. 9.3% on the PLCO dataset and 3.2% vs. 3.1% on the NLST dataset. Conclusion: The developed ML tool provides a freely available web application for estimating the likelihood of developing lung cancer within five years. By utilizing risk factors and clinical data, individuals can assess their risk and make informed decisions regarding lung cancer screening. This research contributes to the efforts in early detection and prevention strategies, aiming to reduce lung cancer-related mortality rates. lung cancer; risk calculator; screening; machine learning **GITHUB REPOSITORY:**[https://github.com/plbenveniste/LungCancerRisk](https://github.com/plbenveniste/LungCancerRisk) ## 1 Introduction Cancer is a leading cause of death worldwide, accounting for nearly 10 million deaths in 2020, or nearly one in six deaths [18]. Lung cancer is the most common cause of cancer death in 2020 with around 1.80 million deaths. The survival rate of lung cancer is strongly dependent on the cancer stage as well as the physical condition of the patient. On average, it is estimated that the five-year survival rate for lung cancer is around 56% for cases detected when the disease is still localized within the lungs. On the other hand, in later stages, when the disease has spread to other organs, the five-year survival rate drops to 5% [1], highlighting the need for early detection. In response, different recommendations have been made. Based on the National Lung Screening Trial (NLST) [17], the United States Preventive Services Task Force recommends lung cancer screening with low-dose computed tomography (LDCT) in adults aged 55 to 80 years who have a 30 pack-year smoking history and are currently smoking or have quit within the past 15 years [15]. The conclusion of the NLST was that screening using low-dose computed tomography (LDCT) resulted in a decrease in mortality equal to 3 fewer deaths per 1,000 participants [23]. 
This study, amongst others such as DANTE (Detection and Screening of Early Lung Cancer by Novel Imaging Technology and Molecular Assays) or the DLCST (Danish Lung Cancer Screening Trial), also studied different screening strategies, the harms and radiation exposure caused by screening, and other factors related to lung cancer mortality. Known risk factors are key indicators when identifying patients with a high risk of lung cancer occurrence [13]. In this study, we chose to focus on patients who are current or former smokers. We propose a machine learning (ML) tool, trained on data from the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial and validated on the National Lung Screening Trial (NLST), to compute the likelihood of lung cancer occurrence. From this ML-based tool, we developed a freely available web application that people can use to estimate their likelihood of developing lung cancer and to raise awareness of lung cancer screening for early detection. ## 2 Data used ### Data sources Data used in this project come from two different datasets. The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial [19] was used as the training dataset. It contains data from 154,887 patients with 219 features each. The PLCO Screening Trial is a significant study evaluating cancer screening tests for prostate, lung, colorectal, and ovarian cancers. It collects comprehensive data on risk factors, clinical measurements, and outcomes. The trial assessed screening effectiveness and identified potential risk factors for specific cancers. Furthermore, the second dataset used is the National Lung Screening Trial (NLST) [17]. It was used as the external testing dataset. Conducted by the National Cancer Institute (NCI), the NLST study aimed to evaluate the efficacy of low-dose computed tomography (LDCT) [22] in detecting lung cancer among individuals at high risk. This extensive dataset includes comprehensive information, ranging from demographic characteristics and smoking history to imaging findings and participant outcomes, encompassing 53,452 individuals with 324 features each. ### Risk factors In order to identify patients with a high risk of developing lung cancer, we target those who meet certain criteria. Those criteria are some of the risk factors related to lung cancer. The primary risk factor, and one that is responsible for the majority of cases, is tobacco smoking. The carcinogens present in tobacco smoke can damage the DNA of lung cells, leading to the initiation and progression of cancer. Other modifiable risk factors include exposure to secondhand smoke, occupational exposure to carcinogens such as asbestos and radon, and air pollution. Non-modifiable factors, such as a family history of lung cancer and certain genetic mutations, also play a role in increasing susceptibility. Additionally, factors such as age, gender, and a history of lung diseases like chronic obstructive pulmonary disease (COPD) further contribute to the risk profile. Understanding and addressing these multifaceted risks are crucial for effective prevention and early detection strategies in combating lung cancer [13]. We used only former or current smokers and decided to remove participants who have never smoked from the PLCO dataset, leaving 80,668 participants (52% of the original dataset). This is even more relevant as the NLST dataset only studies former and current smokers.
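For later comparison, the USPSTF screening rule recalled in the Introduction (age 55 to 80 years, a 30 pack-year history, and currently smoking or having quit within the past 15 years) reduces to a simple deterministic check; a worked sketch, not part of the study's code, is:

```python
# Rule-based screening eligibility following the USPSTF criteria quoted above.
def uspstf_eligible(age: int, pack_years: float,
                    current_smoker: bool, years_since_quit: float) -> bool:
    if not (55 <= age <= 80):
        return False
    if pack_years < 30:
        return False
    return current_smoker or years_since_quit <= 15

# Example: a 62-year-old former smoker with 35 pack-years who quit 10 years ago
print(uspstf_eligible(age=62, pack_years=35,
                      current_smoker=False, years_since_quit=10))  # True
```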
### Feature extraction Based on the previous risk factors, we will describe the features extracted from both PLCO and NLST. These features have been pre-processed so that both datasets match in terms of column names, data type, and formatting. All the attributes of the participants are described in Appendix A. The final model does not require all of them: as we show in the next sections, some of these features play a minor role and barely modify the precision and recall of the model. ### Data pre-processing In this section, we describe the pre-processing of both the training and testing datasets, respectively PLCO and NLST. The first step involved removing patients who are not current or former smokers, since NLST only contains current or former smokers. This pre-processing step removed 74,219 patients from the PLCO dataset (around 48%), leaving 80,668 patients. Next, we removed patients who had died of causes other than lung cancer. By removing these patients, we removed a bias in favor of lung cancer occurrence. Indeed, as some of these patients were suffering from severe medical conditions, their weak physical condition favored the development of tumors. This pre-processing removed 24,356 patients from the PLCO dataset (around 16%), leaving 56,312 patients, and removed 2,904 patients from the NLST dataset (around 5%), leaving 50,548 patients. As a third step, we focused on removing bias brought by censored data [9]. Both in NLST and PLCO, the duration of a patient's study varies widely from one participant to another, introducing bias into the model. Indeed, a participant who was studied for less than a year has less chance of being diagnosed with lung cancer than a participant who remained for 7 years. On the other hand, we decided to keep the participants who stayed longer than average in the study: the reason is that, even though they introduce a bias in favor of positive screening for lung cancer, we would rather have more false positives than false negatives (favoring recall over precision). Since the aim is to predict lung cancer risk over the next 5 years, we decided to train the model on PLCO patients who were either negative for lung cancer screening and followed for longer than 2100 days (5.75 years), or positive for lung cancer screening. This preprocessing step removed 1,151 patients from PLCO (around 1%), leaving 55,161 patients, and removed 1,953 patients from NLST (around 4%), leaving 48,595 patients. ## 3 Methods The development of this model required feature selection, hyperparameter optimization, and calibration of the model. ### XGBoost We trained an XGBoost model [6] on the pre-processed PLCO dataset and tested it on NLST. XGBoost combines the principles of gradient boosting [8] and decision trees [24] to create a robust and high-performance predictive model. The algorithm iteratively builds an ensemble of weak prediction models, called decision trees, by optimizing an objective function that measures the model's performance. During each iteration, XGBoost computes the gradient of the loss function and updates the tree ensemble by adding new trees that minimize the residual errors. One of the key strengths of XGBoost lies in its ability to handle a wide range of data types and feature interactions effectively. It incorporates several advanced techniques, such as regularization, parallel processing, and column blocking, to optimize both accuracy and computational efficiency.
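As a concrete illustration of this setup, a minimal training sketch is given below. The feature and outcome column names are hypothetical placeholders rather than actual PLCO field names, and the hyperparameter values are arbitrary defaults; the objective, tree method and evaluation metric follow the choices described later in the paper.

```python
# Hedged sketch of the XGBoost training step on the preprocessed PLCO cohort.
import pandas as pd
from xgboost import XGBClassifier

plco = pd.read_csv("plco_preprocessed.csv")        # assumed preprocessed export
features = ["age", "sex", "pack_years", "years_since_quit",
            "copd_diagnosis", "family_history_lung_cancer"]   # hypothetical names

# Censoring rule from Sec. 2: keep positives, or negatives followed >2100 days
keep = (plco["lung_cancer"] == 1) | (plco["followup_days"] > 2100)
X, y = plco.loc[keep, features], plco.loc[keep, "lung_cancer"]

model = XGBClassifier(
    objective="binary:logistic",   # probability of lung cancer within five years
    tree_method="exact",
    eval_metric="aucpr",           # area under the precision-recall curve
    n_estimators=300, learning_rate=0.1, max_depth=4)
model.fit(X, y)                    # XGBoost handles missing values in X natively
risk = model.predict_proba(X)[:, 1]
```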
XGBoost can automatically handle missing values and can capture non-linear relationships, making it highly versatile. Moreover, XGBoost provides various hyperparameters that allow researchers to fine-tune the model for their specific needs. These hyperparameters control the depth, learning rate, number of trees, and regularization, among others, enabling customization and model optimization. XGBoost was chosen for this project as it can deal with missing values which are numerous in both datasets. The goal of the XGBoost in our case was to output 1 if the person is likely to develop lung cancer in the next 5 years and 0 otherwise. ### Feature selection Since our final objective was to build a self-evaluation tool to promote lung cancer screening for people at risk, we wanted to design a tool that relies on few features. After exploring different combinations of features, we selected the most important features using Shapley values [12]. ### Hyperparameter optimization To improve the results of the XGBoost model we perform hyperparameter optimization. The methodology used for hyperparameter optimization relies on Bayesian Grid Search [4]. Bayesian optimization is a powerful technique employed in the field of machine learning to efficiently search and tune hyperparameters for predictive models. When applied to XGBoost, Bayesian optimization enables the automatic optimization of its hyperparameters, taking into account the AUC-PR metric [21] for evaluating model performance. The AUC-PR metric is particularly relevant for imbalanced classification problems, such as lung cancer prediction, where accurate identification of positive cases is crucial. By incorporating Bayesian optimization with the AUC-PR metric, we leveraged the strengths of XGBoost and enhanced its predictive capabilities for lung cancer prediction. The iterative nature of Bayesian optimization intelligently explores the hyperparameter space, directing the search toward promising regions and gradually refining the model's configuration. During Bayesian optimization we focused on the following parameters: * Learning rate: The rate at which the model learns from the data during training. * Max depth: The maximum depth or levels of the decision tree or random forest model. * Subsample: The fraction of samples used for training each individual tree in the ensemble. * Colsample_bytree: The fraction of features or columns used for training each individual tree. * N estimators: The number of trees or estimators in the ensemble model. ### Calibration Plotting a calibration curve serves as a valuable tool for assessing the reliability and accuracy of the model's predictions. A calibration curve visually compares the predicted probabilities or scores generated by the model with the observed outcomes or true probabilities. This graphical representation plays a crucial role in evaluating model calibration, which measures the agreement between predicted probabilities and observed frequencies of the target outcome. A well-calibrated model should exhibit predictions that closely align with the actual probabilities, as indicated by a calibration curve closely following the diagonal line. Additionally, the calibration curve allows for the measurement of confidence in the model's predictions, enabling researchers to determine if the predicted probabilities accurately reflect the likelihood of the target outcome. Deviations from the diagonal line may indicate overconfidence or underconfidence in the model's estimates. 
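As a concrete, hedged sketch of these two steps, the Bayesian search could be run with, for example, scikit-optimize's BayesSearchCV, and the calibration curve drawn with scikit-learn; the synthetic data and the search ranges below are illustrative assumptions, not the study's actual configuration.

```python
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from xgboost import XGBClassifier

# Synthetic stand-in for the pre-processed PLCO features/labels (~4% positives).
X, y = make_classification(n_samples=20_000, n_features=9, weights=[0.96],
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

search = BayesSearchCV(
    XGBClassifier(objective="binary:logistic", eval_metric="aucpr"),
    search_spaces={
        "learning_rate": Real(0.01, 0.3, prior="log-uniform"),
        "max_depth": Integer(2, 8),
        "subsample": Real(0.5, 1.0),
        "colsample_bytree": Real(0.5, 1.0),
        "n_estimators": Integer(100, 600),
    },
    scoring="average_precision",  # AUC-PR-style objective, suited to the imbalance
    n_iter=25,
    cv=3,
    random_state=0,
)
search.fit(X_tr, y_tr)

# Reliability diagram: mean predicted probability vs. observed frequency.
prob = search.best_estimator_.predict_proba(X_val)[:, 1]
frac_pos, mean_pred = calibration_curve(y_val, prob, n_bins=10)
plt.plot(mean_pred, frac_pos, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfectly calibrated")
plt.xlabel("mean predicted probability")
plt.ylabel("observed fraction of positives")
plt.legend()
plt.show()
```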
Consequently, the calibration curve aids in selecting an appropriate decision threshold for classification tasks by identifying the range of predicted probabilities where the model is well-calibrated. Overall, the calibration curve serves as a critical tool for evaluating model performance and guiding adjustments to enhance its reliability and accuracy. ### Interpretability By assigning importance values to input features, Shapley values [12] offer an intuitive approach to understanding model predictions. In the context of lung cancer prediction, Shapley values provide valuable insights into the individual feature importance, allowing patients to identify the most influential factors contributing to the predicted risk of lung cancer. Moreover, Shapley values enable both global and local interpretability, shedding light on the overall impact of features across the dataset and providing case-specific explanations for individual predictions. Considering the complexity of XGBoost models and their ability to capture feature interactions, Shapley values offer an advantageous perspective by accounting for joint contributions of features. This comprehensive understanding of feature interactions enhances the interpretability of the model and contributes to a more nuanced comprehension of how different combinations of features influence the prediction of lung cancer risk. Additionally, Shapley values can aid in assessing model fairness and bias, enabling the detection of disparities in feature importance across different subgroups. Such information is essential for ensuring the fairness and equity of healthcare decision-making processes. Ultimately, the utilization of Shapley values for interpretability in the domain of lung cancer prediction using XGBoost not only improves model transparency and performance but also supports the development of reliable and trustworthy prediction models in healthcare settings. ## 4 Results After data pre-processing, the PLCO dataset contains 55,161 patients and the NLST dataset contains 48,595 patients. In this section, we describe both datasets and the distribution of patients for each feature (Table C1 in Appendix). ### Model performance The XGBoost model is trained on the PLCO dataset with the following parameters. Firstly, there are two types of booster parameters: linear models and tree-based models. In this study we used a tree-based model as it makes sense in the case of a binary classification and because it usually outperforms the linear models. For the loss function, we used a binary logistic objective function. This loss function is designed as a logistic regression for binary classification (which is our case) and returns a predicted probability here corresponding to the likelihood of the person developing lung cancer in the next 5 years. Furthermore, we chose to use an exact tree method because it is more precise and the training in our project is relatively short. Finally, for the evaluation metric we chose to use the Area Under the Precision Recall Curve (AUC-PR). Before diving into the hyperparameter optimization, we train a basic XGBoost on the training set. Using the Shap library [12], we look at the most important features and select the most contributing ones. Selecting these features is important for us as our goal is to design a tool on which people can easily evaluate their risk of developing lung cancer. Furthermore, we know that extensive and long questionnaires play a significant role in barriers to entry. 
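A minimal sketch of this training and ranking step is shown below; the booster settings mirror the description above, while the data and the column names are illustrative placeholders rather than the real PLCO variables.

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in for the PLCO training matrix; column names are placeholders.
feature_names = ["age", "smoking_cessation_age", "cig_stat", "smoking_onset_age",
                 "cig_per_day", "pack_years", "smoke_years",
                 "lung_family_history", "bmi"]
X, y = make_classification(n_samples=10_000, n_features=9, weights=[0.95],
                           random_state=0)
X = pd.DataFrame(X, columns=feature_names)

model = xgb.XGBClassifier(
    objective="binary:logistic",  # probability of lung cancer within 5 years
    tree_method="exact",
    eval_metric="aucpr",
    n_estimators=300,
)
model.fit(X, y)

# Rank features by mean absolute SHAP value and keep the top contributors.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:24s} {value:.4f}")
```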
We chose to simplify our model and slightly downgrade the overall results of the model to retain more patients. The following Figure 1 details the most contributing features. The following features were selected: * Age * Smoking cessation age * Cigarette smoking * Smoking onset age * Cigarette per day * Pack years * Smoke years * Lung cancer family history * BMI After hyperparameter optimization, ROC AUC was 0.83 on the PLCO validation dataset and 0.69 on the NLST external testing dataset (Appendix B). The results obtained by our model show a good performance on the validation set (Table 1). While the model calibration was good (Figure 2), the curve suggests under-confidence in the predictions of the model while still performing relatively well as the results lie close to the y=x line (in blue). However, when focusing on calibration-in-the-small, it seems that the model is underconfident for people with a high risk of developing lung cancer. This shows the importance of recall in this case. Indeed because, the model tends to under-classify some of the high-risk patients, focusing on having a high recall is key, even though it might lower the precision of the model. To do so, one must select a lower classification threshold to be less selective when predicting lung cancer. Figure 1: Shapley values of the features used for lung cancer risk prediction Figure 2: Calibration of the model on the PLCO and NLST datasets ### Comparison with USPSTF recommendations The US Preventive Services Task Force issued a recommendation statement in 2021 for lung cancer screening. This was an update from the 2013 recommendation. These recommendations can be summarized as a decision tree (Figure 3). This tool was designed based on NLST. Out of the 48,595 patients who are described in the NLST dataset, 48,034 of them fit into the criteria made by the USPSTF recommendation statement. It makes sense that a lot of them fit into these criteria since they were built using this dataset. Among them, 1,495 of them had cancer while overall in the dataset 1,511 had cancer. This means that overall in the NLST dataset, the criteria has a recall of around: 98.9%. Of the 55,161 patients who participated in the PLCO study (and who remained after pre-processing), 22,609 of them fit into the criteria of the USPSTF recommendation statement. Among those who fit into the criteria of the USPSTF recommendation statement, 2,105 had cancer while overall in the whole dataset, 2,752 had cancer. This means that overall PLCO, the criteria has a recall of around 76.5% in identifying the patients that have cancer. It is interesting to see that, while their criteria work very well on the NLST dataset, which makes sense because they used it to build it, it doesn't perform as well on the PLCO dataset (Table 2). Moreover, we can compare the precision and recall of the US Recommendation tool with our model. In the following table (Table 3), by fixing equal recall, we can observe the precision level. The improvement is small for the NLST and we can explain that by the fact that the US Recommendation tool was designed based on NLST. However, we can see a clear impact on the PLCO recall (Table 3). ### Online prediction tool As part of this project, we also built a web application [10]. 
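A minimal sketch of the kind of prediction endpoint such an application can wrap is shown below; the route, file name, and field names are hypothetical illustrations, not the deployed code.

```python
import pickle

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

# Trained classifier serialized with pickle (file name is a placeholder).
with open("lung_cancer_model.pkl", "rb") as f:
    model = pickle.load(f)

# Questionnaire answers used as model inputs; names mirror the selected features.
FIELDS = ["age", "smoking_cessation_age", "cig_stat", "smoking_onset_age",
          "cig_per_day", "pack_years", "smoke_years",
          "lung_family_history", "bmi"]

@app.route("/predict", methods=["POST"])
def predict():
    answers = request.get_json()
    row = pd.DataFrame([{field: answers.get(field) for field in FIELDS}])
    risk = float(model.predict_proba(row)[0, 1])
    return jsonify({"five_year_lung_cancer_risk": risk})

if __name__ == "__main__":
    app.run()
```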
The goal of this app is to be accessible to all for people to assess the risk of them developing cancer in the \begin{table} \begin{tabular}{|l|l|l|} \hline **Metric** & **Validation Dataset** & **Testing Dataset** \\ \hline ROC AUC & 0.82 & 0.70 \\ Brier Score & 0.043 & 0.044 \\ Average Precision & 0.52 & 0.14 \\ Average Recall & 0.02 & 0.04 \\ AUC-PR & 0.29 & 0.11 \\ \hline \end{tabular} \end{table} Table 1: Model performances on the PLCO and NLST datasets Figure 3: USPSTF decision tree next 5 years. The risk prediction is based on a short 8-question form, which are the features used in the model. We used the model saved as a pickle file and Python files hosted on GitHub [3]. Heroku [14] deploys and maintains the app: it's a Platform as a Service (PaaS) tool. The web application is currently online and working. ## 5 Discussion A data-driven risk model was established and tested to predict the 5-year outcomes related to National Lung Screening Trial (NLST)-like CT lung cancer screenings. This model demonstrated robust validation within U.S. research groups (PLCO and NLST), indicating their wide applicability. The main finding from our model suggests that selecting smokers based on individual risk, as opposed to risk-factor-based groups, could potentially prevent more deaths, enhance screening effectiveness, and increase screening efficiency. Several models have been created to better identify smokers who should be screened for lung cancer Katki et al developed and validated risk models for lung cancer screening using low-dose CT [11]. The model used factors like age, race, smoking history, and family history. Screening based on these models was more effective than USPSTF guidelines in preventing lung cancer deaths. The model identified high-risk individuals not eligible under USPSTF criteria, including current and former smokers. These risk-based screenings have been shown to potentially prevent 90% of CT-preventable lung cancer deaths by screening only 49% of American smokers aged 50 to 80 [8]. These methods outperform USPSTF recommendations by favoring high-risk, high-benefit smokers who might not be eligible under USPSTF guidelines, but who have a higher 5-year lung cancer risk and a lower number needed to screen (NNS). These high-risk individuals cannot be identified by groups and necessitate risk calculations. Meanwhile, a substantial proportion of USPSTF-eligible individuals are at a lower risk, meaning they might not benefit as much from screening without a risk calculation. Risk-based \begin{table} \begin{tabular}{|l|l|l|} \hline **Metric** & **NLST** & **PLCO** \\ \hline Total number of participants* & 48,595 & 55,161 \\ TP & 1,495 & 2,105 \\ FN & 16 & 647 \\ FP & 46,539 & 31,905 \\ TN & 545 & 20,504 \\ Precision & 3.1\% & 9.3\% \\ Recall & 98.9\% & 76.5\% \\ \hline \end{tabular} \end{table} Table 2: USPSTF recommendations results on the pre-processed dataset (*after pre-processing) \begin{table} \begin{tabular}{|l|l|l|l|} \hline & & **Recall** & **Precision** \\ \hline \multirow{2}{*}{**NLST**} & USPSTF Recommendation & 98.9\% & 3.1\% \\ & Our Model & 98.9\% & **3.2\%** \\ \hline \multirow{2}{*}{**PLCO**} & USPSTF Recommendation & 76.5\% & 9.3\% \\ & Our Model & 76.5\% & **13.1\%** \\ \hline \end{tabular} \end{table} Table 3: Recall and Precision of USPSTF and our model selection could also increase the number of African Americans and women chosen for CT lung screening. 
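To illustrate how a matched-recall comparison like the one in Table 3 can be computed, the sketch below sweeps the classification threshold on synthetic data until a target recall level is reached; only the target recall value is taken from the table, everything else is a placeholder.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in data; the real comparison uses the PLCO/NLST labels.
X, y = make_classification(n_samples=20_000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scores = XGBClassifier(eval_metric="aucpr").fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

target_recall = 0.765  # e.g., the USPSTF recall observed on PLCO (Table 3)

# Walk thresholds from high to low; the first one reaching the target recall
# is the most selective threshold with that recall, so precision is maximized.
for t in np.unique(scores)[::-1]:
    pred = scores >= t
    if recall_score(y_te, pred) >= target_recall:
        print(f"threshold={t:.3f}  recall={recall_score(y_te, pred):.3f}  "
              f"precision={precision_score(y_te, pred):.3f}")
        break
```

Lowering the threshold in this way trades precision for recall, which matches the stated preference for fewer false negatives in this screening setting.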
In 2022, Kumar et al. used machine learning and deep learning techniques to predict the growth and progression of lung cancer [2]. They built prediction models using supervised machine learning algorithms and analyzed images using the local binary patterns (LBP) technique. This study proposed a machine learning model based on support vector machines (SVM) for the detection of lung cancer using symptom classification. Data acquisition and preprocessing were performed on a dataset from the University of California, Irvine. The same year, Dritsas et al. used a public dataset [5] containing 309 patients with 15 features each and employed several techniques to improve the accuracy of the models [7]. These techniques included class balancing, which addressed the imbalance in the dataset, and feature ranking, which determined the most important features for prediction. Among the different classification models tested, the Rotation Forest model demonstrated the highest efficiency in predicting lung cancer occurrence. In the context of lung cancer prediction using machine learning, several performance metrics are commonly employed to evaluate the effectiveness of the model and understand its predictive capabilities. When evaluating the performance of a lung cancer prediction model, the choice of the performance metric is crucial in capturing the specific characteristics of the problem at hand. While metrics like ROC AUC are commonly used in the mentioned studies and provide a comprehensive evaluation of a model's performance across different classification thresholds, the Area Under the Precision-Recall Curve (AUC-PR) offers distinct advantages in certain scenarios, such as imbalanced datasets or when the focus is on positive instances. This metric considers the trade-off between precision and recall across different classification thresholds, providing a holistic measure of the model's performance. In lung cancer screening, there is a high population imbalance, i.e., the number of negative instances (non-cancerous cases) is significantly higher than the number of positive instances (cancerous cases). In such cases, using ROC AUC as the sole performance metric can be misleading because it emphasizes the model's ability to rank all instances correctly, irrespective of the class distribution. AUC-PR, on the other hand, places more emphasis on correctly predicting positive instances, which is crucial in the context of lung cancer prediction. By prioritizing positive predictions, AUC-PR provides a more meaningful evaluation of the model's performance in identifying lung cancer cases. Furthermore, lung cancer prediction is often a critical task where the goal is to minimize false negatives (missed diagnoses) while maintaining a reasonable level of precision. In our study, AUC-PR was 29% on the PLCO dataset and 11% on NLST. At matched recall, our model achieves better precision than the USPSTF guidelines (9.3% vs. 13.1% on PLCO and 3.1% vs. 3.2% on NLST). There are significant limits to risk-based eligibility for lung screening. Models tend to favor the elderly, which can lead to saving fewer life-years and decreased cost-efficiency. These models can also be biased towards selecting patients with comorbidities such as COPD. The next generation of models could be trained to predict life-years saved from screening. Our study has some significant drawbacks: the training and testing data were collected in the US only, and we cannot assume that our findings generalize beyond the US. 
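As an aside, a small self-contained illustration of why the two metrics discussed above diverge under heavy class imbalance (synthetic data, not study results):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Heavily imbalanced synthetic data (~2% positives), mimicking a screening setting.
X, y = make_classification(n_samples=50_000, weights=[0.98], flip_y=0.01,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# ROC AUC can look reassuring while AUC-PR exposes how hard the positives are.
print("ROC AUC :", round(roc_auc_score(y_te, scores), 3))
print("AUC-PR  :", round(average_precision_score(y_te, scores), 3))
print("chance-level AUC-PR (prevalence):", round(y_te.mean(), 3))
```

On data this imbalanced the two numbers can differ substantially for the same classifier, which is why AUC-PR was preferred as the evaluation metric here.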
Even if the data was collected from two prospective trials, there were still missing values. Also, the two trials were not initially designed to answer the same question with the same procedure, which could potentially bias the results. There is a trade-off between creating the best model for defining screening eligibility and creating a model that can be implemented effectively in the daily routine. Some risk models have web-based tools, like the Risk-based NLST Outcomes Tool for LCDRAT and LCRAT [20], and MyLungRisk for LLPv2 [16]. These models do not provide the most important features involved in the prediction, but our interface does. Getting an accurate prediction is of course interesting from the clinical point of view, but getting a prediction along with the most important features means that the prediction can become actionable. Finally, these tools could be integrated into electronic medical software to automatically calculate the risk of any individual. ## 6 Conclusion We created a model to predict lung cancer risk at five years using XGBoost, with better precision than the current USPSTF recommendations at matched recall. Implementing risk-based screening in clinical practice can be challenging and requires accurate, user-friendly decision aids to support shared decision-making. The web risk calculator can be used to directly communicate personalized risk prediction to the patient. In that context, shared decision-making processes should be carefully evaluated, as most lung cancer deaths are currently not preventable through screening, even if CT screening can reduce lung cancer mortality by 20%.
2306.02637
Gotta Go Fast: Measuring Input/Output Latencies of Virtual Reality 3D Engines for Cognitive Experiments
Virtual Reality (VR) is seeing increased adoption across many fields. The field of experimental cognitive science is also testing utilization of the technology combined with physiological measures such as electroencephalography (EEG) and eye tracking. Quantitative measures of human behavior and cognition process, however, are sensitive to minuscule time resolutions that are often overlooked in the scope of consumer-level VR hardware and software stacks. In this preliminary study, we implement VR testing environments in two prominent 3D Virtual Reality frameworks (Unity and Unreal Engine) to measure latency values for stimulus onset execution code to Head-Mount Display (HMD) pixel change, as well as the latency between human behavioral response input to its registration in the engine environment under a typical cognitive experiment hardware setup. We find that whereas the specifics of the latency may further be influenced by different hardware and software setups, the variations in consumer hardware is apparent regardless and report detailed statistics on these latencies. Such consideration should be taken into account when designing VR-based cognitive experiments that measure human behavior.
Taeho Kang, Christian Wallraven
2023-06-05T07:12:37Z
http://arxiv.org/abs/2306.02637v1
Gotta Go Fast: Measuring Input/Output Latencies of Virtual Reality 3D Engines for Cognitive Experiments ###### Abstract Virtual Reality (VR) is seeing increased adoption across many fields. The field of experimental cognitive science is also testing utilization of the technology combined with physiological measures such as electroencephalography (EEG) and eye tracking. Quantitative measures of human behavior and cognition process, however, are sensitive to minuscule time resolutions that are often overlooked in the scope of consumer-level VR hardware and software stacks. In this preliminary study, we implement VR testing environments in two prominent 3D Virtual Reality frameworks (Unity and Unreal Engine) to measure latency values for stimulus onset execution code to Head-Mount Display (HMD) pixel change, as well as the latency between human behavioral response input to its registration in the engine environment under a typical cognitive experiment hardware setup. We find that whereas the specifics of the latency may further be influenced by different hardware and software setups, the variations in consumer hardware is apparent regardless and report detailed statistics on these latencies. Such consideration should be taken into account when designing VR-based cognitive experiments that measure human behavior. Virtual reality, VR, EEG, cognitive experiments, human behavioral, behavioral measurements, eye-tracking, latency, response time ## I Introduction The idea of utilizing naturalistic stimuli in cognitive experiments is increasingly gaining traction, and its importance has been recognized in an increasing number of studies [1, 2, 3, 4, 5, 6, 7]. 3D environments such as Virtual and Mixed Reality (VR/MR) can provide an excellent platform for implementing experimental paradigms where immersive, interactive and naturalistic stimuli presentation is desired [8, 9, 10]. Especially for VR, behavioral and cognitive investigative experiments performed in virtual reality have the advantage of being able to control and manipulate environmental variables that in real-life settings would be nearly impossible to control [9, 11]. Possibly due to this, virtual reality has seen increased utilization in the area of behavioral and cognitive investigations, from simple behavioral experiments [12], to neuroimaging studies [13, 2], to even timing sensitive studies involving physiological signal measurements such as EEG [14, 15, 16, 17, 18, 19, 20, 21]. Due to the nature of cognitive processes of interest, behavioral and cognitive studies investigating timing-critical brain processes have been historically sensitive to latency in experimental hardware and software [22, 23]. It has been suggested, however, that specialized behavioral input devices may not be as crucial for even time-critical experiments, as the variability from human behavior itself is generally larger in scale than the input lag occurring from individual hardware devices [24]. For ease of experimental equipment acquisition that may in turn be relevant to the easy replication of studies, usage of adequately performant consumer hardware may be preferable to limited costly specialized equipment for behavioral input. Nonetheless, especially in studies measuring time-sensitive behaviors, there is importance in measuring expected latency in hardware and software setups used for experimental paradigms. Wimmer et al. 
[25] used opto-couplers to measure latency of 36 different serial input devices connected to a Raspberry Pi device and formed probability distribution models for each of the devices, and reported different input latency distributions per device; suggesting the need for measuring input latency levels in interactive experimental setups that make use of serial device based user input device. Furthermore, due to higher graphical computation requirements than conventional displays arising from not only generally higher refresh rates but also other factors such as needing to render twice for stereo vision, the current state of VR hardware suffers from latency greater than of conventional user interface devices [26]. A final point of consideration in this context concerns the use of higher-level APIs for generating three-dimensional, interactive environments that afford realistic levels of sensory realism and interactivity. While it is possible to create well-controlled low-level stimuli with relative ease in computer graphics languages, the amount of work necessary to create environments which, for example, contain objects that interact with each other in a physically-realistic fashion from scratch is beyond the capabilities of standard cognitive and behavioral research labs. For this reason, many researchers have increasingly turned to 3D game engines for creating such environments. One drawback of this development is that these engines offer only a reduced degree of control over their timing internals given that much of the behind-the-scenes calculation remains hidden from the API user (examples include the calculation of graphic primitives, the determination of collisions in physics-aware simulations, etc.). This raises the question of how much timing accuracy and precision is possible in game engine simulation programming environments. In this context, Wiesing et al. [27] have measured stimuli duration and onset measurements in Unreal Engine with a dedicated response pad and reported increased average latency, compared to dedicated cognitive experiment software such as Psychopy and Psychtoolbox. While Unreal Engine as a serious 3D engine has been used for VR based behavioral experiments [28], possibly due to the relative ease of implementation in comparison to the former, Unity Engine has been seeing increased applications in cognitive experiments [17, 29, 30, 31, 32, 33, 34]. While they are suited for similar purposes, due to differences in implementation detail Unity and Unreal Engine often exhibit different behaviors even when the same effect is intended, especially in frame and I/O related latency performances [35]. In light of these considerations, to ultimately implement and execute experiments investigating brain processes in a naturalistic VR environment, we deem it worth investigating the expected latency values for hardware and software setups that would (commonly) be used in VR-based behavioral experiments. In this study, we aim to achieve this by utilizing a measuring apparatus with oscilloscopes, as well as a bare-bone experimental paradigm implemented in two widely-used VR capable 3D engines, Unity Engine and Unreal Engine. In the bare bone paradigm, we create stimulus onsets that send trigger codes to the measuring apparatus before the actual displaying of stimuli is performed in the Head Mounted Device (HMD), and measure latency between the the onset code and the actual pixel change in the HMD. 
Furthermore, we measure the latency between a physical input action on consumer-level user interface hardware (keyboard) that can be used for behavioral response, and the registration of the input in the 3D engines. Lastly, we measure the latency between the physical input action and the pixel change from the resultant feedback code execution. ## II Materials and Methods ### _Experiment Design_ We were interested in the measurement of the following events: 1) latency between stimulus onset code execution in the 3D engine and the actual HMD pixel changes as the stimulus was presented (Stim2Disp), 2) latency between participant behavioral response by a keypress and the code execution performed immediately upon the register of the key event in the 3D engine (Key2Led), and 3) latency between participant behavioral response by keypress and the pixel change in HMD caused by the 3D engine code that presents a visual feedback stimulus upon the key event register (Key2Disp). To measure these events, we implemented experimental paradigms capable of measuring these events in both Unity Engine and Unreal Engine, as can be seen in Figure 1. We decided to measure both 3D Engines as their implementations differ, as well as the scripting implementation language for user code: Unity utilizes C# for this, whereas Unreal uses C++. Both engines have seen usage in cognitive experimental designs in 3D environments (see Introduction). The 3D VR experiment environment consists of a basic 3D spherical object that can move around based on the user's input. Upon a stimulus onset code or recognition of a participant's behavioral response by a keyboard, a chess-board grid of black and white covers the entire screen for one frame, then with the colors inverted for another frame. The experiment code embodies a bare-bone basic form of cognitive experimental paradigm using 3D Engines, and we send programmatic triggers upon stimulus onset and behavioral input registration, as one would for experiments requiring high temporal resolution like in EEG or other cognitive experiments that involve some form of time series physiological measurement. We measure the latency of the above 3 scenarios (Stim2Disp, Key2Disp, Key2Led) by sending programmatic markers to Arduino upon stimulus onset code execution, and behavioral response registration in the 3D engine, which triggers an LED. Furthermore, we use pressure sensors and photodiodes connected to the Arduino board to have quantifiable measures of when participant behavior and stimuli display happen in the real world. #### Ii-A1 Unity-specific setup For Unity, we implemented the paradigm in Unity Engine version 2019.4.20f. Following Unity Engine's manual on order of execution for event functions (see[https://docs.unity3d.com/Manual/ExecutionOrder.html](https://docs.unity3d.com/Manual/ExecutionOrder.html)), the FixedUpdate() function handles ticks on computations focused on the physics engine calculations, and may render more than once per rendered frame depending on the computation load and settings, while the Update() function ticks per every frame that is rendered (after FixedUpdate() calls are complete). 
The events are called serially, and between the Update() call and the actual rendering calls of the scene on the display there are several other calls; as such, in the interest of logging the stimulus onset trigger as close as possible to the onset of the actual stimulus on the display, it is preferable to send the marker code sometime after the Update() call but before the actual display rendering process. We achieve this by calling a coroutine that delays sending the stimulus onset trigger until the rendering computation is complete, but before the display is updated (see Fig. 2). Furthermore, as the FixedUpdate() call executes at the beginning of the game tick and executes at a higher rate, it is preferable to process keyboard input events (i.e. behavioral response) there. For Key2Led and Key2Disp events, the function handling keyboard input events can also call the co-routines to send markers subsequently (see Fig. 3). #### Ii-A2 Unreal-specific setup For the Unreal Engine, we implemented the paradigm in version 4.26. Unreal Engine logic ticks are separated into tick groups (PrePhysics, DuringPhysics, PostPhysics, and PostUpdateWork) that are run serially as per the documentation ([https://docs.unrealengine.com/5.1/en-US/](https://docs.unrealengine.com/5.1/en-US/) actor-ticking-in-unreal-engine/). Processing of user input is handled in the PrePhysics segment of the tick, and as such binding the input of specific keys to a method that sends the trigger is sufficient.
Fig. 1: Diagram of the experimental data collection paradigm trial design.
By sending stimulus onset triggers from code that is executed in the PostPhysics or PostUpdateWork segment, we can also keep them as close to the timing of the actual display as possible. To tweak Unreal Engine for optimal performance, several project settings were changed in addition: First, in the Rendering > VR settings, Instanced Stereo was enabled while Mobile HDR was disabled, as per recommendations by [27]. Second, the following console variables were changed: R.GTSyncType to 1, R.Vsync to 0, and R.OneFrameThreadLag to 0. R.GTSyncType determines which thread the game thread syncs to: 0 means it syncs with the rendering thread, and 1 means it syncs to the RHI (render hardware interface, e.g., Direct3D or OpenGL) thread. As per the Unreal documentation, syncing to the RHI thread helps with input latency, so we set it to 1. VSync renders frames at the pace the display device is capable of, but it often leads to more dropped frames when enabled than otherwise [36]. When OneFrameThreadLag is enabled, the graphics drivers keep the game thread from processing more than one frame's worth of computation ahead of what is currently being displayed. We deemed this undesirable, as our purpose was to minimize lags stemming from computations not being far enough ahead, along with minimizing input latency. ### _Measuring apparatus_ To measure the timings of 1) behavioral response onset, 2) stimulus onset code execution, 3) feedback code execution in response to behavior, and 4) pixel changes on the HMD as precisely as possible, a circuit apparatus that can be seen in Figure 5 was implemented. The inspiration for the circuit board was based on a schematic from class material in Aachen University's system design course [37]. Specific components of the circuitry included an Arduino Uno Rev.3, a BPW-34 photosensitive diode developed by Vishay Semiconductors, and a pressure sensor FSR402 developed by Interlink Electronics. 
For registering the behavioral response, a Wooting One keyboard developed by Wooting was used. For running the 3D Engine based experimental paradigms, a Windows 10 based computer with an AMD 5900X CPU and an Nvidia RTX 3090Ti was used. As the actual display of the VR environment, an Oculus DK2 headset from Meta Inc. was used, to which the photodiode was attached next to the display. A USB-based oscilloscope developed by Pico Technology (Picoscope series 2205A) with two probes was used for measuring the changes in voltage. The oscilloscope's sampling frequency was set to 240 kHz. The ground clamps of both probes were connected to the ground pin cable of the Arduino board. As the Oculus DK2's refresh rate is 75 Hz, we band-pass filtered the probe connected to the photodiode to [60 80] Hz. Sample collection length per trial was set to 200 ms, with 20 ms pre-trigger and 180 ms post-trigger. For the Stim2Disp measurements, the first probe was clamped to the LED diode that would be toggled on and off by the stimulus control code in the 3D engine, while the second probe was clamped to the cable connected to the HMD-attached photodiode.
Fig. 2: Code for sending trigger to Arduino in Unity
Fig. 3: Unity code for sending trigger on behavioral response
Fig. 4: Code for sending Arduino trigger on Unreal
The scope data collection trigger was set to a rising threshold of 1.5 V for Unity and 115 mV for Unreal with a hysteresis of 5.87% on the first probe. All scope trigger thresholds were set manually after trial and error to catch the events of interest; the difference in thresholds was due to the probe attenuator settings being changed between the two sets of measurements. The difference in thresholds, however, did not interfere with the trigger adequately capturing the point where the event was occurring. This was verified after the measurements by visually inspecting the probe waveforms for peaks from LED and keypress actions. In Key2Disp and Key2LED measurements, the first probe was clamped to the cable attached to the pressure sensor on the keyboard. Here again, due to different probe attenuator settings, the probe trigger threshold was set to a 450 mV rise with 2.44% hysteresis for Unreal, and a 4.7 V rise for Unity. The second probe was connected to the LED in Key2LED measurements, and to the photodiode in Key2Disp measurements. The Arduino was connected to the experiment PC via USB, through which LED trigger communications were sent from the 3D engines via serial communication. For each latency event of interest, we made at least 300 repetitions of the measurement in order to collect a sufficient sample size. ### _Data processing_ Data preprocessing and analysis were performed with Matlab 2021b by Mathworks Inc. As the scope sampling rate was rather high considering our time epochs of interest, data was first downsampled to 20 kHz. As the data epochs were temporally zero-centered on the triggering event of the first probe, the timing of the events of interest on the second probe (photodiode voltage change, LED power-on) had to be found by peak detection, as the onset of events of interest would result in significant changes in the probe voltage. In photodiode measurements this meant voltage troughs far greater than baseline (as the black-and-white grids would trigger a greater change in the luminosity of the display, leading to greater voltage changes). 
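For reference, an illustrative Python equivalent of this first-peak latency extraction is sketched below; the study itself used Matlab's Signal Processing Toolbox, and the array names, prominence value, and synthetic trial here are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 20_000          # Hz, sampling rate after downsampling
PRE_TRIGGER = 0.020  # s of data recorded before the probe-1 trigger

def first_event_latency(probe2_trace, prominence=0.2):
    """Latency (s) from the probe-1 trigger to the first prominent deflection
    on probe 2; the absolute value handles both peaks (LED) and troughs
    (photodiode)."""
    peaks, _ = find_peaks(np.abs(probe2_trace), prominence=prominence)
    if len(peaks) == 0:
        return np.nan
    return peaks[0] / FS - PRE_TRIGGER

# Example with one synthetic 200 ms trial: a deflection ~35 ms after the trigger.
t = np.arange(int(0.2 * FS)) / FS - PRE_TRIGGER
trace = 0.01 * np.random.randn(t.size)
trace[np.argmin(np.abs(t - 0.035))] += 1.0
print(f"estimated latency: {first_event_latency(trace) * 1000:.1f} ms")
```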
For all measurements on the second probe, as we were looking for the latency for the onset of the event of interest, finding the timing of only the first significant peak detected was necessary. Finding the position of the peaks was performed with the findpeak() function provided in the Signal Processing Toolbox of Matlab. The resultant peaks were plotted and manually inspected for enough number of trials (\(\zeta_{1}\)100) in each condition to ensure the function was performing as desired. Once the timing of the events of interest were found, we calculated latency for the three events of interest. ## III Results From stimulus onset marker code execution to the actual onset of the chess-board grid on the HMD pixels, on Unity Engine there was an average latency of 10.777ms (SD 0.672), while on Unreal Engine an average latency of 21.059ms (SD 0.671) was observed. From behavioral keypress onset detection to chess board grid onset on the HMD, an average latency of 47.026ms (SD 6.156) was observed on Unity while 46.682ms (SD 4.499) was observed on Unreal Engine. In a separate session measuring latency between physical keypress detection and LED onset upon keypress register in the 3D engine, we found an average latency of 36.948ms (SD 4.911) on Unity and 25.161ms (SD 5.087) on Unreal. Table I shows the summarized results. Figure 6 (Stim2Disp), 7 (Key2Disp), and 8 (Key2Led) each shows probe measurements for all individual trials superimposed on the top plot (as well as the detected response peaks as black scatterplots), and the averaged out measurements on the lower plot. Two-sample T-tests between the two 3D engines for the Stim2Disp condition showed a significant difference in latency between Unity and Unreal (\(t_{df=641}=60.537,p<1e^{-100},SD=0.665\)). Similarly, a significant difference was observed between Unity and Unreal's latency in the Key2Led condition (\(t_{df735}=31.900,p<1e^{-100},SD=4.991\)). No significant difference was found in latency between Unity and Unreal for the Key2Disp condition (\(t_{df=713}=0.833,p=0.405,SD=5.484\)). ## IV Discussion This study aimed to make precise measurements of latencies that may occur during time-critical cognitive behavioral experiments in 3D engine-based virtual reality environments. Fig. 5: Circuit diagram of the latency measuring apparatus, setup pictures To achieve this, we implemented a bare-bone 3D environment in both Unity and Unreal Engine, two prominent 3D engines that are used to develop Virtual reality and 3D scenarios for general purposes. We implemented latency measurement in three different scenarios: a scenario in which a marker event for stimuli presentation was sent and rendered on the display, a scenario in which a physical key press by a participant happened and registered in the 3D engine, and a scenario where a key press happened and the resultant feedback occurred on the display. We used oscilloscope probes combined with photodiodes on display events, serial-communication triggered LEDs for software events, and pressure sensors for physical keypress events in order to make precise timing measurements on when each of these events were occurring. We first discuss the difference in average latency values between Key2Disp and Key2Led conditions: although mea Fig. 6: Latency measured between stimulus onset marker execution code and actual stimuli onset pixel change in HMD (Stim2Disp), for both Unity and Unreal Engine. 
Higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab’s findpeak() function. Lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe. Fig. 7: Latency measured between keyboard press behavior and feedback stimuli onset on HMD pixel in 3D engine (Key2Disp), for both Unity and Unreal Engine. Higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab’s findpeak() function. Lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe. surements for the two conditions were made separately (as the Key2Led involved sending a LED trigger to arduino upon 3D engine recognition of the key event), considering our experiments were designed to be as basic as possible in terms of code implementation (as well as the sufficient sample size for each condition), at a glance one would expect the sum of Stim2Disp and Key2Led conditions to match up or be somewhat less than the average Key2Disp values. In Unity, the sum of the average of two conditions exceed the Key2Disp slightly. We believe this is understandable considering the communication time between PC software and the actual Arduino interface itself. In a previous study by Schubert et al. [38], downstream communication from an experimental computer to an Arduino was measured to be an average of 1.251ms with a low standard deviation. Considering the difference between the sum of mean Key2Led and Stim2Disp, and the Key2Disp condition itself, the difference appears to be enough to explain the somewhat larger mean latency in the combined latency. ### _Registering behavioral response without Unity or Unreal with LSL_ From our current set of results, it appears the largest issue in maintaining a reliable latency for cognitive experiment occurs from registering behavioral responses from the user I/O device. As can be observed from results in the Key2Led condition, in both 3D engines this latency is the largest (and the most variable) in registering the physical response event to the software stack. It is possible that the nature of serial port devices contribute a large part in this variance: parallel-port connected devices have been known to be favored over serial port connections for participant I/O in timing critical experimental design [22, 39, 40]. However serial port device technology has come a long way, and it has been suggested that the imprecision arising from user input devices may not be as critical as believed previously [24]. In older studies comparing serial and PS/2 devices, serial input devices were reported to have a much higher input latency with high variance [40]. In modern devices however, the latency gap between serial port based devices and parallel port devices may have become less considerable: response pad hardware specifically used for cognitive experiments such as Cedrus pads use serial USB connections. Furthermore, the choice of keyboard hardware that was used in our experiment was a mechanical keyboard that uses optic-based switches for faster input recognition on the hardware's part, along with high polling rates over 100Hz. The software stack also plays as much as a large part in the mean and variation of the latency as the hardware stack does. 
It has been reported that the experimental software framework as well as the operating system can contribute to differently distributed latency and missed frame counts [23]. We look into lowering the input variance and latency further by utilizing software stack independent from 3D engines in this section. In light of these considerations, we performed another set of measurements, but this time using a software outside of 3D engines for key event recognition. Lab Streaming Layer (LSL) [41] is a C++ based system for synchronizing experimental data from multiple sources through a unified clock ([https://github.com/sccn/labstreaminglayer](https://github.com/sccn/labstreaminglayer) for more info). LSL has been used for cognitive studies involving physiological signal measurements in which timing was critical [42, 43, 44, 45, 46]. It supports language bindings in multiple programming languages, as well as writing functions for adding custom data source to the data streaming system. We modified a C++ callback code available in the LSL Github Repository to catch certain key events and send Arduino LED events similar to Key2Led condition, but bypassing 3D engines for the key event recognition and getting them directly from the OS level: The results from the set of measurements using LSL and a C++ callback function for key events can be seen in Figure 10. With a mean latency of 9.950ms (SD 1.700) from physical key press event to the Arduino sensor trigger, we are seeing much lower average latency levels that compare to older PS/2 devices, as well as more stable variations in the latency. By Fig. 8: Latency measured between keyboard press behavior and marker execution for keyboard event recognition code in 3D engine (Key2Led), for both Unity and Unreal Engine. Higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab’s findpeak() function. Lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe. logging keypress events or participant behavioral input through separately run programs such as LSL, we believe some of the issues regarding input lag variation in experiments using graphics and compute-heavy 3D engines can be alleviated somewhat. ### _Stimulus presentation code to auditory stimulus onset delay in Unity_ In interactive experiments utilizing VR technology, especially in those that aim to create naturalistic experimental environment with immersion, it is often worth considering a multisensory presentation of stimuli. The addition of auditory components to the visual stimuli presentation would create a much more immersive VR simulation. And like visual stimuli, presentation of auditory stimuli needs to be considerably precise in timing as well for event related design experiments measuring physiological and behavioral response to stimuli. We deemed it was also worth investigating the latency of auditory stimuli onset code and the physical propagation of the stimuli sound. In the case of Unity, one can utilize the base sound functionality provided by the engine, or, use 3rd-party sound engines that are compatible with the 3D engine such as FMOD ([https://www.fmod.com/](https://www.fmod.com/)). Fig. 11: Circuit diagram of the latency measuring apparatus, setup pictures Fig. 10: Keyboard input to C++ LSL code register. 
Higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab’s findpeak() function. Lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe. Fig. 9: Code for sending markers after catching Key events on LSL, and not Unreal or Unity library from Unity does not provide a lot of tweaking options to optimize for performance, FMOD allows manually setting sound playback buffer sizes. For this study, we used a soundfile from [47] as the stimulus to playback, either using Unity's default sound library, or using FMOD with a buffer size of either 512 or 1024. A line-out cable (3.5mm M/M) was plugged into the speaker jack of the experimental computer, with the other end being connected to the oscilloscope probe as can be seen in Figure 11. We did similar measurements like in Stim2Disp conditon, measuring the latency between stimuli onset code and the actual propagation of the sound in the sound cable. We report the results in Figure 12. While using FMOD with a buffer size of 512 yielded the best results, we observe that latency for auditory stimuli presentation is much worse compared to visual stimuli both in mean accuracy and in variation. Based on this observation, we believe caution is warranted when using auditory stimuli, especially when in the absence of a concurrent visual stimuli. ### _Future works_ As Wimmer et al. [25] reported, specific latency measures for experimental setup are strongly dependent on the specifics of the hardware and software one acquires for the experiment. Considering the continuously developing landscape in VR and its related hardware/software stack, simply measuring the latency of each setup is not only insufficient, but it is a fruitless endeavor long-term. Instead, it would be more prudent to develop a framework capable of measuring delays for configurable setup on the go: this is our most immediate next step. Furthermore, the purpose of establishing latency value distributions are to ultimately utilize them in developments of VR-based behavioral experiments in event-related designs to collect synchronized time-dependent behavioral and physiological data; for our purposes of investigating underlying brain processes, we are especially interested in utilizing these findings to create latency-optimized VR EEG experiments in immersive, naturalistic 3D. ## Acknowledgments This study was supported by the National Research Foundation of Korea under project BK21 FOUR and grants NRF-2022R1A2C2092118, NRF-2022R1H1A2092007, NRF-2019R1A2C2007612, as well as by Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning; No. 2019-0-00079, Department of Artificial Intelligence, Korea University; No. 2021-0-02068, Artificial Intelligence Innovation Hub). Fig. 12: Stimuli onset code Led to actual auditory stimuli propagation. Higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab’s findpeak() function. Lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe.
2302.01811
CheckedCBox: Type Directed Program Partitioning with Checked C for Incremental Spatial Memory Safety
Spatial memory safety violation is still a major issue for C programs. Checked-C is a safe dialect of C and extends it with Checked pointer types and annotations that guarantee spatial memory safety in a backward-compatible manner, allowing the mix of checked pointers and regular (unchecked) pointer types. However, unchecked code vulnerabilities can violate the checked code's spatial safety guarantees. We present CheckedCBox, which adds a flexible, type-directed program partitioning mechanism to Checked-C, by enhancing the Checked-C type system with tainted types that enable flexible partitioning of the program into checked and unchecked regions, in a manner such that unchecked region code does not affect the spatial safety in the checked region. We formalize our type system and prove the non-crashing and non-exposure properties of a well-typed CheckedCBox program. We implemented CheckedCBox in a configurable manner, which enables us to use existing sandbox mechanisms (eg WebAssembly) to execute programs. Consequently, in doing so, CheckedCBox has prevented four known vulnerabilities by efficiently partitioning the program.
Liyi Li, Arunkumar Bhattar, Le Chang, Mingwei Zhu, Aravind Machiry
2023-02-03T15:31:35Z
http://arxiv.org/abs/2302.01811v1
CheckedCBox: Type Directed Program Partitioning with Checked C for Incremental Spatial Memory Safety (Extended Version) ###### Abstract. Spatial memory safety violation is still a major issue for C programs. Checked C is a safe dialect of C and extends it with checked pointer types and annotations that guarantee spatial memory safety in a backward compatible manner, allowing the mix of checked pointers and regular (unchecked) pointer types. However, unchecked code vulnerabilities can violate the checked code's spatial safety guarantees. We present CheckedCBox, which adds a flexible, type directed program partitioning mechanism to Checked C, by enhancing the Checked C type system with tainted types that enable flexible partitioning of a program into c region and u region, such that u region code does not affect the spatial safety in c region. We formalize our type system and prove the non-crasching and non-exposure properties of a well-typed CheckedCBox program. We implemented CheckedCBox in a configurable manner, which enables us to use existing sandbox mechanisms (_e.g._, WebAssembly) to execute program partitions seamlessly. Our evaluation on seven programs shows that CheckedCBox can effectively and efficiently partition programs by preventing four known vulnerabilities. + Footnote †: These authors contributed equally to this work. + Footnote †: These authors contributed equally to this work. + Footnote †: These authors contributed equally to this work. ## 1. Introduction Vulnerabilities due to memory corruption, especially spatial memory corruption, are still a major issue for C programs (Bahdan et al., 2017; Chen et al., 2017; Chen et al., 2018) despite many efforts that tried to prevent them (Wang et al., 2018). Several industrial and research efforts, including CCured (Wang et al., 2018), Softbound (Wang et al., 2018), and ASAN (Shi et al., 2018), have investigated ways to better compile C programs with automatic spatial safety enforcement. These approaches all impose performance overheads deemed too high for deployment use. Recently, Elliott et al. (Elliott et al., 2018) and Li et al. (Li et al., 2018) introduced and formalized Checked C, an open-source extension to C, to ensure a program's spatial safety by introducing new pointer types, _i.e._, checked (c) pointer types. The checked pointers are represented as system-level memory words without "fattening" metadata (Li et al., 2018), and ensuring backward compatibility, _i.e._, developers can use checked and regular (unchecked u) pointers within the same program. However, as we explain in Section 2.2, the uncworted or unchecked (u) code can violate guarantees provided in c regions. We need to ensure that _code executed as part of unchecked (u) regions does not lead to the safety violations in checked (c) regions with the use of program partitioning mechanism (Zhou et al., 2018)_. Existing such mechanisms are not suitable as they are based on process isolation and have high overhead, and are _data-centric_ (Section 2.3). But in our case, we want a low-overhead code-centric partitioning, where the u region code (or functions) should be isolated (or partitioned) from c one. We also want the technique to co-exist and be compatible with Checked C guarantees such that the partition containing c region code can maintain spatial safety. Here, we propose a type-directed code-centric program partitioning approach. 
Specifically, our system, CheckedCBox, extends Checked C's checked and unchecked pointer types--representing safe and unsafe program pieces--with **tainted**(t_*) types running on an isolated sandbox mechanism, forbids the communication between checked and unchecked type entities, and enforces the communication between checked and unchecked types through the uses of tainted types with additional validity checks. The developer starts by marking desired (_i.e._, unchecked, u) functions and pointers used in functions as tainted. Then, CheckedCBox partitions the given program into two partitions (u and c regions) of different privileges: * u _region_ (low privilege tainted region, extended from the unchecked region in Checked C): this partition contains tainted types (_i.e._, functions and pointers) and can only access tainted and unchecked pointers. * c _region_ or _safe region_ (high privilege untainted or checked region): This partition contains the remaining (untainted) code and data and has complete access to c region. The functions in c region can invoke any function in u region and access all its data but not the other way around, except for call-back functions, which we will discuss later. The c region code is executed as a regular program, while the u region partition will be executed in an existing sandboxed environment (_e.g._, WASM sandbox), with additional instrumentations to facilitate the communication between code in c and u regions. The combination of tainted types and privileged partitions enforces isolation and provides memory safety without transforming all unchecked C code to Checked C code, because unchecked types can stay in u region, and c region code can access tainted type entities that are allocated in u region. Although memory isolation prevents direct violations, u region code can still affect c region through tainted pointers by confused deputy attacks (Li et al., 2018; Chen et al., 2018), _e.g._, by using a valid c region address in a tainted pointer. Our compiler avoids these attacks by ensuring using dynamic checks that tainted pointers validly point to u region address space. Such checks are statically generated by our compiler. In summary, we make the following three main contributions. **CheckedCBox Type System, Formalism and Compiler**. We present a type system that integrates tainted types with Checked C and provides additional guarantees--the _non-crasching_ and _non-exposure_ guarantees, _i.e._, a well-typed CheckedCBox program can never crash due to spatial safety violations, as well as u region code cannot directly observe a checked pointer address. We extend the Checked C compiler to support the type system and formalize it by extending Checked C formalism (Li et al., 2018) with the non-crasching and non-exposure guarantees. We formally prove theorems related to the two guarantees and use model-based randomized testing (Li et al., 2018) to certify the simulation relation between the CheckedCBox semantics and its compiler formalism. To the best of our knowledge, CheckedCBox is the first C(-like) language and compiler formalism with the program partitioning mechanism. **Type-Directed Program Partitioning**. We present a type-directed program partition technique to separate c and u code regions and ensure the above guarantees. Our modular design enables us to use existing sandbox techniques to enforce memory and execution isolation, with the implementation of tainted pointers in the CheckedCBox compiler. 
**Supporting callbacks to c region with no Checked Pointer Exposure**. Although we disallow access to c region from u region directly, there can be cases where such access is needed. Specifically, when c region wants to provide access to certain shared checked data to u region. To enable this, we support callback functions in c region that can be invoked from u region through function pointers. However, knowing the address of c region functions in u region violates the non-exposure guarantee and leads to other attacks (Kumar et al., 2019). We handle this by using indirection. Specifically, instead of directly accessing the c region callbacks, the u region accesses them using a tainted-typed protected tramoline function, which directs the execution to the appropriate callback function. In addition, the tramoline function itself is referenced using an opaque index rather than its virtual address, implemented through existing sandboxing techniques. We evaluated CheckedCBox1 by partioning seven large real-world programs to demonstrate its effectiveness. Our evaluation shows that CheckedCBox provides a flexible, low-overhead program partitioning mechanism and guarantees spatial memory safety. Footnote 1: Our implementation is available open source at [https://github.com/REDACTED](https://github.com/REDACTED). ## 2. Background and Motivation Here, we brief Checked C and the motivation for CheckedCBox. ### Checked C Checked C development began in 2015 by Microsoft Research, but it was forked in late 2021 and is now actively managed by the Secure Software Development Project (SSDP). Details can be found in a prior overview (Becker et al., 2017) and the formalism (Kumar et al., 2019). **Checked Pointer Types**. Checked C introduces three varieties of _checked pointer_: * _Ptr<T> (ptr)_ types a pointer that is either null or points to a single object of type \(T\). * _Array_ptr<T> (arr)_ types a pointer that is either null or points to an array of \(T\) objects. The array width is defined by a _bounds_ expressing, discussed below. * _NT_Array_ptr<T> (n_tarr)_ is like _Array_ptr<T> except that the bounds expression defines the _minimum_ array width-additional objects may be available past the upper bound, up to a null terminator. Both \(arr\) and \(narr\) pointers have an associated bounds which defines the range of memory referenced by the pointer. The three different ways to declare bounds and the corresponding memory range is: _Array_ptr<T>: count(n) [p, p+sizeof(\(T\)) x n)_ range is: _Array_ptr<T>: p byte_count(b) [p, p+b)_ _Array_ptr<T>: bounds(x,y) [x,y)_ The bounds can be declared for \(narr\) as well, but the memory range can extend further to the right, until a NULL terminator is reached (_i.e._, NULL is not within the bounds). **Ensuring Spatial Memory Safety**. The Checked C compiler instruments loads and stores of checked pointers to confirm the pointer is non-null, and additionally the access to \(arr\) and \(narr\) pointers is within their specified bounds. For example, in the code if (n>0) a[n-1] -... the write is via address \(\alpha\) = a + sizeof(int) X (n-1). If the bounds of a are count(u), the inserted check confirms \(\texttt{a}\leq\alpha<\texttt{a}+\texttt{sizeof(int)Xa}\) prior to dereference. Failed checks throw an exception. Oftentimes, inserted checks can be optimized away by LLVM resulting in almost no runtime overhead (Becker et al., 2017). **Backward Compatibility**. Checked C is backward compatible with legacy C as all legacy code will type-check and compile. 
However, the compiler adds the aforementioned spatial safety checks to only checked pointers. The spatial safety guarantee is partial when the code is not fully ported. Specifically, only code that appears in _checked code regions_ (c region), is guaranteed to be spatially safe. c regions can be designated at the level of files, functions, or individual code blocks using the checked keyword.2 Within c regions, both legacy pointers and certain unsafe idioms (_e.g._, _variadic_ function calls) are disallowed. Footnote 2: You can also designate _unchecked_ regions (u region) within checked ones. **Converting C to Checked C**. It is not possible to fully automate the conversion of C code to Checked C due to the requirement for semantic reasoning and other modifications such as refactoring. We provide more details on this in Appendix A.1. ### No Safety Against u Regions Checked C provides spatial safety guarantees for completely converted programs, _i.e._, programs that uses _only_ checked types and no regular pointer types. A partially annotated program can still enjoy spatial safety only if checked pointers do not communicate with any unchecked ones. For instance, in the example below, there are no spatial safety violations in the function func as it uses only checked pointers. However, the other unconverted code regions (or unsafe regions) can affect pointers in safe regions and violate certain assumptions leading to vulnerabilities, as demonstrated by cross-language attacks (Kumar et al., 2019). Although the blankeness proof exists (Kumar et al., 2019; Kumar et al., 2019) for Checked C, it does not state that spatial safety violations cannot happen in c regions but rather states that c regions _cannot be blamed for any spatial safety violations_. Consider the following example: ``` 1//cregioncode 2intfunc(array_ptr<char>p:count(5)){ 3%.p[4].. 4} 5//uregioncode 6... 7str="he"; 8... 9func(#assume_bounds_cast<char>(str,5)); ``` Here, the c region function func expects a pointer to a buffer of five elements, but the u region code invokes the function (Line 9) with a buffer of 2 elements. This results in a spatial safety violation (SS3) in the c region, but of course, the blame or the root cause is in the u region (#). Furthermore, since c and u regions execute in the same address space, spatial memory corruptions (_e.g._, buffer overflow) in u regions can take down the complete program despite having c regions. We need _an isolation mechanism to ensure that code executed as part of \(u\) regions does not violate the safety guarantees in \(c\) regions._ ### Program Partitioning Program partitioning (Srivastava et al., 2016) is a well-known technique to divide a program into multiple isolated parts or partitions. There has been considerable work (Bahdan et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019) in the area. Most of these techniques are _data-centric_(Chen et al., 2016; Chen et al., 2017), wherein program data drives the partitioning. Specifically, given sensitive data in a program, the goal is to partition functions into two parts or partitions based on whether a function can access the sensitive data. The performance overhead of these approaches is dominated by marshaling costs and depends on the usage of sensitive data. The overhead of state-of-the-art approaches, such as Glamdring (Glamdring, 2016) and PtrSplit (Pitt, 2017), is prohibitive and varies from 37%-163%. ## 3. 
Overview Figure 1 shows the interaction between various components of CheckedCBox. Given an appropriately annotated source program and a sandbox configuration, CheckedCBox creates an executable such that all tainted functions and data reside in a sandbox and the \(c\) region (non-sandbox) is not affected by the code in the sandbox. Here, we brief CheckedCBox from a developer perspective and the details of individual components in the later sections. ### Running Example To explain changes required by the CheckedCBox compiler, Listing 1 shows the C code of the redacted version of a simple network server with server_loop as its entry point (). The server runs in a loop and calls handle_request, which handles a network request. The function handle_request reads data from the socket Figure 2. Funters in Listing 1 annotated (manually or through automated tools like 3C) with Checked Types. through read_msg and based on the first byte, either process_req1 or process_req2 is called to handle the request. **The Vulnerability.** There is an arbitrary memory write vulnerability (indicated by \(\#\)) in process_req1 because of using \(\mathtt{i}\) as an index into the array msg without any sanity check. The variable \(\mathtt{i}\) can take any integer value as it is parsed from msg, whose content is read from socket in read_msg indicated by \(\clubsuit\). **Goal.** The developer's goal is to partition the code in Listing 1 so that spatial memory vulnerabilities do not affect the other part of the program. Ideally, the developer can convert the entire code to Checked C so that we achieve full spatial memory safety. However, there might be considerable conversion efforts in Checked C. For instance, void* pointers are not directly supported by Checked C. Consequently, the developer needs to convert functions using void* pointers into generic versions - this could be tedious. To handle this, the developer can do a best-effort conversion and annotate only a few pointers _e.g._, by using an automated conversion tool such as 3C which annotates few pointers as shown in Listing 2. However, as shown in Section 2.2, \(\mathtt{u}\) region code can also affect the safety of checked pointers. ### CheckedCBox Annotations To overcome the above difficulty, The developer marks risky functions with unchecked pointers, _i.e._, read_msg and process_req1 as tainted as shown in Listing 3 indicated by \(\clubsuit\)by using with CheckedCBox, and results in the partially annotated program with checked and tainted types _i.e._, Listings 2 and 3 atop of Listing 1. The initial tainted functions might require other pointers to be marked as tainted according to our typing rules (Section 4.2). The developer uses our type checker to identify the additional required annotations (\(\clubsuit\)) and adds them as shown in Listing 4. The resulting well-typed program as shown in Listing 5 is passed to our source level program partitioner along with certain configuration parameters of the target sandbox. ### Partitioning Our partitioner splits the provided program into two sets of source files with the necessary source changes required to communicate with sandboxed code. These sets of source files are compiled with the corresponding compilers to get the corresponding object files. The \(\mathtt{c}\) region object file has the necessary runtime checks enforcing CheckedCBox guarantees. The \(\mathtt{u}\) region object file is produced according to the corresponding sandbox mechanism. 
Finally, these two object files are linked along with the necessary sandbox libraries to produce the final executable, such that all the tainted functions are executed in a sandbox (u region) and the rest of the functions run as regular code (c region).

## 4. CheckedCBox Formalism

This section describes the formal core model of CheckedCBox, named CoreChkCBox. We present its syntax, semantics, and type system, as well as CoreChkCBox's meta-theories, including the type soundness, non-exposure, and non-crashing theorems.

### Syntax

Figure 2 shows the syntax of CoreChkCBox.

Figure 1. Overview of interaction between various phases of CheckedCBox.

Figure 2. CoreChkCBox Syntax.

**Type Syntax.** At a high level, we classify types as word-size value, multi-word value, or function types. A word-size value can be either an integer or a pointer. Every pointer type (\(\mathsf{ptr}^{\xi}\ \omega\)) includes a pointer mode annotation (\(\xi\); the difference between context and pointer modes is introduced shortly below) that is either checked (c), tainted (t), or unchecked (u), and a type (\(\omega\)) denoting the type of value it points to. A multi-word value type (\([\beta\ \tau]_{\kappa}\)), which ranges over arrays and null-terminated arrays, is constructed from the type of the elements in the array (\(\tau\)), an array bound (\(\beta\)) comprised of a lower and upper bound on the size of the array (\((b_{l},b_{h})\)), and an array flag (\(\kappa\)). Bounds \(b\) are limited to integer literals \(n\) and expressions \(x+n\). Whether an array pointer is null terminated or not is determined by the annotation \(\kappa\), which is \(nt\) for null-terminated arrays and \(\cdot\) otherwise (we elide \(\cdot\) when writing types). An example representation of an array and a null-terminated array in CoreChkCBox is shown below:

_Array_ptr<\(\tau\)> : count(\(n\)) \(\Leftrightarrow\) \(\mathsf{ptr}^{c}\ [(0,n)\ \tau]\)

_NT_Array_ptr<\(\tau\)> : count(\(n\)) \(\Leftrightarrow\) \(\mathsf{ptr}^{c}\ [(0,n)\ \tau]_{nt}\)

For simplicity, we write \(\mathsf{ptr}^{c}[b\ \tau]\) to mean \(\mathsf{ptr}^{c}\ [(0,b)\ \tau]\), so the above examples could be rewritten as \(\mathsf{ptr}^{c}[n\ \tau]\) and \(\mathsf{ptr}^{c}[n\ \tau]_{nt}\), respectively.

**Disallowing Unsafe Types.** The well-formedness of these types is presented in Appendix A.2. It prevents unsafe types from being constructed. Consider the type _t_Array_ptr<_Ptr<int>>, which describes a tainted array of checked pointers. This is not well-formed in CoreChkCBox because it potentially exposes checked pointer addresses in a u region when the tainted (t) array is used. Nevertheless, we can have a checked array whose elements are tainted pointers: e.g., _Array_ptr<_t_Ptr<int>> is a valid type. Function types are represented using dependent function declarations, i.e., \(\forall\ \overline{x}.\ \overline{\tau}\to\tau\), where \(\overline{x}\) represents a list of int-typed variables that bind the variables appearing in \(\overline{\tau}\) and \(\tau\). An example of a function pointer type is shown below:

_t_Ptr<(int)(_t_NT_Array_ptr<\(\tau_1\)> : count(\(n\)), _t_NT_Array_ptr<\(\tau_2\)> : count(\(n\)), int \(n\))> \(\Leftrightarrow\) \(\mathsf{ptr}^{t}\ (\forall\ n.\ \mathsf{int}\times\mathsf{ptr}^{t}\ [(0,n)\ \tau_{2}]_{nt}\times\mathsf{ptr}^{t}\ [(0,n)\ \tau_{1}]_{nt}\to\mathsf{int})\)

The function type also has well-formedness requirements (Appendix A.2), which disallow nesting checked pointers inside tainted pointers.
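Returning to the array examples above, the well-formedness restriction can be pictured at the source level as which qualifier nestings are accepted. The declarations below are a minimal illustrative sketch only, using the _TArray_Ptr/_TPtr spellings that appear later in the paper; they are not taken from the authors' listings.

```c
/* Ill-formed: a tainted array of checked pointers. If the u region
 * (sandbox) walked this array, it could observe c region (checked)
 * addresses, violating the non-exposure guarantee. */
_TArray_Ptr<_Ptr<int>> bad : count(4);

/* Well-formed: a checked array whose elements are tainted pointers.
 * The array itself is c region data; its elements may only refer to
 * u region (sandbox) memory. */
_Array_ptr<_TPtr<int>> ok : count(4);
```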
Furthermore, these requirements also ensure that all variables in \(\overline{r}\) and \(\tau\) are bounded by \(\overline{x}\). **Expressions.** CoreChxCBox expressions include common expressions such as addition (\(e_{1}+e_{2}\)), pointer dereference (\(\ast\;e\)) and assignment (\(\ast\;e_{1}+e_{2}\)), along with expressions that require special handling, such as, static casts (\((\tau)e\)), dynamic casts (\((\langle\tau\rangle e)\)[\(3\)], the strlen operation (strlen(\(x\))), memory allocations (malloc(\(\xi\),\(\omega\))), function calls (\(e(\overline{e})\)), unchecked blocks (unchecked(\(\overline{x}\)){\(e\)}), and checked blocks (checked(\(\overline{x}\)){\(e\)}). For example, a dynamic bounds cast dyn_bounds_cast<Array_ptr<r>>(\(e\),count(\(n\))) is formalized as (ptr\({}^{c}\)[\(n\;\tau\)]) in CoreChxCBox. We denote integer literals \(n\) with a type \(\tau\)(\(i.e\), int or ptr\({}^{\xi}\)\(\omega\)), enabling the use of fixed addresses as pointers. For example, \(0.\)ptr\({}^{\xi}\)\(\omega\) (for any \(\xi\) and \(\omega\)) represents a null pointer. The heap allocation \(\texttt{malloc}(\xi,\omega)\) includes a mode flag \(\xi\) for allocating memory in different regions, \(c\) mode pointer in \(c\) region or u and t mode pointers in u region. We disallow \(\omega\) to be a function type (\(\forall\;\overline{x}\). \(\overline{r}\to\tau\)). The checked and unchecked expressions are used to delimit code regions. To guarantee the non-exposure property, we extend the Checked C syntax to include checked\((\overline{x})\){\(e\)} and unchecked\((\overline{x})\){\(e\)} blocks, where \(\overline{x}\) represents all variables that are allowed to communicate between instructions outside and inside of the block \(e\), and it cannot contain checked pointers. ret is introduced by the semantics when evaluating a let binding; explained shortly below. CoreChxCBox aims to be simple enough to work with but powerful enough to encode realistic CheckedCBox idioms. For example, loops can be encoded as recursive function calls.structs are not included in Figure 2 for space reasons, but they are supported as shown in (Kang et al., 2018). C-style unions have no safe typing in Checked C, so we omit them. Although the base syntax of CoreChxCBox is similar to that of Checked C model (Kang et al., 2018), there are considerable enhancements to support tainted types (\(t\) in \(\xi\)), special heap handling (\(i.e\), \(\texttt{malloc}(\xi,\omega)\)), and explicit specification of pointers in the checked and unchecked regions. ### Typing and Semantics The CoreChxCBox type system is a flow-sensitive, gradual type one that generates additional dynamic checks that are inserted in the typing checking stage and executed in the semantic evaluation stage. Our type checker restricts the usage of tainted and checked pointer types to ensure that tainted pointers do not affect checked types, along with enforcing Checked C typing rules (Kang et al., 2018). As partly shown in Figure 4 (labeled as T-\(X\)), each typing judgment has the form \(\Gamma;\Theta\;\vdash_{m}\;e:\tau\), which states that in a type environment \(\Gamma\) (mapping variables to their types) and a predicate environment \(\Theta\) (mapping integer-typed variables to Boolean predicates), expression \(e\) will have type \(\tau\) if evaluated in context mode \(m\), indicating that the code is in \(m\) region. 
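To connect the context mode m in this judgment with source code, recall from Section 2.1 that checked and unchecked regions are designated with region keywords at the level of files, functions, or blocks. The sketch below is illustrative only; it uses Checked C's _Checked/_Unchecked spellings (with the qualifier placed after the parameter list, following the style of the paper's listings) and elides CheckedCBox's \(\overline{x}\) variable lists on blocks.

```c
/* The function body is type-checked in context mode c: only checked
 * (and, in CheckedCBox, tainted) pointers may be dereferenced here. */
int sum(_Array_ptr<int> a : count(n), int n) _Checked {
  int s = 0;
  for (int i = 0; i < n; i++)
    s += a[i];              /* compiler-inserted null/bounds checks */
  _Unchecked {
    /* Context mode switches to u: legacy pointers and unsafe idioms
     * are permitted, but in CheckedCBox no checked pointer may flow
     * into or out of this block. */
  }
  return s;
}
```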
The operational semantics for CoreChxCBox is defined as a small-step transition relation with the judgment \((\varphi,\mathcal{M},e)\longrightarrow_{m}(\varphi^{\prime},\mathcal{M}^{\prime},r)\), as shown in Figure 3. Here, \(\varphi\) is a _stack_ mapping from variables to values \(n\!:\!\tau\) and \(\mathcal{M}\) is a _heap_ that is partitioned into two parts (\(c\) and \(u\) heap regions), each of which maps addresses (integer literals) to values \(n\!:\!\tau\). The complete set of typing rules and special handling of (NT)-arrays are provided in Appendices A.3 and A.4. We wrote \(\mathcal{H}(m,n)\) to retrieve the \(n\)-location heap value in the \(m\) heap, and \(\mathcal{M}(m)\){\([n\mapsto n^{\prime}\!:\!\tau]\) to update location \(n\) with the value \(n^{\prime}\!:\!\tau\) in the \(m\) heap. While heap bindings can change, stack bindings are immutable--once variable \(x\) is bound to \(n:\tau\) in \(\varphi\), that binding will not be updated. As mentioned, value \(0\!:\!\tau\) represents a null pointer when \(\tau\) is a pointer type. Correspondingly, \(\mathcal{H}(m,0)\) should always be undefined. The relation steps to a _result_\(r\), which Figure 3. CoreChxCBox Semantics: Evaluation is either an expression, a null or bounds failure, represent an expression right after the reduction, a null-pointer dereference or out-of-bounds access, respectively. Such failures are a _good_ outcome; struck states (non-value expressions that cannot transition to a result \(r\)) characterizing undefined behavior. The rules for the main operational semantics judgment _evaluation_ are given at the bottom of Fig. 3. The first rule takes an expression \(e\), decomposes it into an _evaluation context_\(E\) and a sub-expression \(e^{\prime}\) (such that replacing the hole \(\Box\) in \(E\) with \(e^{\prime}\) would yield \(e\)), and then evaluates \(e^{\prime}\) according to the _computation_ relation \((\varphi,\mathcal{H},e^{\prime})\longrightarrow(\varphi,\mathcal{H},e^{\prime \prime})\), whose rules are given along with type rules in Fig. 4 (labeled as S-\(X\)), discussed shortly. The _mode_ function in Fig. 3 determines the context mode, i.e., region, that the expression \(e^{\prime}\) locates based on the context \(E\). In Listing 3, the function call handle_request is in u region since it is inside an unchecked function server_loop. The second rule describes the exception handling for possible cranking behaviors in u regions. Operations in u region can non-deterministically crash and the CheckerCBox sandbox mechanism recovers the program to a safe point \((0\cdot\tau)\) and continues with the existing program state. Evaluation contexts \(E\) define a standard left-to-right evaluation order. **Modes, Static Casting, and Subtyping**. In CoreChkCBox, Context modes \(m\) appearing in a type rule determine the code region that permits pointer dereferences and value-assignments, which also depends on the pointer modes. We define a three point lattice \(\vec{\xi}_{1}\leq\vec{\xi}_{2}\)4 to describe such permission, where \(\vec{\mathtt{t}}\leq\vec{\xi}\) and \(m\leq m\). This means that a t pointer can be dereferenced and value-assigned in any region, while c and u pointers can only perform such operations in c and u regions, respectively. Footnote 4: In typing rule, the lattice is usually used as \(\vec{\xi}\leq m\) as \(m\) represents context modes. CoreChkCBox also provides static casting operations. 
As described in rule T-CastPtr in Figure 4, an pointer typed expression of type \(\mathtt{ptr}^{\vec{\xi}_{1}}\)\(\mathtt{r}_{1}\) can be casted to another pointer type \(\mathtt{ptr}^{\vec{\xi}_{2}}\)\(\mathtt{r}_{2}\), if \(\mathtt{ptr}^{\vec{\xi}_{1}}\)\(\mathtt{r}_{1}\) subtypes \((\Box_{\Box})\) to \(\mathtt{ptr}^{\vec{\xi}_{2}}\)\(\mathtt{r}_{2}\), i.e., \(\mathtt{ptr}^{\vec{\xi}_{1}}\)\(\mathtt{r}_{1}\)\(\mathtt{r}_{2}\). In CoreChkCBox, except that we can cast a t mode pointer to a u mode one, all subtyping relations are between two types with the same mode, meaning that \(\vec{\xi}_{1}\) and \(\vec{\xi}_{2}\) above are mostly the same and the above mode lattice \((\leq)\) has no business with subtyping. ``` //_Ptrint>x;_t_Ptrint>y;int=z; z=(int*)y;//Thisisokay. x=(_Ptrint>)y;//Notallowed. ``` In the above example, a t mode pointer can be cast to u mode but casting t mode to c mode is disallowed. The complete subtyping relation was described in Appendix A.2. Notice that let statements are immutable in CoreChkCBox, so the following code is not possible, because variables x and y must have the same type in CoreChkCBox. ``` //_Ptrint>x;_t_Ptrint>y; x=y;//Notallowed. ``` **Pointer Dereference**. The type and semantic rules for pointer dereference (T-Def, S-DefC, S-DefT, S-DefTull in Figure 4) reflect the key CoreChkCBox feature, where our type checker directs the insertions of dynamic checks executed in the evaluation stage. The type rule (T-Def) ensures that pointers are used with the right modes in the right region (\(\vec{\xi}\leq m\)). With the dynamic checks inserted by the compiler, rule S-DefNull ensure that if a null pointer is used, CoreChkCBox captures the runtime error. Type and semantic rules for array types and pointer assignments are given in Appendices A.3 and A.4. Rules S-DefC and S-DefT are for c and t mode pointer dereferences, respectively. In addition to the no null check in c mode pointer dereference, any dynamic heap access of a tainted (t) pointer requires a _verification_\((\emptyset;\mathcal{H};\mathcal{0}\vdash_{n}a:\tau)\), which refers to that the pointer value \(n_{a}\) is well-defined in \(\mathcal{H}(m,n_{a})\) and has right type \(\tau\). Figure 4. Selected typing (T-\(X\)) and semantic (S-\(X\)) rules. First line is for cast operations, second line is for pointer dereferences, third line is for checked/unchecked blocks, and the rest is for function calls. **Unchecked and Checked Blocks.** The execution of a checked or unchecked block represents the context switching from a c to an u region, or vice versa, with its type and semantic rules given in Figure 4. In this context switching, to guarantee the checked (c) pointer non-exposure property, checked pointers are not allowed to go cross different regions, which is guaranteed by the predicates \(\forall x\in\overline{x}\ \cdot\neg\mathsf{c}(\Gamma(x))\) and \(\neg\mathsf{c}(\tau)\), as well as the check that all free variables in the block content \(e\) are in \(\overline{x}\). For example, StringAuth in Section 5.1.2 is a trampoline function that disallows checked pointers as arguments and return values. The use of the function in the following _T_StringAuth, which is in u region, cannot legally acknowledges any checked pointers; otherwise, we might expose a checked pointer address to unsafe code regions. In CheckedCBox, we actually permits the accesses of checked pointers inside StringAuth, since the function body of a trampoline function is in c region. More information is given in Section 5.1.2. 
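In implementation terms (Section 5.2 gives the details), the verification attached to tainted-pointer dereferences amounts to a null test plus a range test against the sandbox address space, executed before the access. The following C sketch is only a conceptual rendering; SBX_LOW, SBX_HIGH, and fail_null_or_bounds are hypothetical names rather than the actual runtime's API.

```c
#include <stddef.h>
#include <stdint.h>

extern uintptr_t SBX_LOW(void);        /* hypothetical: sandbox base  */
extern uintptr_t SBX_HIGH(void);       /* hypothetical: sandbox limit */
extern void fail_null_or_bounds(void); /* hypothetical: runtime fault */

/* Conceptual check guarding a tainted-pointer dereference in c region:
 * the pointer must be non-null and must point into the u region
 * (sandbox), so it can never alias c region memory. */
static inline void *check_tainted_deref(void *t_addr) {
  uintptr_t a = (uintptr_t)t_addr;
  if (t_addr == NULL || a < SBX_LOW() || a >= SBX_HIGH())
    fail_null_or_bounds();
  return t_addr;
}
```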
**Dependent Function Pointers.** Rule T-Fun (Figure 4) states the type judgment for dependent function pointer application, where we represent the result of replacing all integer bound variables \(\overline{x}\) in the type \(\tau\) with bound expressions \(\overline{e'}\) by \(\tau[\overline{e'}/\overline{x}]\), and write \(\overline{\tau[e'/\overline{x}]}\) to lift the substitution to every type in \(\overline{\tau}\). Given an expression \(e\) of function pointer type (\(\mathsf{ptr}^{\xi}\ \forall\ \overline{x}.\ \overline{\tau}\to\tau\)) and arguments \(\overline{e}\) of types \(\overline{\tau'}\), the result of the application will be of type \(\tau[\overline{e}/\overline{x}]\), provided that for each pair of \(\tau'\) and \(\tau''\) in \(\overline{\tau'}\) and \(\overline{\tau[e/\overline{x}]}\), \(\tau'\) is a subtype of \(\tau''\). Consider the process_req2 function in Fig. 5, whose parameter type for msg depends on m_1: its function pointer type is a dependent type of the form \(\mathsf{ptr}^{\xi}\ (\forall\ m_1.\ \ldots\to\ldots)\), so every call site must supply a bound expression that is substituted for \(m_1\), and each argument type must be a subtype of the corresponding substituted parameter type.
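At the source level, such a dependent function type corresponds to a prototype whose bounds mention another parameter. The sketch below is illustrative only: it mirrors the paper's surface syntax (_TArray_Ptr bounds and the trailing _Tainted qualifier seen in Section 5), it is not the authors' actual Listing 1 code, and marking process_req2 as tainted is an assumption of the sketch.

```c
#include <stddef.h>

/* A prototype whose array parameter's bounds depend on a later
 * parameter, corresponding to ptr^t (forall m_1. ...) above. */
int process_req2(_TArray_Ptr<char> msg : count(m_1), size_t m_1) _Tainted;

int use(_TArray_Ptr<char> buf : count(n), size_t n) {
  /* At the call site, the bound expression n is substituted for m_1;
   * buf must then be (a subtype of) a tainted char array of at least
   * n elements for the call to type-check. */
  return process_req2(buf, n);
}
```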
We will present our implementation using WebAssembly (WASM (Becker et al., 2017)) as our target sandbox. We also formalize the implementation and show a simulation theorem in Appendix A.5. ### CheckMate The CheckMate is primarily implemented in C++ as a Clang frontend tool (3K SLoc). However, we use a small OCAML program (680 SLoc) to remove annotations and make the code compilable with the sandbox compiler (Section 5.1.3). #### 5.1.1. Additional Function Qualifiers In addition to the _Tainted qualifier that marks functions to be in u region, we provide a few other qualifiers that enable developers to provide additional information and ease the partitioning process. Specifically, we provide three additional qualifiers: _Callback_,_Mirror, and _TLIB_. _Callback_: Developers should use this qualifier to mark callback functions, _i.e._, functions in c region, that can be called from the tainted region. The CheckMate inserts appropriate sandbox dependent mechanisms to enable this (5.1.2). _Mirror_: This qualifier permits copying the corresponding function into both c region and u region, which permits the handling of certain simple utility functions that are called from both regions. For example, append_string in our evaluation of parsons_wasm has callers from both the regions. _Mirror int append_string(_TPtr<char> buf, const char= appendStr: ittype(_Nt_array_ptr<const char>), _TPtr<char> buf_start, size_t buf_len) { */* Qualifier Rules. 1.) No access to global data NOT marked "const" /* Callness must be _Tainted or _Mirror ... > Qualifying append_string with _Mirror_ duplicates the function in both regions, allowing calls to append_string with parameter to appendStr as an unchecked or checked pointer within u and c regions, respectively. Consequently complexity from over-tainting is avoided as appendStr need not be tainted in c region and neither are callbacks required to access append_string from u region. "-Mirror" enforces control-flow and data-flow compile-time semantic rules to ensure all variable and function call dependencies of mirrored functions required for u region's compilation are resolved. _TLIB_: This qualifier relaxes type-checking rules on library functions, allowing developers to use the function freely in c region. // First, manually check the memory is in tainted region. // if yes, then call strcmp. if (!(ing_mem_in_range(t_str, t_str + n, SBK_LOW(), SBK_HLIGH())) handle_violation(); // only type checker ignores this because // the _TLIB annotation below. struct(dat, t_str, n); * extern char =struct(char *_restrict _dest, * _TLIB_extern char *struct(char *_restrict _dest, * const char *_restrict _src, size_t _nn); // In the header file > Passing tainted pointer t_str to unqualified strncat above is disallowed without having additional u region implementation for strncat. If a user ascertains that t_str has the right buffer size for strncat, she might label strncat with _TLIB, so that t_str can be treated as an checked pointer parameter; such annotation relaxes type-checking for all the arguments to its calls. It is worth noting that CheckCDBox does not enforce any semantics to ensure _TLIB functions implemented in c region are non memory-modifying; therefore, using _TLIB requires users' awareness of memory address leaks. #### 5.1.2. Generating c region Source Partition We copy all non-tainted functions into c region source files and make the following modifications to enable interaction with u region (_i.e._, sandboxed code). 
We created a library once and for all for each sandbox. This library abstracts the sandbox-specific details and exposes a uniform interface (header file) to be used in c region. For instance, _SBK_() gets an opaque pointer to the target sandbox. _Handling Calls to Tainted Functions_: In c region, we also need to modify calls to tainted functions as they execute inside the sandbox (separate address space) and thus cannot be invoked as regular functions. However, modifying every call site of tainted functions is tedious and also requires precise pointer analysis (Kumar et al., 2017) to handle indirect calls through function pointers. We handle this by _indirection_: Instead of modifying the call sites, we modify the body of tainted functions to invoke the corresponding function in the sandbox. For instance, we modify the body of tainted function process_req1 (Listing 5) in c region as below: ``` int proc_req1(char *msg, size_t n_1)__Tainted { - int re = -1, i; - if (m_1> MIN_SIZE) { -... + return w2c_process_req1(msg, m_1); } ``` This ensures that all calls (even through function pointers) to the tainted function process_req1 are redirected to the sandbox. _Handling_Callback Qualifiers_: As mentioned in Section 5.1.1, functions with these qualifiers can be called from u region. Consider the following StringAuth function that checks whether the provided user input usertoken is authenticated by accessing checked data. Since this needs to be invoked from u region it is annotated as a _Callback. _callback_TPtr<char> StringAuth( _TArray_Ptr<const char> usertoken : count(len), size_t len) { ... // Checks whether usertoken is authenticated /* These functions will be restricted to only accept tained parameters. */ These callback functions are only allowed to use tainted parameters as they will be called from a tainted region. For each such function, we create a corresponding trampoline function that serves as the entry point for the callback function, as shown below: ``` +unsigned int_T_StringAuth(void*sandbox, + unsigned int arg_1, + unsigned long int arg_2) { + // Perform necessary Type-conversion of arguments. + // uname <- convert arg_1 + // len <- arg_2 + ret = StringAuth(uname, len); } ``` + //ret_val<-ret + returnret_val; + } ``` The trampoline function handles the invocations from sandbox (and hence the extra parameter sandbox), performs necessary pointer argument conversion, and eventually invokes the callback. We also add the code to register this trampoline function with the sandbox. 
The registration for the WASM sandbox is generated as a small function that registers the trampoline _T_StringAuth with the sandbox together with its function signature, i.e., its return type and the types of its arguments (all lowered to WASM integer types).
## 6. Evaluation

Our evaluation answers the following research questions:

* **Performance Overhead:** What is the performance overhead (both runtime and memory) in using CheckedCBox?
* **Security Impact:** How effective is the isolation provided by CheckedCBox in preventing security vulnerabilities?

### Dataset

Network-facing programs such as servers directly interact with external input, often process complex data types, and are more susceptible to security issues. We primarily focus on network servers as they can benefit most from our partitioning approach. We use WebAssembly (WASM) as our target sandbox. Consequently, we also want the selected programs to be compilable with the WASM sandbox. We selected network servers that we could (with minimal effort) compile with the WASM compiler. We also selected a few standalone programs suggested by the Checked C team (Crock et al., 2018), which are good candidates to evaluate modifications to Checked C. Table 1 shows the programs selected as part of our evaluation dataset.

### Experimental Setup

All experiments are performed on a 6-Core Intel i7-10700H machine with 40 GB of RAM, running Ubuntu 20.04.3 LTS. We use WASM as our target sandbox with a configuration similar to that of recent work (Zhu et al., 2019), and we use Valgrind's "massif" memory profiler (Vaswani et al., 2017) to measure memory usage, considering the peak heap usage of an application as its memory consumption. We measure runtime as the difference in elapsed clock cycles using the clock() API from POSIX's <time.h> and Linux's time command; we perform every measurement ten times and use the average as the final result.

### Conversion Effort

The flexibility of CheckedCBox (Section 5.3) enables an application to be partitioned in several ways, with varying levels of overhead and assurance. We explore the following three ways:

_Checked C and Tainted Partitions (CTP):_ This is the most comprehensive use case, where the c region partition contains completely annotated Checked C code, and u region contains one or more tainted functions.
This provides the complete spatial safety of the code in c region including isolation from u region. _Only Tainted Partition (TP):_ This is the general partitioning (Section 5.3) use case without Checked C. This is similar to _CTP_, but c region code does not use any Checked C annotations. This provides only the isolation guarantee without spatial safety. _Only Tainted Pointers (TP\({}_{P}\)):_ In this use case, we only use tainted pointers, and all code lies in c region. This is needed for data isolation-only cases, where the developer might want to just isolate certain data (_e.g.,_ user input) from the rest of the program. As explained in Section 5.2, all access through tainted pointers will be dynamically checked to ensure that every access is within the sandbox. This provides partial spatial safety by ensuring that any spatial violation involving tainted pointer cannot affect c region. #### 6.3.1. Conversion Methodology We partitioned each program in our dataset using one of the above three methods. Our goal is to isolate risky input processing routines from the application code. We manually analyze each application's source code to identify which functions handle the input data and how. We also look into previously reported vulnerabilities to identify risky input processing routines. We pick one of the above methods to partition based on how the input data is processed. Table 2 is a summary of our dataset partitioning. For ProFTPD, \({T}_{P}\) method is used and the input data is marked as tainted. Consequently, five other pointers need to be marked as tainted according to the type rules. This results in a total of 6 pointer annotations. There is no code in u region as used \({T}_{P}\) method with only tainted pointers being used. We follow the same approach for LibPNG (png2pm and pnm2png). However, in this case, we have to annotate much more pointers (248) due to the complicated libPNG's internal structures. For MicroHTPD and UFTPD, \({T}\) method is used and we mark all of the direct input handled methods as tainted, which are consequently moved to the sandbox, with annotating several intermediate pointers as tainted. For TinyBignum and Parsons, we follow \({CTP}\) and mark all input processing routines as tainted and place them in the sandbox. The rest of the non-sandboxed code is annotated completely using Checked C types and placed in c region. We ensured that the partitioned programs retained their expected functionality by verifying using corresponding test suites. #### 6.3.2. Conversion Effort The second last column shows the hour numbers for partitioning applications. On average, it takes \(\sim\) 3.5 hours for each partitioning. However, the exact time depends on the complexity of the application and the pointer usage. Although the absolute time is high, partitioning is a one-time effort for each application. We start by annotating functions and then iteratively fixing type-checker errors. Most of the time (80%) is spent on running the type-checker. The type-checker stops at the first error without giving information about other variables that need to be fixed. For an instance, in the following code: _TPtr<int> y =...; int *z; int *z = y; x = z; The type-checker displays an error only for the first assignment. However, to correctly fix it, we need to annotate both x and y. 
If \(N\) pointers need to be annotated, then in the worst case, we might have to run the type-checker \(N\) times, annotating one additional pointer in every run. We plan to fix this in our future work by making the conversion procedure automatic.

Table 1. Programs in our evaluation dataset.

| ID | Program | Description |
|----|---------|-------------|
| 1 | ProFTPD | High performance FTP server |
| 2 | MicroHTTPD | Simple HTTP server |
| 3 | UFTPD | FTP server |
| 4 | LibPNG | Programs to convert between png and pnm (png2pnm, pnm2png) |
| 5 | TinyBignum | Multiple precision integer implementation |
| 6 | Parsons | JSON parsing library |

Table 2. Summary of dataset partitioning for each program: partitioning methodology, pointers annotated, lines of code in the sandbox, conversion time, and CVEs prevented.

### Performance Overhead

Recent work (Kumar et al., 2018) shows that code executed as part of a WASM sandbox incurs significant runtime overhead, _i.e._, \(\sim\)200%. To better understand our runtime overhead, we first perform micro-benchmarking of additional sandbox-related operations in CheckedCBox.

#### 6.4.1. Micro-Benchmarking

Figure 6 shows our micro-benchmarking results. We measure the following operations as part of this:

_Memory access in WASM Sandbox (SBX\({}_{m}\))_: All memory accesses in a sandbox need additional verification by the sandbox runtime, which results in runtime overhead. We perform 100K memory accesses (read and write) in a loop, measure the time inside the sandbox, and compare it with the time executed as a regular program. The results (Figure 6) show that we incur 156.6% overhead for memory accesses in the WASM sandbox compared to those in a normal program. This is in line with the observations of recent work (Kumar et al., 2018).

_Sandbox Roundtrip (SBX\({}_{RT}\))_: We measure the time to make a round trip between c region and the sandbox (u region) compared to a regular function call and return. We create a no-op function below:

```
void noop() { return; }
```

We place this noop function in the sandbox and measure the time to call and return from it:

```
s = clock(); sandbox_noop(); e = clock();
```

We compare the time with a regular call when noop is in c region. As shown in Figure 6, we incur an overhead of \(\sim\)400%; a sketch of this measurement harness is given below.
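The round-trip numbers above come from a simple clock()-based harness. The following is a minimal self-contained sketch of such a measurement; sandbox_noop stands for the sandbox-dispatched entry point used in the snippet above and noop for the same function compiled in c region, both assumed to be provided by the build.

```c
#include <stdio.h>
#include <time.h>

extern void sandbox_noop(void); /* assumed: call dispatched into the WASM sandbox */
extern void noop(void);         /* assumed: the same no-op compiled in c region   */

/* Average clock ticks per call over `iters` invocations of fn. */
static double avg_ticks(void (*fn)(void), long iters) {
  clock_t s = clock();
  for (long i = 0; i < iters; i++) fn();
  clock_t e = clock();
  return (double)(e - s) / (double)iters;
}

int main(void) {
  const long iters = 100000;
  double in_sbx = avg_ticks(sandbox_noop, iters);
  double plain  = avg_ticks(noop, iters);
  printf("round-trip overhead: %.1f%%\n", 100.0 * (in_sbx - plain) / plain);
  return 0;
}
```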
This is also in line with the performance reported by prior works (Kumar et al., 2018; Dosov et al., 2018). This is expected because transitions to/from the sandbox require context switches, which are more expensive than regular function calls (_i.e._, call and ret instructions).

_Tainted Pointer Access in c region (TP\({}_{c}\))_: As explained in Section 5.2, we need to perform pointer swizzling to convert the sandbox-specific representation of tainted pointers to raw addresses. In addition, our instrumentation also checks at runtime that all tainted pointers are within the sandbox address range before accessing them. We measure this additional overhead by comparing tainted pointer accesses with regular pointer accesses. As shown in Figure 6, we incur 34% overhead in accessing tainted pointers in c region, due to the additional validation checks, which require executing two compare instructions for every memory access.

#### 6.4.2. Overhead on Dataset

The first set of bars in Figure 7 shows the overhead of the partitioned programs. The runtime overhead is proportional to the execution time spent in the sandbox and the number of transitions between c region and the sandbox, which coincides with the sandbox execution overhead observed in the micro-benchmark experiments (Figure 6). Table 2 shows the number of lines in the sandbox for our partitioned programs. For ProFTPD, we used only tainted pointers, without code in the sandbox or transitions to/from the sandbox. Consequently, the overhead is less than 4.2%. For UFTPD and MicroHTTPD, we sandboxed only the request handlers (relatively small) that parse messages and return the actions that need to be performed. The server only invokes these handlers on particular requests, resulting in fewer transitions to the sandbox. As expected, the overhead is also low in these cases. For TinyBigNum and Parsons, the overhead is high because of the relatively large amount of code in the sandbox region. In both applications, we place the frequently used parsing functions in the sandbox, resulting in a lot of sandbox transitions, most of them in a loop. The case is slightly different in pnm2png and png2pnm, where we made the entire png structure tainted, which resulted in dynamic checks every time the png struct is accessed. In summary, our results indicate that the runtime overhead largely depends on the sandbox.

Figure 6. CheckedCBox Micro-Benchmarks.

Figure 7. Runtime Overhead of Partitioned Programs.

**Overhead of only CheckedCBox.** To verify the impact of the sandbox on the overall program runtime, we perform a NO-OP sandbox experiment. Our goal is to measure the runtime overhead introduced ONLY by CheckedCBox. We perform this experiment by skipping sandboxing. Specifically, we run the completely annotated program as a regular application without any partitioning. However, we compile the annotated program with the CheckedCBox compiler, which adds the relevant runtime checks (Section 5.2). We modify the instrumentation on tainted pointers to check for a valid pointer (instead of within sandbox bounds); this adds the same number of checks as in the sandboxing case, but the comparison values are different. On evaluating CheckedCBox with a NO-OP sandbox on our entire dataset, we observe significantly less overhead compared to that of the WASM sandbox, as shown in Figure 7. Therefore, CheckedCBox by itself contributes significantly less to the overhead than the sandbox it uses. For TinyBigNum, the overhead is higher at 54.7%.
Our analysis shows that this overhead is because we taint the main input buffer, which is processed in loops. This leads to additional checking for every loop iteration resulting in higher overhead. Another reason is that in the current implementation, our instrumentation is performed at the end after all optimization passes; thus, none of the instrumentation is optimized. We plan to move our instrumentation before the optimization passes and exploit them to optimize further and decrease the runtime overhead. #### 6.4.3. Memory Overhead All programs have a constant memory overhead (\(\sim\)81 KB) mainly for sandbox and a few variables related to creating sandbox and other helper functions. However, similar to the original Checked C, CheckedCBox itself does not add any memory overhead, because the compilation of tainted pointers do not come with any metadata. ### Security Impact The last column of Table 2 shows the list of all spatial safety vulnerabilities in the functions that have tainted types or are isolated in the u region. We re-introduced these bugs in the annotated program and checked whether these bugs could be triggered by the corresponding crashing input or exploit (if available). We also manually verified whether the bug can be triggered or prevented by CheckedCBox. As expected, _all vulnerabilities_ are prevented by CheckedCBox. The symbols and indicate whether the vulnerability was detected by our dynamic instrumentation or isolated in the sandbox, respectively. This shows that CheckedCBox provides an effective mechanism to prevent spatial safety vulnerabilities. ## 7. Limitations and Future Work Despite the effectiveness of CheckedCBox, it has limitations: **Sandbox Dependency:**CheckedCBox assumes the availability of a sandbox and consequently inherits all the limitations of the corresponding sandbox. _e.g._, Programs should be compilable with the sandbox compiler. Also, as shown in Section 6.4.2, the performance of the partitioned applications mainly depends on the sandbox. However, our implementation is not dependent on one specific sandbox and can be easily extended to other sandboxes. As a future work, we will extend our implementation to other sandboxes. **Annotation Effort:** Currently, all taint annotations have to be done manually - such that these annotations satisfy our type checker rules (Section 4.2). This could be tedious based on the complexity of the sandboxed function, its parameter complexity, and its dependency on other functions. We plan to develop an automated annotation tool such that, given the initial annotations (Listing 3), our tool will automatically add all the required annotations (Listing 4) according to our type rules. ## 8. Related Work A number of prior works have looked at formalizing the semantics of C, including CompCert (Boward et al., 2016; Kling et al., 2017), Ellison and Rosu (2017), Kang et al. (2017), and Memarian et al. (2017; Memarian et al., 2017), but they are not directly concerned with enforcing spatial safety. **Spatially Safe C Formalizations.** Several prior works (Kling et al., 2017) formalize C-language transformations or C-language dialects aiming to ensure spatial safety. The difference between these works and CheckedCBox is presented in Sections 1 and 3. Hathhorn et al. (2013) extended the formalization of Ellison and Rosu (2017) to produce a semantics that detects violations of spatial safety (and other forms of undefinedness) by focusing on bug finding, not compiling programs to use this semantics. 
CCured (Hathhorn et al., 2013) and SoftBound (Hathhorn et al., 2013) implement spatially safe semantics for normal C via program transformation. Like CoreChkCBox, both systems' operational semantics annotate pointers with their bounds. CCured's equivalent of array pointers is compiled to be "fat," while SoftBound compiles bounds metadata into a separate hash table, thus retaining binary compatibility at a higher checking cost. CheckedCBox uses static type information to enable bounds checks without the need for pointer-attached metadata. Cyclone (Cylone, 2016; Ellison et al., 2017) is a C dialect that aims to ensure memory safety; its pointer types are similar to CCured's, and its formalization (Cylone, 2016) focuses on ensuring temporal safety. Deputy (Dup
2303.04498
Optimal, hardware native decomposition of parameterized multi-qubit Pauli gates
We show how to efficiently decompose a parameterized multi-qubit Pauli (PMQP) gate into native parameterized two-qubit Pauli (P2QP) gates minimizing both the circuit depth and the number of P2QP gates. Given a realistic quantum computational model, we argue that the technique is optimal in terms of the number of hardware native gates and the overall depth of the decomposition. Starting from PMQP gate decompositions for the path and star hardware graph, we generalize the procedure to any generic hardware graph and provide exact expressions for the depth and number of P2QP gates of the decomposition. Furthermore, we show how to efficiently combine the decomposition of multiple PMQP gates to further reduce the depth as well as the number of P2QP gates for a combinatorial optimization problem using the Lechner-Hauke-Zoller (LHZ) mapping.
P. V. Sriluckshmy, Vicente Pina-Canelles, Mario Ponce, Manuel G. Algaba, Fedor Šimkovic IV, Martin Leib
2023-03-08T10:42:43Z
http://arxiv.org/abs/2303.04498v2
# Optimal, hardware native decomposition of parameterized multi-qubit Pauli gates ###### Abstract We show how to efficiently decompose a parameterized multi-qubit Pauli (PMQP) gate into native parameterized two-qubit Pauli (P2QP) gates minimizing both the circuit depth and the number of P2QP gates. Given a realistic quantum computational model, we argue that the technique is optimal in terms of the number of hardware native gates and the overall depth of the decomposition. Starting from PMQP gate decompositions for the path and star hardware graph, we generalize the procedure to any generic hardware graph and provide exact expressions for the depth and number of P2QP gates of the decomposition. Furthermore, we show how to efficiently combine the decomposition of multiple PMQP gates to further reduce the depth as well as the number of P2QP gates for a combinatorial optimization problem using the Lechner-Hauke-Zoller (LHZ) mapping. ## I Introduction Further accelerating the speed of scientific progress requires computational resources beyond the capabilities of state-of-the-art classical computing. Computational power has been growing exponentially for a couple of decades according to Moore's law. However, the miniaturisation of classical computers has reached hard physical boundaries bringing Moore's Law to an end. In recent years, Quantum Computing (QC) has emerged as a promising alternative [1; 2] that could provide exponentially growing compute power for application areas like quantum chemistry, optimisation and machine learning. Quantum algorithms, including speedup proofs, have been developed within all these application areas. High-level quantum algorithms using arbitrary quantum gates need to be mapped to hardware native gates. This mapping often leads to an overhead in terms of the number of gates due to non-local and multi-qubit gates. For example, fermion-to-qubit mappings, like the Jordan-Wigner transformation [3] and others [4; 5; 6] necessitate parameterized gates acting on more than two qubits. The encoding of complicated optimisation problems or the floating point dynamics of partial differential equations into qubits [7] also typically leads to multi-qubit gates. Therefore, it is important in a majority of quantum algorithms to find optimal decompositions of multi-qubit gates into native gates. Most QC hardware platforms do not support the direct implementation of multi-qubit gates. Building high fidelity multi-qubit interactions and inter-qubit connectivity [8; 9; 10; 11; 12; 13; 14] has been a major hardware roadblock. Thus arises a need to search for a decomposition of the multi-qubit gates based on the native, local, single and two-qubit gates [15; 16; 17; 18; 19; 20]. Multi-qubit gates can be decomposed into a ladder of CNOT gates and a single qubit rotation as proposed in [15]. We argue in the present work that this is not optimal even when the CNOT gate is available as a hardware native gate. Decomposition of multi-qubit gates into two-qubit CNOT gates has also been discussed in Ref. [21; 22]. These decompositions, however, are not symmetric, prohibiting gate cancellations between two consecutive many-body gates which we will argue can improve algorithm performance in the last part of this article. We start this paper with a definition of the quantum computational model. Afterwards, we propose a generalized systematic method to decompose and recursively generate parameterized multi-qubit Pauli (PMQP) gates using parameterized two-qubit gates (P2QP). 
We apply this method to decompose PMQP gates for some specific hardware topologies, like the path and the star graph. For the noisy intermediate scale quantum era (NISQ) [23], the number of P2QP gates and the depth of the quantum circuit or the total run-time of the gate decomposition are key indicators of algorithmic performance. We prove that the decomposition introduced here is optimal with respect to the number of P2QP native gates as well as the overall depth. Inspired by the minimal depth proof, a procedure to decompose PMQP gate on a general hardware graph is derived. We then apply the decomposition to the four qubit Pauli gates of the parity encoded Quantum Approximate Quantum Algorithm and show further advantages of our technique with gate cancellations between different decompositions of PMQP gates. ## II Quantum computational model In order to gauge the quality of our gate decompositions we define the following quantum computational model for the remainder of the article: The computational units are geometrically separated, two-level systems, or qubits, whose states are elements in a two-dimensional complex Hilbert space. The set of possible unitary operations, or single-qubit gates, on these qubits consists of the Hadamard (H) and the S gate which can be represented in matrix form by, \[\text{H}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix} \text{S}=\begin{pmatrix}1&0\\ 0&i\end{pmatrix}\,. \tag{1}\] To create correlations and entanglement between qubits we further assume connections obeying the physical constraints of the respective hardware platform. For two qubits \(a\) and \(b\) that share a connection we assume that one can switch on and off a Hamiltonian of the form, \[H_{\text{2q}}=g(t)\sigma_{a}\sigma_{b}\,, \tag{2}\] where \(\sigma_{a}(\sigma_{b})\) is a Pauli matrix (\(\sigma\in\{x,y,z\}\)), defined by the specific hardware platform, acting on qubit a(b) and \(g:\mathbb{R}\rightarrow\mathbb{R}\) an arbitrary control function. With this, one can implement the following two-qubit gates, \[U_{\text{2q}}=e^{i\gamma\sigma_{a}\sigma_{b}}\,, \tag{3}\] for arbitrary \(\gamma\in\mathbb{R}\). The single-qubit gates H and S can be used to rotate any Pauli matrix into any other, therefore one can implement the \(U_{\text{2q}}\) two-qubit gate with arbitrary Pauli matrices \(\sigma_{a}\) and \(\sigma_{b}\). To fully describe the native gate set for this quantum computational model on a specific hardware platform it suffices therefore to define the hardware graph \(\mathcal{G}_{\text{HW}}\) where every node corresponds to a qubit and every edge \(E(\mathcal{G}_{\text{HW}})\) corresponds to a connection between the qubits. We further assume that gates that commute can be executed in parallel. While this is trivial for gates that act on non-overlapping sets of qubits, we specifically extend the notion of parallelism to gates that act on two overlapping sets of qubits. For example the two-qubit gates \(e^{i\gamma_{(1,2)}z_{1}z_{2}}\) and \(e^{i\gamma_{(2,3)}z_{2}z_{3}}\) can be executed in parallel, in a digital-analog fashion[24; 25], because \(H_{(1,2)}\) and \(H_{(2,3)}\) can be switched on at the same time, enacting the desired combination of two-qubit gates. 
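As a concrete illustration of this computational model (an editor's sketch, not code accompanying the paper), the following Python snippet checks numerically that conjugation with the single-qubit H and S gates rotates one Pauli matrix into another, so that a single hardware-native two-qubit interaction, taken here to be of \(z\otimes z\) type, is enough to realize \(e^{i\gamma\sigma_{a}\sigma_{b}}\) for any pair of Pauli matrices; the gate angle and the \(x\otimes y\) target are arbitrary choices.

```python
# Minimal numerical sketch of the computational model: dressing a native Z⊗Z-type
# two-qubit gate with single-qubit H and S gates yields any Pauli-Pauli rotation.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

# Single-qubit basis changes: H maps Z -> X, and S then maps X -> Y.
assert np.allclose(H @ Z @ H.conj().T, X)
assert np.allclose(S @ X @ S.conj().T, Y)

gamma = 0.37                                   # arbitrary gate angle
U_native = expm(1j * gamma * np.kron(Z, Z))    # hardware-native two-qubit gate

V = np.kron(H, S @ H)                          # Z -> X on qubit a, Z -> Y on qubit b
U_target = expm(1j * gamma * np.kron(X, Y))
print("e^{i*gamma*x_a*y_b} from the native z_a*z_b interaction:",
      np.allclose(V @ U_native @ V.conj().T, U_target))   # -> True
```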
The task that we want to solve is to find a decomposition of a multi-qubit gate, \[U_{\text{nq}}=e^{i\gamma P_{n}}=\prod_{l}e^{i\beta_{l}\sigma_{a(l)}\sigma_{b( l)}}\,, \tag{4}\] where \(P_{n}=\sigma_{1}\otimes\cdots\otimes\sigma_{n}\) is a tensor product of \(n\) Pauli matrices and the decomposition is in terms of two-qubit gates that are supported by the hardware graph, \((a(l),b(l))\in E(\mathcal{G}_{\text{HW}})\) for all \(l\). We further seek decompositions that are optimal with respect to a specific error model in the sense that the decomposition shows the highest possible error resilience. Current quantum computing hardware platforms are dominated by two different types of errors: finite fidelity of gate operations and the dissipative processes of the qubits themselves typically described in terms of amplitude damping and dephasing [26]. The finite gate fidelity is currently mainly due to control errors and ultimately limited by the dissipative processes of the participating qubits, therefore we aim to find a decomposition which minimizes the overall execution time of the decomposition.The parameter of the gate \(\gamma\) is proportional to the time integral of the tunable interaction strength \(g\). Consequentially, the parameter \(\gamma\) is not necessarily related to the time it takes to implement the gate but could also be tuned by keeping the gate time fixed and changing the maximal interaction strength during gate execution. This ultimately means that even coherence time limited gates do not necessarily show a decreasing fidelity as a function of \(\gamma\). In order to minimize the overall execution time we therefore have to minimize the number of parallelizable gate layers. A parallelizable gate layer consists of two-qubit gates that can be executed in parallel as discussed above. Since the execution time of a two-qubit gate is typically longer than single-qubit gates, we only count layers of two-qubit gates. Based on the computational model developed above, a generic rule to decompose a multi-qubit gate is derived in the next section. ## III Recursive construction of gate decompositions ### General decomposition rule A general procedure to decompose a multi-qubit gate is, \[U_{nq}=e^{i\frac{\pi}{4}O_{k}}e^{i\pm\gamma H_{l}}e^{-i\frac{\pi}{4}O_{k}} \tag{5}\] where \(O_{k}\) and \(H_{l}\) act non-trivially on \(k<n\) and \(l<n\) qubits, respectively, and they fulfill the following relations: \[\left\{H_{l},O_{k}\right\}=0 P_{n}=\pm\frac{i}{2}[O_{k},H_{l}]\,. \tag{6}\] If \(H_{l}\) and \(O_{k}\) intersect non-trivially at an odd number of qubits, i.e. they have an odd number of common qubits and the Pauli operators on odd number of these qubits don't commute, then \(H_{l}\) and \(O_{k}\) anticommute. If \(H_{l}\) and \(O_{k}\) anticommute the second equality can be fulfilled if either \(iO_{k}H_{l}=P_{n}\) or \(iH_{l}O_{k}=P_{n}\). This freedom of choice, as well as the specific choice of \(H_{l}\) and \(O_{k}\) within the requirements defined above, can be used to come up with a decomposition that has a low circuit depth considering the above defined quantum computational model. Since both, \(H_{l}\) and \(O_{k}\), are also Pauli operators generating \(l\)-qubit and \(k\)-qubit gates respectively, they can be further decomposed recursively using the same decomposition rule, until all gates involved in the decomposition are native two-qubit gates. 
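A single step of this rule can be checked directly with a few lines of linear algebra. The sketch below (an editor's illustration, not the authors' code) uses the three-qubit example \(P=z\otimes z\otimes z\), \(O=z\otimes x\otimes\mathbb{1}\), \(H=\mathbb{1}\otimes y\otimes z\); this operator choice, together with the sign convention \(P_{n}=-\tfrac{i}{2}[O_{k},H_{l}]\) and the \(-\gamma\) branch of Eq. (5), is an illustrative assumption.

```python
# Numerical check of Eqs. (5)-(6) for one decomposition step on three qubits.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

P  = kron(Z, Z, Z)     # generator of the target three-qubit gate
O  = kron(Z, X, I2)    # supported on qubits (1,2): a native two-qubit generator
Hl = kron(I2, Y, Z)    # supported on qubits (2,3): a smaller PMQP generator

# Conditions of Eq. (6): {H_l, O_k} = 0 and P_n = -(i/2)[O_k, H_l].
assert np.allclose(Hl @ O + O @ Hl, 0)
assert np.allclose(P, -0.5j * (O @ Hl - Hl @ O))

gamma = 0.81           # arbitrary gate angle
lhs = expm(1j * np.pi / 4 * O) @ expm(-1j * gamma * Hl) @ expm(-1j * np.pi / 4 * O)
print("Eq. (5) reproduces e^{i*gamma*zzz}:", np.allclose(lhs, expm(1j * gamma * P)))
```

Recursing on \(H_{l}\) with the same rule, as described next, reduces it step by step to a native two-qubit gate.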
To simplify the description of specific decompositions in the remainder of the article we introduce a concise way to describe them. We symbolize every decomposition of an arbitrary PMQP gate generated by Pauli operator \(P\) into gates generated by Pauli operators \(H\) and \(O\) by, \(P_{n}\to O_{s_{O}},H_{s_{H}}\), where \(O_{s_{O}}\) (\(H_{s_{H}}\)) acts non-trivially on the qubits defined by the sets of nodes \(s_{O}\) (\(s_{H}\)) and is unambiguously defined by this set. All sequences of decompositions that we describe in the following are such that all \(O^{(i)}\) generate hardware native two-qubit gates and therefore all consecutive decomposition have a further decomposition of \(H^{(i)}\) as a target, where \(i\) denotes the sequence level of the decomposition. Consequently, a general sequence of decompositions can always be described in the following way, \[P_{n}\to O_{s_{O^{(1)}}}^{(1)},H_{s_{H^{(1)}}}^{(1)}\to O_{s_{O^{(2)}}}^{(2)},H_ {s_{H^{(2)}}}^{(2)}\rightarrow\cdots\to O_{s_{O^{(p)}}}^{(p)},H_{s_{H^{(p)}}} ^{(p)}, \tag{7}\] where we implicitly assume that \(O_{s_{O^{(i+1)}}}^{(i+1)},H_{s_{H^{(i+1)}}}^{(i+1)}\) is a decomposition of \(H_{s_{H^{(i)}}}^{(i)}\), such that \(s_{O^{(i+1)}}\subset s_{H^{(i)}}\) and \(s_{H^{(i+1)}}\subset s_{H^{(i)}}\). The sequence of decompositions is terminated when \(H^{(p)}\), for some \(p\), generates a hardware native two-qubit gate. The final decomposition of the multi-qubit gate is given as \[U_{nq}=e^{i\frac{\pi}{4}O^{(1)}}\cdots e^{i\frac{\pi}{4}O^{(p)}}e^{i\pm\gamma H ^{(p)}}e^{-i\frac{\pi}{4}O^{(p)}}\cdots e^{-i\frac{\pi}{4}O^{(1)}}. \tag{8}\] The decomposition for the path and star hardware graphs with an extension to the most general hardware graph is demonstrated in the following section. ### Path Hardware Graph Consider a path graph as hardware graph \(\mathcal{G}_{HW}\), with \(n\) vertices \(v_{1},v_{2}\cdots,v_{n}\) and \(n-1\) edges \(\{(v_{j},v_{j+1})|1\leq j<n\}\), cf. Figure 1. At the first step a vertex \(v_{m}\) from \(v_{1},\cdots,v_{n-1}\) is chosen. Then, the decomposition of a PMQP gate \(U=e^{i\gamma P_{n}}\), \(P_{n}\to O_{\{v_{1}\}\cup\{v_{2}\}}^{(1)},H_{\{v_{2}\}\cup\{v_{3} \ldots,v_{n}\}}^{(1)}\) is started from one of the boundaries of the path hardware graph. \(v_{2}\) is the common connecting node which makes \(O^{(1)}\) and \(H^{(1)}\) anticommute. As a next step, decompose \(H_{\{v_{2},\ldots,v_{n}\}}^{(1)}\to O_{\{v_{2}\}\cup\{v_{3}\}}^{(2)},H_{\{v_{3 }\}\cup\{v_{4}\ldots,v_{n}\}}^{(2)}\) such that the common connecting node of \(O^{(2)}\) and \(H^{(2)}\) is \(v_{3}\). The process is iterated such that the index of the common node is increased by one at every step until it becomes \(v_{m}\) as \[P_{n}\to O_{\{v_{1}\}\cup\{v_{2}\}}^{(1)},H_{\{v_{2}\}\cup\{v_{3} \ldots,v_{n}\}}^{(1)}\to O_{\{v_{2}\}\cup\{v_{3}\}}^{(2)},H_{\{v_{3}\}\cup\{v_{ 4},\ldots,v_{n}\}}^{(2)}\rightarrow\cdots\to O_{\{v_{m-1}\}\cup\{v_{m}\}}^{(m -1)},H_{\{v_{m}\}\cup\{v_{m+1}\ldots,v_{n}\}}^{(m-1)}, \tag{9}\] occurring at step \(m-1\). 
Afterwards, start the decomposition, beginning with node \(v_{n-1}\) as the common node and decrease by one at every step to finally end at \(v_{m+1}\) as \[P_{n} \to O^{(1)}_{\{v_{1}\}\cup\{v_{2}\}},H^{(1)}_{\{v_{2}\}\cup\{v_{3} \ldots,v_{n}\}}\to O^{(2)}_{\{v_{2}\}\cup\{v_{3}\}},H^{(2)}_{\{v_{3}\}\cup\{v_{4 }\ldots,v_{n}\}}\to\cdots\to O^{(m-1)}_{\{v_{m-1}\}\cup\{v_{m}\}},H^{(m-1)}_{\{v _{m}\}\cup\{v_{n+1}\ldots,v_{n}\}}\] \[\to O^{(m)}_{\{v_{n-1}\}\cup\{v_{n}\}},H^{(m)}_{\{v_{m}\}\cup\{v_{m +1},\ldots,v_{n-2}\}\cup\{v_{n-1}\}}\to O^{(m+1)}_{\{v_{n-2}\}\cup\{v_{n-1}\}},H^{(m+1)}_{\{v_{m}\}\cup\{v_{n+1}\ldots,v_{n-3}\}\cup\{v_{n-2}\}}\to\ldots\] \[\to O^{(n-2)}_{\{v_{m+1}\}\cup\{v_{m+2}\}},H^{(n-2)}_{\{v_{m}\} \cup\{v_{m+1}\}} \tag{10}\] cf. Figure 1 a. Every step of the decomposition adds 2 two-qubit gates, except for the last step which adds 3 two-qubit gates, totalling \(2n-3\) two-qubit gates. The set of nodes in the hardware graph \(s_{O^{(i)}},i\leq m-1\) is distinct from the set of nodes in \(s_{O^{(i)}},i\geq m\). Therefore, every parallel layer adds 2 two-qubit gates acting on different vertices except the central layer which contains only one two-qubit gate corresponding to \(H^{(n-2)}_{\{v_{m}\}\cup\{v_{m+1}\}}\). The depth of the circuit or the number of parallel layers is \(2(n-m+1)-3\) if \(m<\lceil\frac{n}{2}\rceil\) and \(2(m+1)-3\) otherwise. Since the quantum circuits representing the operation are not unique, choosing \(v_{m}\) to be \(v_{\lceil\frac{n}{2}\rceil}\) leads to a minimal depth of the circuit out of the equivalent decomposition strategies. Generalizing, the minimal depth can be written as \(n-mod(n+1,2)\), where \(\text{mod}(x,2)\) is the modulo operation that returns the remainder of the division of \(x\) by 2. At the other extreme when \(v_{m}\) is chosen to be either \(v_{1}\) or \(v_{n-1}\), a circuit depth of \(2n-3\) is obtained. Although the depth scales linearly with the size of the Path and varies slightly for different starting vertices \(v_{m}\), the number of two qubit gates required for all the equivalent implementations is the same, \(2n-3\). ### Star hardware graph Next, we discuss a star graph as hardware graph \(\mathcal{G}_{HW}\), with \(n\) vertices \(v_{1},v_{2},\cdots,v_{n}\) and \(\{(v_{1},v_{j})|1<j\leq n\}\) as edges. This is a one-to-all connected graph. A PQMP gate \(U_{nq}=e^{i\gamma P_{n}}\), acting on all the vertices of the star graph, can be decomposed by choosing the vertex \(v_{1}\) as the common vertex for all recursive decompositions. The first step maps \(P_{n}\to O^{(1)}_{\{v_{1}\}\cup\{v_{2}\}},H^{(1)}_{\{v_{1}\}\cup\{v_{3}, \ldots,v_{n}\}}\). We follow the steps to obtain \[P_{n}\to O^{(1)}_{\{v_{1}\}\cup\{v_{2}\}},H^{(1)}_{\{v_{1}\}\cup\{v_{3}, \ldots,v_{n}\}}\to O^{(2)}_{\{v_{1}\}\cup\{v_{3}\}},H^{(2)}_{\{v_{1}\}\cup\{v_{ 4},\ldots,v_{n}\}}\to\cdots\to O^{(m-2)}_{\{v_{1}\}\cup\{v_{n-1}\}},H^{(m-2)}_ {\{v_{1}\}\cup\{v_{n}\}}. \tag{11}\] The number of two-qubit gates is \(2n-3\), similar to the decomposition of the Path graph. The Pauli operators on the vertex \(v_{1}\) for all the \(O^{(i)}\)'s can be chosen to be the same (and hence commuting) and distinct from \(H^{(n-2)}\). Therefore the depth of the circuit is in fact 3 due to the simultaneous Figure 1: Step-wise decomposition of a PMQP on the (a) path and (b) star graph with 6 vertices. For the path graph, \(v_{m}=v_{2}\). The yellow boxes correspond to two-qubit gates, \(O^{(i)}\) and the green boxes correspond to smaller PMQP gates, \(H^{(i)}\). 
The red and blue arrows connecting pairs of qubits represent coupling strengths of \(\pm\frac{\pi}{4}\), respectively. At the end of the decomposition, only native two-qubit gates remain. We follow this color scheme through the rest of the paper. execution of commuting gates, as introduced in our computational model. Since every qubit is connected to the central qubit, any other order of choosing qubits, gives equivalent decompositions with the same depth cf. Figure 1 b. Using the computational model defined in [27], we obtain a logarithmic depth for a PMQP gate on an all-to-all connected graph. On the contrary, we obtain a constant depth even with an one-to-all connected graph, thanks to our computation model involving parallel gate execution. This circuit cannot be reduced further due to the structure of the technique developed here. ### Minimal Depth Proof We are going to derive a lower bound for the depth of a quantum circuit consisting of hardware native two-qubit gates that implement the desired multi-qubit gate. This lower bound will coincide with the depth of the decompositions found above, thereby proving their optimality as well as motivating the algorithm presented below for optimal gate decomposition of multi-qubit gates on arbitrary hardware graphs. Without loss of generality we can assume the generator \(P\) of the PMQP gate to be a tensor product of Pauli \(x\) matrices on all qubits. We obtain a lower bound of the two-qubit gate depth by proving a lower bound for the decomposition acting on an arbitrary separable state \(\bigotimes_{i}|\psi_{i}\rangle\), for one specific angle \(\gamma=\frac{\pi}{4}\), \[|\Psi\rangle=e^{-i\frac{\pi}{4}x_{1}\otimes\cdots\otimes x_{n}}\bigotimes_{i} |\psi_{i}\rangle=\frac{1}{\sqrt{2}}\left(\bigotimes_{i}|\psi_{i}\rangle-ix_{1} \otimes\cdots\otimes x_{n}\bigotimes_{i}|\psi_{i}\rangle\right)\,. \tag{12}\] Notice that the resulting quantum state is highly correlated in the sense that all local measurements Figure 2: Different decompositions of a PMQP gate on the path graph using only CNOT gates and single qubit rotations. The two-qubit gate depth of the decomposition varies from 10 to 6. with Pauli operators that are anti-commuting with the generator of the multi-qubit gate depend on all local states \(\left|\psi_{i}\right\rangle\) of the qubits before the gate operation, \[\left\langle\Psi\right|z_{j}\left|\Psi\right\rangle=\left\langle\psi_{1}|x_{1}| \psi_{1}\right\rangle\ldots\left\langle\psi_{j}|y_{j}|\psi_{j}\right\rangle \ldots\left\langle\psi_{n}|x_{n}|\psi_{n}\right\rangle\,. \tag{13}\] This implies for the two-qubit gate decomposition of the multi-qubit gate that for every pair of qubits \(i\) and \(j\) there has to be a chain of two-qubit gates, parametrized by \(k\) with \(k(1)=i\) and \(k(l)=j\) that connects these two qubits \(U(x_{i},x_{k(2)})U(x_{k(2)},x_{k(3)})\ldots U(x_{k(l-2)},x_{k(l-1)})U(x_{k(l-1 )},x_{j})\), such that no pair of consecutive two-qubit gates commutes, \([U(x_{k(m-1)},x_{k(m)}),U(x_{k(m)},x_{k(m+1)})]\neq 0\), \(\forall m\in\{2,\ldots,l-1\}\). For the path hardware graph this immediately leads us to the x-shaped two-qubit gate ladders that we derived above as well as the fan-shaped two-qubit patterns that we derived for the star hardware graph. Decompositions of multi-qubit gates into two-qubit gates have also been discussed in [21]. 
The error model introduced in that work depends on the strength of the coupling, and the goal there is therefore to approximate a three- or four-qubit gate by reducing the coupling strength at the cost of increasing the number of two-qubit gates implemented. Under such a model, the decomposed circuit misses the bound by one two-qubit gate. ### General hardware graph Based on the insights from the minimal depth proof above we can derive another lower bound for the depth of the decomposition on a general hardware graph. We accomplish this by identifying the longest distance between any two qubits in the hardware graph, i.e. the diameter. We define the distance between two qubits in the hardware graph, in accordance with graph theory, by the length of the shortest possible path between the two qubits. Here, a path is a sequence of vertices or qubits of the hardware graph such that consecutive vertices are neighboring, and its length is the number of edges that are traversed along the path. Between the two qubits that define the diameter of the graph there must be the aforementioned chain of two-qubit gates, which consequently lower bounds the entire depth of the decomposition of the multi-qubit gate on the given general hardware graph. We will proceed in exactly the same way as above by showing that we can find a decomposition that matches this lower bound, thereby showing the optimality of the decomposition. Let us identify a pair of qubits with the largest possible distance in the hardware graph, breaking ties arbitrarily; a shortest path between them serves as the seed path graph. We define a subgraph \(T\) of the hardware graph starting from this seed path graph. Add to this subgraph the shortest-distance paths from every remaining qubit of the hardware graph to one of the qubits of the seed path graph. Choose the qubit in the seed path graph with the minimal distance to the current qubit, breaking ties arbitrarily. Assume, for now, that the diameter of the graph is even. Decompositions for hardware graphs with odd diameter are a straightforward extension of the following steps. The subgraph \(T\) that we generated now has the following features: It is a rooted spanning tree, with the root being the qubit in the middle of the seed path graph. We subsume all qubits in this tree with the same distance to the root in sets that we call "generations", where the qubits in the generation that is furthest from the root are called the "leaves". The parent of every qubit besides the root is the unique qubit it is connected to in the generation that is closer to the root qubit. The height of this rooted spanning tree, that is, the longest distance between the root and any other qubit in the spanning tree \(T\), is equal to half of the diameter of the hardware graph. If this were not the case, it would mean that we had identified a pair of qubits whose distance in the hardware graph is longer than the diameter of the hardware graph, which is impossible by the definition of the diameter of a graph. Lastly, all edges in the rooted spanning tree correspond to physical couplers since \(T\) is a proper subgraph of the hardware graph. After this groundwork, we can proceed with the decomposition of the multi-qubit gate; a short code sketch of this spanning-tree construction is given below.
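A minimal sketch of the spanning-tree construction, assuming the networkx library; the 12-qubit example graph is an illustrative stand-in rather than the 15-vertex graph of Fig. 3 (it happens to have the same diameter of 6 and hence the same predicted depth of 7).

```python
# Editor's sketch of the rooted-spanning-tree construction on a general hardware graph.
from collections import deque
import networkx as nx

def decomposition_tree(G):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    # a pair of qubits realizing the diameter, ties broken arbitrarily
    a, b = max(((u, v) for u in G for v in G), key=lambda e: dist[e[0]][e[1]])
    seed = nx.shortest_path(G, a, b)      # the seed path graph
    root = seed[len(seed) // 2]           # root = qubit in the middle of the seed path
    T = nx.Graph()
    nx.add_path(T, seed)
    visited, queue = set(seed), deque(seed)
    while queue:                          # attach the remaining qubits along shortest
        u = queue.popleft()               # paths (multi-source BFS from the seed path)
        for v in G.neighbors(u):
            if v not in visited:
                visited.add(v)
                T.add_edge(u, v)          # u is the parent of v in the rooted tree
                queue.append(v)
    return T, root, dist[a][b]

# Illustrative hardware graph: a spine 0-1-...-6 with a few branches hanging off it.
edges = [(i, i + 1) for i in range(6)] + [(1, 7), (2, 8), (4, 9), (9, 10), (5, 11)]
T, root, diam = decomposition_tree(nx.Graph(edges))
height = max(nx.shortest_path_length(T, root).values())
# Each generation contributes two parallel layers, plus one central layer (even diameter).
print(f"diameter = {diam}, tree height = {height}, predicted depth = {2 * height + 1}")
# -> diameter = 6, tree height = 3, predicted depth = 7
```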
We start with decomposing from the leaves of the spanning tree \(T\): Every set of leaves together with its parent is decomposed according to the star graph decomposition, where the set of two-qubit gates generated by \(O^{(i)}\) is acting on the qubits in the leaves and the respective parent qubit in the spanning tree. The remaining multi-qubit gate involves the parent qubit as well as the entire rest of the hardware graph. We iterate this procedure for every generation of qubits in the rooted spanning tree \(T\) until we reach the root qubit. We finish with another star decomposition, where we choose the central gate generated by \(H\) arbitrarily. If the graph has an odd diameter we would have two rooted trees connected at their roots that we identify as the spanning tree \(T\). The decomposition, however, progresses in exactly the same way with the exception of the last step where the central multi-qubit gate generated by \(H\) is already a two-qubit gate connecting both rooted trees that does not need to be further decomposed. The decomposition for every generation adds two layers of parallelizable gates to the already existing decomposition, since all involved gates can be parallelized, either because they involve disjoint pairs of qubits or are acting on the same parent qubit, however with an identical Pauli operator for the generator. We therefore managed to decompose the entire multi-qubit gate within a number of layers matching the optimal decomposition for the path hardware graph with the length given by the diameter of the general hardware graph, thereby exactly matching the lower bound identified earlier. In cf. Figure(3) we show a sample General hardware graph with 15 vertices which requires only a depth of 7 for its implementation. ## IV Specific example of parity encoded mapping In this section, we show that consecutive decompositions of many multi-qubit gates can lead to cancellations, using a Quantum Approximate Optimisation (QAOA) circuit with a parity encoded binary optimisation problem. We do not provide the details of parity encoded QAOA, please refer to [28; 29] for the Lechner-Hauke-Zoller (LHZ) construction and [30; 31; 32] for the parity architecture. For our intentions and purposes it is sufficient to know that we need to implement a gate generated by the problem Hamiltonian, \[H=\sum_{i}J_{i}\,z_{i}+\sum_{l}^{M}C_{l\square}\,z_{(l,n)}z_{(l,e)}z_{(l,s)}z_{ (l,w)} \tag{14}\] on a square grid hardware graph where \(J_{i}\)'s and \(C_{l\square}\)'s are constants dependent on the inital problem parameters and \(n,e,s,w\) denote north, east, south, and west qubit of each plaquette (\(\square\)) with \(M\) number of plaquettes. The gates generated by the first term, containing local fields, can be implemented using single qubit gates in one layer. The gates generated by the second term, however, present parameterized four-qubit Pauli gates on the plaquettes of the square lattice hardware graph that require subsequent decomposition into two-qubit gates. The total run-time for the implementation of the second gate depends on the optimal decomposition of a four-qubit gate and a strategy to combine several gates that can be simultaneously executed. We follow the strategy: we choose to decompose all plaquettes with the same color in parallel, cf. Figure (4) a. We decompose the red and then blue plaquettes thereby covering all alternate columns and finally repeat the same execution to the remaining columns (gray squares and then the maroon squares). 
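Before writing down the explicit two-qubit sequence for a single plaquette (it appears as Eq. (15) below), the following numerical check, an editor's sketch rather than the authors' code, confirms that the path-graph protocol turns one four-qubit plaquette gate into five hardware-native two-qubit gates.

```python
# Verify that five native two-qubit gates reproduce exp(i*gamma*z1*z2*z3*z4).
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli(ops):
    """Tensor product of single-qubit operators, qubit 1 leftmost."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

P4   = pauli([Z, Z, Z, Z])       # plaquette generator z1 z2 z3 z4
O_12 = pauli([Z, X, I2, I2])     # native two-qubit generator on qubits (1,2)
O_34 = pauli([I2, I2, Z, X])     # native two-qubit generator on qubits (3,4)
H_24 = pauli([I2, Y, I2, Y])     # central generator on qubits (2,4)

gamma = 1.23
rot = lambda g, theta: expm(1j * theta * g)
U = (rot(O_12, np.pi / 4) @ rot(O_34, np.pi / 4) @ rot(H_24, gamma)
     @ rot(O_34, -np.pi / 4) @ rot(O_12, -np.pi / 4))
print("plaquette gate reproduced:", np.allclose(U, rot(P4, gamma)))   # -> True
```

Combining such decompositions on neighbouring plaquettes then allows the gate cancellations discussed below.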
For decomposing a single four-qubit plaquette term of the form \(U_{4q}=e^{i\gamma P_{4}}\) with \(P_{4}=z_{1}z_{2}z_{3}z_{4}\) we apply the protocol developed for the path graph discussed above. A simple linear path of \(v_{1},v_{2},v_{4},v_{3}\) is chosen with \(v_{m}=v_{2}\) such that the decomposition leads to \[U_{4q}=e^{i\frac{\pi}{4}z_{1}\,x_{2}}e^{i\frac{\pi}{4}z_{3}\,x_{4}}e^{i\gamma \,y_{2}\,y_{4}}e^{-i\frac{\pi}{4}z_{3}\,x_{4}}e^{-i\frac{\pi}{4}z_{1}\,x_{2}}. \tag{15}\] The same protocol can be applied to all the other plaquettes. Executing neighbouring decomposed four-qubit plaquette terms leads to a cancellation of 2 two-qubit gates when they are sequentially applied to the same vertices with opposite sign of the coupling strength, cf. Figure (4). Moreover, the central two-qubit gates of all the four-qubit decompositions, namely \(e^{i\gamma_{(2,4)}\,y_{2}y_{4}}\), \(e^{i\gamma_{(4,6)}\,y_{4}y_{6}}\) and \(e^{i\gamma_{(6,8)}\,y_{6}y_{8}}\), can also be executed in parallel. The two-qubit circuit depth for the implementation of four-qubit plaquette terms on alternate rows is 3 instead of 5, which is the depth obtained using the x-shaped CNOT structure [33]. Further generalization to the entire square lattice using our decomposition can be performed in two steps of alternating rows of plaquettes, giving a total constant run-time of 5 using parallelizable commuting gates. The minimal depth of this circuit is ensured by the minimal implementation of the decomposed four-qubit gates combined with the additional parallelizing strategies allowed by our computational model. This makes parity-encoded QAOA a promising problem to tackle given the currently available hardware constraints. Figure 3: a) A general hardware graph with \(n=15\) vertices. b) Its corresponding spanning tree. The path in green is a path with \(k=7\) vertices defining the diameter of the graph to be 6. There are three branches \(B_{1}\) (a local path graph), \(B_{2}\) (a local star graph) and \(B_{3}\) (a combination of local star and path graphs). Vertices corresponding to branches \(B_{1}\), \(B_{2}\) and \(B_{3}\) are decomposed first by applying as many two-qubit gates in parallel as allowed by the computational model. Then the green vertices lying on the longest shortest-path graph are decomposed optimally, giving a depth of 7 for the decomposition. c) The final decomposition of a PMQP gate on this general graph. Purple lines show the separation between the different parallel layers. Figure 4: (a) Square grid hardware graph with the prescribed strategy for an LHZ encoded optimization problem: we choose to decompose on the red squares, then blue, then gray and finally the maroon squares. The white lines within each square represent the path that we follow. (b) Decomposition of a column of four-qubit gates. Two-qubit gates acting on the same qubits with opposite signs of the coupling strengths are cancelled. Execution of all central green colored two-qubit gates in parallel results in a total depth of 3. ## V Conclusions We have presented a general method to decompose PMQP gates into hardware-implementable two-qubit gates. We demonstrated the decomposition for specific hardware graphs: the path graph and the star graph. We show that the lower bound for the depth of the decomposition is set by the correlation of qubits for a multi-qubit gate. Further, we show that our decomposition can achieve this bound, scales linearly for the path graph and is constant for the star graph.
Therefore, the less connected the hardware graph is, the larger the depth of the circuit becomes. Motivated by the minimal depth proof, we provide a strategy to optimally decompose a multi-qubit gate on any general hardware graph. For a specific quantum circuit for combinatorial optimization using the LHZ mapping, we show that the lowest depth that can be achieved is 6, independent of the size of the system. The technique also presents an efficient way to enable the decomposition of long-range multi-qubit interactions. For Hamiltonian systems with many multi-qubit terms, sub-optimal decompositions of some of the multi-qubit gates could be more beneficial and further facilitate gate-cancellation strategies. While we present only a few use cases, the decomposition is universal and can be used to provide low-depth circuits for a wide range of near-term quantum applications. In a recent publication [34] some of the authors use the decomposition technique for fermionic systems and develop additional gate-cancellation strategies along with an optimal fermion-to-qubit mapping to reduce the depth of the circuit further. Reducing the depth reduces errors and therefore helps in developing better noise-mitigation strategies. Such strategies are crucial in, but not limited to, the NISQ era, given the minimal computational effort they require and the hardware facilities currently available. ###### Acknowledgements. The authors would like to thank Ines de Vega, Hermanni Heimonen, Bruno G. Taketani and Mikko Mottonen for useful discussions.
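As a closing cross-check of the counting arguments of Secs. III.B and III.C (an editor's addition to this record, not part of the paper), the sketch below reproduces the \(2n-3\) gate count and the depth formulas for the path graph using a simple conflict-based layer scheduler; the scheduler is deliberately conservative and does not model the extra commuting-gate parallelism that gives the star graph its constant depth of 3.

```python
# Gate-count and depth bookkeeping for the path-graph decomposition.
import math

def path_gate_sequence(n, m):
    """Two-qubit gates (as 1-based qubit pairs) for a PMQP gate on a path of n qubits,
    with the central gate placed on (v_m, v_{m+1})."""
    left = [(j, j + 1) for j in range(1, m)]            # O^(1) ... O^(m-1)
    right = [(j, j + 1) for j in range(n - 1, m, -1)]   # O^(m) ... O^(n-2)
    forward = left + right
    return forward + [(m, m + 1)] + forward[::-1]       # conjugation sandwich

def depth(gates):
    """Greedy layering: gates sharing a qubit go to later layers, disjoint gates may share one."""
    last_layer, d = {}, 0
    for a, b in gates:
        layer = max(last_layer.get(a, 0), last_layer.get(b, 0)) + 1
        last_layer[a] = last_layer[b] = layer
        d = max(d, layer)
    return d

for n in (4, 5, 6, 9):
    for m in range(1, n):
        seq = path_gate_sequence(n, m)
        assert len(seq) == 2 * n - 3                    # Sec. III.B gate count
        predicted = 2 * (n - m + 1) - 3 if m < math.ceil(n / 2) else 2 * (m + 1) - 3
        assert depth(seq) == predicted                  # Sec. III.B depth formula
    best = min(depth(path_gate_sequence(n, m)) for m in range(1, n))
    print(f"n = {n}: {2 * n - 3} gates, minimal depth {best} = n - mod(n+1,2) = {n - (n + 1) % 2}")
```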
2306.07997
Machine Learning Approach on Multiclass Classification of Internet Firewall Log Files
Firewalls are critical components in securing communication networks by screening all incoming (and occasionally exiting) data packets. Filtering is carried out by comparing incoming data packets to a set of rules designed to prevent malicious code from entering the network. To regulate the flow of data packets entering and leaving a network, an Internet firewall keeps track of all activity. While the primary function of log files is to aid in troubleshooting and diagnostics, the information they contain is also very relevant to system audits and forensics. A firewall's primary function is to prevent malicious data packets from being sent. In order to better defend against cyberattacks and understand when and how malicious actions affect the internet, it is necessary to examine log files. As a result, the firewall decides whether to 'allow,' 'deny,' 'drop,' or 'reset-both' the incoming and outgoing packets. In this research, we apply various categorization algorithms to make sense of data logged by a firewall device. Classifier performance is compared using the F1 score (the harmonic mean of precision and recall), recall, and sensitivity, with the random forest technique achieving a 99% accuracy score. The high accuracy rates obtained with the other methods confirm that the proposed features contribute significantly to improving the firewall classification rate.
Md Habibur Rahman, Taminul Islam, Md Masum Rana, Rehnuma Tasnim, Tanzina Rahman Mona, Md. Mamun Sakib
2023-06-12T19:04:07Z
http://arxiv.org/abs/2306.07997v1
# Machine Learning Approach on Multiclass Classification of Internet Firewall Log Files ###### Abstract Firewalls are critical components in securing communication networks by screening all incoming (and occasionally exiting) data packets. Filtering is carried out by comparing incoming data packets to a set of rules designed to prevent malicious code from entering the network. To regulate the flow of data packets entering and leaving a network, an Internet firewall keeps track of all activity. While the primary function of log files is to aid in troubleshooting and diagnostics, the information they contain is also very relevant to system audits and forensics. A firewall's primary function is to prevent malicious data packets from being sent. In order to better defend against cyberattacks and understand when and how malicious actions affect the internet, it is necessary to examine log files. As a result, the firewall decides whether to 'allow,' 'deny,' 'drop,' or 'reset-both' the incoming and outgoing packets. In this research, we apply various categorization algorithms to make sense of data logged by a firewall device. Classifier performance is compared using the F1 score (the harmonic mean of precision and recall), recall, and sensitivity, with the random forest technique achieving a 99% accuracy score. The high accuracy rates obtained with the other methods confirm that the proposed features contribute significantly to improving the firewall classification rate. _multiclass classification, internet firewall, log file, machine learning, networking_ ## I Introduction When information is exchanged online, data may be exposed to a variety of cyberattacks and breaches. As we enter a new era of information technology and the internet, the number of applications that may access data resources is growing at a dizzying rate. A log file contains entries that document system and application activity, as well as network activity for any connected devices. Log data is produced in vast quantities by all of the system's software and hardware components. In response to each conceivable occurrence on the internet, data is recorded in a log file [1]. This information was initially recorded to assist in troubleshooting and diagnostics. Firewalls are like toll booths for data packets on a network. System administrators install firewalls that are tailored to the needs of their specific business [2]. Firewalls have proven to be an integral component of modern communication networks due to the vital function they play in protecting the network from both external and internal threats [3]. Firewalls, in their most basic form, organize network log data in accordance with their rules, which may be defined manually or by default based on particular criteria, such as the purpose of the connection, which ports are permitted for communication, which subdivisions are permitted, and so on. The organization that is using the firewall determines its specific regulations [4]. In addition, keeping these rules up to date is a time-consuming and ongoing task, given continual technological developments and the ever-evolving behavior of the environment. Actions such as "Accept," "Drop," "Deny," or "Reset-both" are taken depending on these rules and many other aspects of the network log entries. Incorrectly handling a session might compromise security, leading to consequences like the loss of data or the inadvertent destruction of equipment, which could have a ripple effect on revenue.
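A minimal sketch of the multiclass pipeline described above, assuming the firewall log is available as a CSV file with an 'Action' label column (values 'allow', 'deny', 'drop', 'reset-both') and numeric feature columns; the file name, column name and hyperparameters are illustrative assumptions, not details taken from this paper.

```python
# Editor's sketch: random-forest multiclass classification of firewall log actions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("firewall_log.csv")                       # one row per logged session
y = df["Action"]                                           # allow / deny / drop / reset-both
X = df.drop(columns=["Action"]).select_dtypes("number")    # numeric log fields only

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
# classification_report gives per-class precision, recall and F1 (the harmonic mean of
# precision and recall), i.e. the metrics used to compare classifiers in this paper.
print(classification_report(y_test, y_pred, digits=3))
```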
The original purpose of log files was to document system activity. In the case of an attack or other malicious action, this record can be used for forensics and audit trails [5]. By analyzing the generated data, firewalls in a network determine whether or not traffic is allowed, depending on policy. For communications systems to operate smoothly and securely, firewall setup is essential. The efficient operation of a business's communication tools and other networked resources depends in large part on the setup of these systems. Firewalls are the electronic equivalent of security gates, restricting access to and from computer networks. The firewalls are set up by the system administrator to protect the company [6]. If hackers are able to break into a network's architecture, they may transfer data to unintended recipients or alter the data's veracity and consistency at any point in its existence. In response, several security measures, such as Internet Firewalls, Intrusion Detection/Prevention Systems (IDS/IPS) [7], and others, have been implemented at varying levels of protection to deal with security concerns. In the remainder of this paper, Section II briefly reviews related work on firewall log-file classification; Section III describes the dataset, the methods used to classify it, and a comparison of their outputs; and Section IV concludes with the final results and future scope in this field. ## II Literature Review This section details and analyzes prior studies in the field of internet firewall log-file categorization to provide additional context for the efficacy of the algorithms applied to the dataset to detect discrepancies.
2304.00659
Thermodynamic engine with a quantum degenerate working fluid
Can quantum mechanical thermodynamic engines outperform their classical counterparts? To address one aspect of this question, we experimentally realize and characterize an isentropic thermodynamic engine that uses a Bose-condensed working fluid. In this engine, an interacting quantum degenerate gas of bosonic lithium is subjected to trap compression and relaxation strokes interleaved with strokes strengthening and weakening interparticle interactions. We observe a significant enhancement in efficiency and power when using a Bose-condensed working fluid, compared to the case of a non-degenerate thermal gas. We demonstrate reversibility, and measure power and efficiency as a function of engine parameters including compression ratio and cycle time. Results agree quantitatively with interacting finite temperature field-theoretic simulations that closely replicate the length and energy scales of the working fluid.
Ethan Q. Simmons, Roshan Sajjad, Kimberlee Keithley, Hector Mas, Jeremy L. Tanlimco, Eber Nolasco-Martinez, Yifei Bai, Glenn H. Fredrickson, David M. Weld
2023-04-02T23:52:51Z
http://arxiv.org/abs/2304.00659v2
# Thermodynamic engine with a quantum degenerate working fluid ###### Abstract Can quantum mechanical thermodynamic engines outperform their classical counterparts? To address one aspect of this question, we experimentally realize and characterize an isentropic thermodynamic engine that uses a Bose-condensed working fluid. In this engine, an interacting quantum degenerate gas of bosonic lithium is subjected to trap compression and relaxation strokes interleaved with strokes strengthening and weakening interparticle interactions. We observe a significant enhancement in efficiency and power when using a Bose-condensed working fluid, compared to the case of a non-degenerate gas. We demonstrate reversibility, and measure power and efficiency as a function of engine parameters including compression ratio and cycle time. Results agree quantitatively with exact interacting finite temperature field-theoretic simulations. Classical thermodynamic engines have been critical to human technology since the industrial revolution. In the past decade, the capabilities of quantum thermodynamic engines have been explored theoretically [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], and recent years have seen experimental demonstrations of both quantum and nanoscopic classical engines using single ions [21, 22], nuclear spins [23], cold atoms [24, 25, 26], nitrogen-vacancy centers [27], and quantum gases [28, 29]. A natural question is whether quantum phenomena can enhance the performance of a thermodynamic engine [30, 31, 32]. Perhaps the simplest experimental approach to this question -- the direct comparison of an engine using a classical working fluid to an equivalent one using a quantum degenerate working fluid -- has remained unexplored. In this work, we experimentally realize and characterize an isentropic thermodynamic engine with a quantum degenerate working fluid. The engine cycle interleaves compression and decompression of an optical trap with Feshbach enhancement and suppression of interparticle scattering to pump energy from a magnetic field to an optical field via a trapped ensemble of ultracold neutral lithium. We observe that quantum degeneracy significantly enhances the output power. We measure the dependence of efficiency and power on cycle time, and investigate the effects on engine performance of compression ratio and interaction strength ratio. Results agree quantitatively with both approximation-free finite temperature interacting numerical simulations and mean-field analytics. The experiments begin by preparing a Bose-Einstein condensate (BEC) of 300,000 to 1 million \({}^{7}\)Li atoms in a far-detuned crossed optical dipole trap with a mean trap frequency \(\bar{\omega}=2\pi\times 133\) Hz, at a temperature of 170 nK, corresponding to a condensate fraction of 0.95. After evaporative cooling to degeneracy, the \(s\)-wave interparticle scattering length is Feshbach-tuned to \(100a_{0}\), where \(a_{0}\) is the Bohr radius. This sets the initial condition (labeled \(A\) in figures). Interleaved variation of the trap intensity and Feshbach field then execute the thermodynamic cycle illustrated in Fig. 1a. Between steps \(A\) and \(B\) (stroke \(AB\)), the trap power is increased with Figure 1: Thermodynamic engine with a quantum degenerate working fluid. **(a)** Engine cycle in \(a_{s}-\bar{\omega}\) space. Color shows total energy per particle. **(b)** Top: BEC images after 12 ms of expansion at each step. 
Middle: Evolution of trap frequency (dotted) and scattering length (dot-dashed). Bottom: measured release energies for quantum degenerate (circles) and thermal (squares) working fluids during one engine cycle, normalized by the step \(A\) value. Dotted lines connect data points. Inset shows efficiency for each condensate fraction \(f_{c}\); line indicates theoretical maximum efficiency in the Thomas-Fermi regime. Error bars show standard error in all figures. a functional form such that \(\bar{\omega}\) increases from \(\bar{\omega}_{A}\) to \(\bar{\omega}_{B}\) at a constant rate. This is the compression stroke of the engine. In stroke \(BC\), the trap frequency is held constant as the interaction strength is ramped from \(a_{s}^{B}=100a_{0}\) to a larger value \(a_{s}^{C}\) at a constant rate. Subsequently, the trap frequency and then the interactions are ramped linearly back to their initial values. Such a cycle pumps energy between the magnetic and optical control fields, because the work performed by the strongly interacting gas during decompression is not equivalent to the work done to compress the more weakly interacting gas. Performing the strokes of the cycle in the order shown in Fig. 1a results in a net transfer of energy from magnetic to optical fields. Appendix B details an intuitively useful analogy between this isentropic cycle and the Otto heat-engine cycle. The second-quantized Hamiltonian describing the working fluid includes kinetic, interaction, and potential terms: \[\hat{H} =\hat{H}_{\rm kin}+\hat{H}_{\rm int}+\hat{H}_{\rm pot}, \tag{1}\] \[\hat{H}_{\rm kin} =\int\left(\frac{\hbar^{2}}{2m}\nabla\hat{\Psi}^{\dagger}\nabla \hat{\Psi}\right)\mathrm{d}^{3}r,\] \[\hat{H}_{\rm int} =\frac{g(t)}{2}\int\hat{\Psi}^{\dagger}\hat{\Psi}^{\dagger}\hat{ \Psi}\,\mathrm{d}^{3}r,\] \[\hat{H}_{\rm pot} =\sum_{\bf k}\hbar\omega_{\bf k}(t)\left(\hat{\Psi}_{\bf k}^{ \dagger}\hat{\Psi}_{\bf k}+\frac{1}{2}\right),\] where \(g(t)=4\pi\hbar^{2}m^{-1}a_{s}(t)\) is the interaction coupling constant, \(a_{s}(t)\) is the scattering length, \(m\) is the mass, and \(\omega_{\bf k}(t)\) is the trap frequency of mode \({\bf k}\) at time \(t\). To measure release energy (defined below) at each step, we first abruptly switch off the trap, quenching to zero the last term of the Hamiltonian \(\hat{H}_{\rm pot}\). Following 12 ms of free expansion we measure the column-integrated density distribution by absorption imaging and reconstruct the 3D distribution via Abel inversion [33]. After expansion, not only is the initial momentum distribution converted to a position distribution, but also essentially all the initial interaction energy is converted to kinetic energy [34], so the distribution provides a measure of the condensate's release energy \(E_{\rm rel}=E_{\rm kin}+E_{\rm int}\). We report release energies per atom, while plotted powers represent the total engine power. Engine performance can be characterized by efficiency and power. We define work done on the condensate as positive. As in refs. [7; 12], we define the efficiency \[\eta=-\frac{W_{AB}^{\rm las}+W_{CD}^{\rm las}}{W_{BC}^{\rm max}}, \tag{2}\] and the power \[P=-\frac{W_{AB}^{\rm las}+W_{CD}^{\rm las}}{T_{\rm cycle}}. \tag{3}\] Here \(W_{ij}^{k}\) is the work done on the BEC by the field \(k\) (laser or magnetic) in stroke \(ij\) of the cycle and \(T_{\rm cycle}\) is the total cycle time. 
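To make the bookkeeping of Eqs. (2)-(3) concrete, the sketch below (an editor's illustration with representative rather than measured parameter values) evaluates the per-stroke works of an ideal adiabatic cycle using the Thomas-Fermi scaling \(E_{\rm tot}\propto a_{s}^{2/5}\bar{\omega}^{6/5}\) invoked later in the text.

```python
# Editor's sketch: efficiency and power of an ideal adiabatic cycle in the TF limit.
import numpy as np

hbar = 1.054571817e-34          # J s
a0   = 5.29177210903e-11        # Bohr radius, m
m7   = 7 * 1.66053906660e-27    # 7Li mass, kg

def E_tot(N, a_s, wbar):
    """Thomas-Fermi total energy per particle, E = (5/7) mu."""
    abar = np.sqrt(hbar / (m7 * wbar))
    mu = 0.5 * hbar * wbar * (15 * N * a_s / abar) ** 0.4
    return 5 / 7 * mu

N = 5e5                                    # illustrative atom number
wA, nu = 2 * np.pi * 133.0, 2.0            # initial mean trap frequency, compression ratio
aA, kappa = 100 * a0, 2.0                  # initial scattering length, interaction ratio
corners = {"A": (aA, wA), "B": (aA, nu * wA),
           "C": (kappa * aA, nu * wA), "D": (kappa * aA, wA)}
E = {k: E_tot(N, a, w) for k, (a, w) in corners.items()}

W_laser  = (E["B"] - E["A"]) + (E["D"] - E["C"])   # compression / relaxation strokes
W_magnet = (E["C"] - E["B"]) + (E["A"] - E["D"])   # Feshbach (interaction) strokes
assert np.isclose(W_laser + W_magnet, 0)           # closed adiabatic cycle

eta = -W_laser / (E["C"] - E["B"])                 # Eq. (2); equals 1 - nu**(-6/5)
P   = -N * W_laser / 0.53                          # Eq. (3) for a 530 ms cycle, in W
print(f"eta = {eta:.3f}  (1 - nu^(-6/5) = {1 - nu ** (-6 / 5):.3f})")
print(f"total power (order of magnitude for these illustrative numbers): {P:.1e} W")
```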
While we measure the release energy rather than the total energy, these quantities can be simply related via the Gross-Pitaevskii description of an interacting gas. In the Thomas-Fermi regime, the total energy is given by [35] \[E_{\rm tot}=E_{\rm kin}+E_{\rm pot}+E_{\rm int}=\frac{5}{7}\mu, \tag{4}\] where \(\mu\) is the chemical potential, \(E_{\rm pot}=(3/7)\mu\), \(E_{\rm int}=(2/7)\mu\) and \(E_{\rm kin}\approx 0\). The ratio between the total energy and the measured energy is then \[\frac{E_{\rm tot}}{E_{\rm rel}}\approx\frac{E_{\rm pot}+E_{\rm int}}{E_{\rm int }}=2.5. \tag{5}\] Therefore, a power measurement based on release energy will be reduced from the true power by a factor of 2.5, while the measured efficiency will give the true value. As shown later, we have verified the validity of these assumptions using approximation-free numerics. Fig. 1b demonstrates the stark contrast between the behavior of degenerate and non-degenerate gases subjected to similar thermodynamic cycles. The thermal gas is prepared via inhibited evaporation at a small scattering length of \(57a_{0}\), resulting in a density of \(\sim\)\(6\times 10^{11}\,\mathrm{atoms/cm^{3}}\) at a temperature of 890 nK. The density of the condensate is \(\sim\)\(2\times 10^{13}\,\mathrm{atoms/cm^{3}}\), about 33 times larger, at a temperature of 170 nK. Much of this enhancement in density is a direct result of bosonic quantum statistics. While the thermal gas and condensate are prepared at different trap frequencies, the compression ratio \(\nu=\bar{\omega}_{B}/\bar{\omega}_{A}\approx 2\) is the same for both. As the interaction strength increases, the low density of the thermal gas results in a negligible change in release energy, while the Bose-enhanced density of the quantum degenerate sample results in a significant change. The measured efficiency of the engine with a thermal working fluid is consistent with zero, while the measured efficiency of the engine with a quantum degenerate working fluid is \(0.45\pm 0.1\), near the maximum theoretical value of 0.55 for this compression ratio. Reversibility can be tested by comparing the results of forward (\(A\)-\(B\)-\(C\)-\(D\)-\(A\)) and reverse (\(A\)-\(D\)-\(C\)-\(B\)-\(A\)) cycles. Fig. 2a shows the experimental results of such a comparison, demonstrating a high degree of reversibility and confirming that the reverse cycle results in a net transfer of energy from optical to magnetic fields, opposite to the forward cycle. Fig. 2b shows that the same cycle can be performed many times. The repeated return of the condensate to its initial release energy indicates that it can mediate energy transfer between magnetic and optical fields without significant net absorption of energy. To estimate the degree of adiabaticity, one can apply the Landau-Zener formalism [36] to approximate the probability of low-lying collective excitations [37]. Considering only the ground state and the lowest-lying collective excitation, the probability of diabatic passage between them is \(P_{D}=\exp(-1/\Theta)\), where the adiabaticity parameter \(\Theta=\dot{\omega}_{E}/(2\pi\omega_{E}^{2})\) depends on the energy gap \(\hbar\omega_{E}\) to the nearest excited level and its rate of change. Taking our trap to be approximately axially symmetric, and using the results of [37] with a known ramp speed \(\dot{\omega}=2\pi\times 1\) Hz/ms, we estimate a maximum adiabaticity parameter of \(\Theta\simeq 0.001\) for the cycles shown in Figs. 1 and 2, with cycle times of 530 ms.
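Two quick numerical checks of the estimates above (an editor's sketch; taking the collective-mode gap to be the \(\sqrt{2}\,\bar{\omega}\) quadrupole mode of an isotropic Thomas-Fermi condensate is an illustrative assumption, not a value taken from the paper).

```python
# (i) Thomas-Fermi energy partition and (ii) Landau-Zener adiabaticity estimate.
import numpy as np

# (i) Per-particle energies in units of the chemical potential mu.
E_kin, E_int, E_pot = 0.0, 2 / 7, 3 / 7
E_tot = E_kin + E_int + E_pot            # = 5/7 mu
E_rel = E_kin + E_int                    # what the time-of-flight images measure
print("E_tot / E_rel =", E_tot / E_rel)  # -> 2.5

# (ii) Adiabaticity of the trap ramp, Theta = wdot_E / (2*pi*omega_E**2).
wbar    = 2 * np.pi * 133.0              # mean trap frequency, rad/s
wdot    = 2 * np.pi * 1.0e3              # ramp speed 2*pi*1 Hz/ms, rad/s^2
omega_E = np.sqrt(2) * wbar              # assumed gap: quadrupole mode of a TF condensate
wdot_E  = np.sqrt(2) * wdot              # the gap tracks the trap-frequency ramp
Theta   = wdot_E / (2 * np.pi * omega_E ** 2)
print(f"Theta = {Theta:.1e}, diabatic-passage probability exp(-1/Theta) = {np.exp(-1 / Theta):.1e}")
```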
Varying the engine cycle time affects both efficiency and power. Fig. 3a compares measured efficiency to the ideal Thomas-Fermi efficiency, which is independent of cycle time. Measurements for a range of cycle times cluster near this ideal. However, at long cycle times three-body loss, one-body loss, and heating can degrade efficiency, while at the shortest cycle times a combination of technical limitations (for example inductive limits on magnet current ramp rate) and decreasing adiabaticity affect engine performance. Measuring engine power, we observe the expected inverse dependence over a range of cycle times, as shown in Fig. 3b. Power increases for faster cycles, deviating somewhat from the adiabatic prediction of Eq. 7 as the cycle time is reduced. The breakdown of engine performance at very short cycle times is also visible. These results indicate an optimal range of working speeds; as with any engine, there is a balance to be struck between power and efficiency [38]. Related theoretical work has explored the possibility of bypassing this trade-off using shortcuts to adiabaticity [6, 7, 39]. To investigate the validity of our theoretical approximations, we compare experimentally measured release energies to the results of fit-parameter free, finite temperature equilibrium simulations reproducing the experimental particle number, scattering length, and confinement. The engine's demonstrated reversibility and adiabaticity justify the use of multiple equilibrium simulations and an assumption of isentropic evolution. At steps \(A\) through \(D\), we model the system as an interacting confined Bose gas using a path integral over complex-conjugate coherent states fields \(\phi\) and \(\phi^{*}\) with the action given in continuous imaginary time notation as [40] \[S=\int_{0}^{\beta}d\tau\int d^{d}r\left\{\phi^{*}(\mathbf{r},\tau+ )\left[\partial_{\tau}-\hbar^{2}/(2m)\nabla^{2}\right.\right.\] \[\left.\left.+U_{\mathrm{ext}}(\mathbf{r})-\mu\right]\phi(\mathbf{r},\tau) +\frac{g}{2}\left[\phi(\mathbf{r},\tau)\phi^{*}(\mathbf{r},\tau+)\right]^{2}\right\}, \tag{6}\] where the notation \(\tau+\) indicates the field should be evaluated at an advanced position on the \(\tau\) contour, with \(\tau\in[0,\beta]\) and \(\beta=1/k_{B}T\). \(U_{\mathrm{ext}}(\mathbf{r})=\frac{1}{2}m(\omega_{x}^{2}x^{2}+\omega_{y}^{2}y^{2 }+\omega_{z}^{2}z^{2})\) is the confinement potential, with \(\omega_{i}\) the angular trap frequency in the \(i^{\mathrm{th}}\) direction. Interactions are modeled as pairwise contact repulsions. The chemical potential \(\mu\) is constrained such that total particle number \(N\) is constant in each simulation [41]. We sample configurations of this field theory using the complex Langevin (CL) technique, a stochastic method of evaluating integrals that is robust for actions with a sign problem [42, 43] such as that in Figure 3: Efficiency and power vs. cycle time. **(a):** Measured energy transfer efficiency \(\eta\) versus cycle time. Line shows theoretical efficiency from Eq. 7. **(b):** Measured engine power, quoted in quectoWatts (\(10^{-30}\) Watts), versus cycle time. Shaded region shows the theoretical prediction of Eq. 7 for the measured range of atom numbers. The power shown here is taken from release energy measurements; as discussed in the main text, the total power is a factor of 2.5 higher. Inset shows adiabaticity parameter \(\Theta\) versus cycle time. Figure 2: Engine reversibility and repeatability. 
**(a):** Comparison of the cycle performed in the “forward” (\(A\)-\(B\)-\(C\)-\(D\)-\(A\)) and “reverse” (\(A\)-\(D\)-\(C\)-\(B\)-\(A\)) directions, indicated by right- and left-pointing markers respectively. Light blue line shows results of analytic calculations (see Eq. 7); black line shows results of isentropic fully-interacting numerical simulations in both panels. **(b):** Measured release energy evolution during four repeated engine cycles. Simulation particle number is set to the mean particle number across each four-step cycle. Error bars are smaller than symbol size. Eq. 6. Observables are calculated by time averaging field operators, obtained from thermodynamic derivatives of the partition function. This method does not require simplifying approximations, even at finite temperature, so it fully accounts for quantum and thermal fluctuations. The use of fields rather than particle coordinates allows full-scale replication of the experimental system on readily-available GPU hardware. Further details of the numerical methods appear in Appendix A. Fig. 2a shows close agreement between numerically calculated and experimentally measured release energies. This correspondence provides additional retroactive justification for the isentropic assumption, and also demonstrates that approximate analytic expressions describing only the condensate with \(N_{c}\) particles provide relatively accurate estimates of the energy. The analytic formulas for release energy and total energy are [44] \[\frac{E_{\mathrm{rel}}}{N_{c}k_{B}T_{c}^{0}}=\frac{3\zeta(4)}{2\zeta(3)}t^{4}+ \alpha\frac{1}{7}\left((1-t^{3})^{2/5}(2+\frac{17}{2}t^{3})\right) \tag{7}\] and \[\frac{E_{\mathrm{tot}}}{N_{c}k_{B}T_{c}^{0}}=\frac{3\zeta(4)}{\zeta(3)}t^{4}+ \alpha\frac{1}{7}\left((1-t^{3})^{2/5}(5+16t^{3})\right), \tag{8}\] with \(\alpha=\mu_{0}/(k_{B}T_{c}^{0})\), \(\mu_{0}\) the zero-temperature chemical potential, \(t=T/T_{c}^{0}\) the reduced temperature, \(T_{c}^{0}\) the critical temperature of a harmonically confined Bose gas, \(\zeta(x)\) the Riemann zeta function, and \(k_{B}\) the Boltzmann constant. These results are accurate to within about 3% of approximation-free numerical simulations at the measured condensate fraction. The numerical results shown in Fig. 2b do indicate that \(E_{\mathrm{kin}}\) accounts for 10% to 15% of the total energy in steps \(A_{1}\) through \(A_{5}\), violating to some extent the Thomas-Fermi approximation. Evaluating the ratios \(P_{\mathrm{tot}}/P_{\mathrm{rel}}\) and \(\eta_{\mathrm{tot}}/\eta_{\mathrm{rel}}\) using energies obtained from simulations shows that the former is 1% to 3% larger than predicted while the latter is 0.1% to 0.6% larger. A natural parameter to tune in order to maximize efficiency is the compression ratio \(\nu=\bar{\omega}^{B}/\bar{\omega}^{A}\). Fig. 4a shows measured release energy evolution over one cycle for different values of \(\nu\). At higher compression ratios we observe distinctly higher release energies for steps \(B\) and \(C\) but no significant changes to the values at steps \(A\) and \(D\), in agreement with expectations from Eqs. 7 and 8. Fig. 4b demonstrates that increasing the compression ratio increases the efficiency \(\eta\), which asymptotically approaches unity. In the Thomas-Fermi approximation this can be understood by analyzing the change in energy per particle \(E_{\mathrm{tot}}\propto a_{s}^{2/5}\bar{\omega}^{\,6/5}\)[35]. 
Defining \(\kappa=a_{s}^{C}/a_{s}^{A}\) as the interaction ratio between steps \(C\) and \(A\), the efficiency Figure 4: Varying compression ratio. **(a):** Measured release energy evolution over one engine cycle for varying \(\nu=\bar{\omega}_{B}/\bar{\omega}_{A}\) at a fixed interaction ratio \(\kappa=a_{c}^{C}/a_{s}^{A}=2.4\). Lines show analytical prediction of Eq. 7. **(b):** Efficiency \(\eta\) as a function of compression ratio. Shaded region shows theoretical prediction of Eq. 9 for the measured range of atom numbers. Figure 5: Varying interaction strength ratio. **(a):** Measured release energy evolution over one engine cycle for varying interaction strength ratio \(\kappa=a_{s}^{C}/a_{s}^{A}\) at a fixed compression ratio \(\nu=1.94\). **(b):** Power output as a function of \(\kappa\). Shaded regions in both panels are theoretical predictions from Eq. 7 for the measured range of atom numbers. can be expressed as \[\eta=\frac{\kappa^{2/5}(\nu^{6/5}-1)-(\nu^{6/5}-1)}{\nu^{6/5}(\kappa^{2/5}-1)}=1- \nu^{-6/5}. \tag{9}\] As \(\nu\rightarrow\infty,\eta\to 1\), and \(\eta\) is independent of the interaction strength. This can be compared in loose analogy to the Otto cycle efficiency \(\eta_{\text{Otto}}=1-\nu^{1-\gamma}\) with \(\nu\) the compression ratio and \(\gamma\) the specific heat ratio. Similarly, we can isolate the effects of interaction strength ratio \(\kappa\) by holding the compression ratio constant. Following the same procedure used to derive Eq. 9, we find \(P\propto(\kappa^{2/5}-1)(\nu^{6/5}-1)\): the power is determined solely by the interaction ratio \(\kappa\) for a fixed compression ratio \(\nu\). Fig. 5a shows release energy evolution over one cycle for various interaction strength ratios corresponding to step \(C\) interaction strengths of 120, 160 and 200\(a_{0}\), at a constant compression ratio \(\nu=1.94\) and a particle number approximately 60% larger than in Fig. 2. Fig. 5b shows that the output power indeed increases with \(\kappa\), with a departure from theoretical predictions at larger values of \(\kappa\) a possible hint of beyond-mean-field behavior. These results emphasize the importance of interaction effects in the engine: Feshbach tuning is the key parameter controlling energy transfer between magnetic and optical fields. This power enhancement is completely decoupled from the boost to efficiency achieved through stronger compression, and from the power enhancement due to decreased cycle time. In conclusion, we have realized an isentropic thermodynamic engine with a quantum degenerate working fluid and demonstrated that it outperforms a classical counterpart. Experimental measurements of engine performance for various values of control parameters and degrees of adiabaticity are in good agreement with both low-temperature analytics and approximation-free numerical simulations. This work opens up a variety of interesting directions for future exploration. These include optimizing performance with shortcuts to adiabaticity [6; 7; 12; 39], realizing a quantum Otto refrigerator [45; 46; 47; 48; 49], applying similar techniques to quantum heat engines involving trapped reservoirs of hot and cold atoms, investigating the role of criticality [8], and experimentally exploring the effects of entanglement on quantum thermodynamic engines [50; 51; 52]. ###### Acknowledgements. We thank Kris Delaney and Ethan McGarrigle for theoretical contributions. D.W. 
acknowledges support from the National Science Foundation (2110584), the Air Force Office of Scientific Research (FA9550-20-1-0240), the Army Research Office (W911NF-20-1-0294), and the Eddleman Center for Quantum Innovation, and from the NSF QLCI program through grant number OMA-2016245. G.F. acknowledges support from NSF DMR-2104255 for the theoretical method development. R.S. and E.N.-M. acknowledge support from the UCSB NSF Quantum Foundry through the Q-AMASEi program (Grant No. DMR-1906325). Use was made of computational facilities purchased with funds from the National Science Foundation (CNS-1725797) and administered by the Center for Scientific Computing (CSC). The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center (MRSEC; NSF DMR 1720256) at UC Santa Barbara. ## Appendix A Numerical Simulations To compose the full cycle in simulations, we must fix particle number, \(N\), cell volume, \(V\), and total entropy, \(S\), around the cycle. All experimental observables are calculated by averaging field operators as described in the main text. Operators for \(N\), internal energy [53] and Helmholtz free energy [54] have been derived previously. Release energy is calculated using the operator for internal energy derived in [53], excluding the contribution from the trap. Entropy \(S\) is calculated from the Helmholtz free energy and internal energy. Note that this means \(N\) is an output of the field theory, not a degree of freedom to be sampled, so the algorithmic cost is virtually independent of \(N\). To perform the ensemble averaging, we allow the complex fields \(\phi\) and \(\phi^{*}\) to independently evolve in a fictitious complex Langevin (CL) dynamics scheme according to a set of coupled stochastic partial differential equations that generates a Markov chain of system configurations [42; 43]. Random noise correlations are chosen according to a fluctuation dissipation theorem [55; 56] that ensures that averages over CL time are equivalent to unbiased thermodynamic ensemble averages [57; 58], provided the CL dynamics have reached steady state prior to sampling. Although the operators may be complex, time or ensemble-average operators for physical observables are real. We evolve the CL dynamics equations using the pseudospectral method detailed in [41], which decouples \(\phi\) and \(\phi^{*}\) to linear order for numerical stability, and gives near-linear scaling with real space and \(\tau\) resolution. We converge spatial resolution and imaginary time resolution until finite size effects in \(E_{\text{rel}}\) and \(S\) are no longer significant. For the simulations reported here, we use up to \(160\times 160\times 128\) plane waves and 64 points in the \(\tau\) direction. On an NVIDIA A100, the average simulation in continuous 3D space of approximately half a million particles at 170 nK converges to a time-independent solution in 2.5 hours, and by 24 hours statistical errors of the mean are less than 0.05% of the mean. The longest simulations reported in this study had a duration of approximately 49 hours. We report only simulations computed with the average number of particles over the entire cycle. Initially, we performed two sets of simulations, one at \(N=5\times 10^{5}\) and one at \(N=6\times 10^{5}\), to account for experimental error in measured particle number of the data in Fig. 2a. However, the range of simulation results was smaller than the line width in Fig. 2a. 
Relative uncertainty in experimental \(a_{s}\) and \(\omega_{i}\) is smaller than the relative uncertainty in \(N\), so we expect our results to be accurate for the reported experimental conditions. Cell volume \(V\) is fixed such that the density distribution is well-contained within the simulation cell and finite size effects are no longer observed in the calculated release energy and entropy. For the largest sample, we used a simulation box of \(71\times 71\times 57\)\(\mu\)m. \(S\) is constrained by first computing the entropy at step \(A\) on the cycle using the experimental \(T_{A}\) as an input parameter, then adjusting \(T\) at all other points to maintain constant \(S\). Using this procedure, \(S\) remains within 2.5% of its initial value in all cycles. ## Appendix B Connection to the Otto Cycle Here we draw an analogy between this isentropic thermodynamic cycle and the classical Otto cycle. First, following [59], we define a "harmonic volume" \(\mathcal{V}=(\hbar\bar{\omega})^{-3}\) and write the total energy as \[E=\frac{5}{7}\frac{15^{2/5}}{2}m^{1/5}N\left(\frac{Na_{s}}{\hbar\mathcal{V}} \right)^{2/5}. \tag{10}\] The "harmonic" pressure can then be derived using the fact that it is conjugate to volume: \[\mathcal{P}=-\frac{\partial E}{\partial\mathcal{V}}\bigg{|}_{N}=\frac{15^{2/5 }}{7}m^{1/5}N\left(\frac{Na_{s}}{\hbar}\right)^{2/5}\mathcal{V}^{-7/5}, \tag{11}\] or equivalently by substituting the Thomas-Fermi density into the integral of the harmonic pressure given in [59]. The harmonic volume and pressure are not merely formal analogies; the harmonic volume can be associated with the physical volume that the particles occupy and the harmonic pressure can be associated with the mechanical equilibrium of the system. The total energy can then be rewritten as \[E=\frac{5}{2}\frac{15^{2/5}}{7}m^{1/5}N\left(\frac{Na_{s}}{\hbar\mathcal{V}} \right)^{2/5}=\frac{5}{2}\mathcal{P}\mathcal{V}, \tag{12}\] and by using the definition of the Thomas-Fermi energy, we can recover an analogy to the ideal gas law: \[\mathcal{P}\mathcal{V}=\frac{2}{7}N\mu. \tag{13}\] It is important to note that while \(\mu\) plays the role of an "effective temperature" it is unrelated to a thermal equilibrium. In our cycle, strokes of constant \(\mu\) are analogous to isothermal strokes in the classical cycle. We now have all of the pieces to establish a connection with the Otto cycle. The first stage is an adiabatic compression \(\bar{\omega}_{A}\rightarrow\bar{\omega}_{B}=\nu\bar{\omega}_{A}\) with compression ratio \(\nu\). This traces an adiabat in the \(\mathcal{P}\mathcal{V}\)-space.Using Eq. 13, the adiabat is defined by \[\mathcal{V}\mu^{5/2}=\text{constant},\quad\text{or}\quad\mathcal{V}^{7/5} \mathcal{P}=\text{constant}. \tag{14}\] The heating stroke in the classical Otto cycle is replaced by an interaction strength stroke, which keeps the harmonic volume unchanged but changes the chemical potential and the harmonic pressure, thus mimicking an "isochoric" process. We note that this is not an actual transfer of heat, as the thermodynamic entropy is constant. The final two strokes follow the same arguments presented above. A quantitative \(\mathcal{P}\mathcal{V}\) diagram of this thermodynamic cycle is shown in Fig. 6. 
This mathematical analogy enables an alternative derivation of the efficiency of the thermodynamic engine, allowing us to use the Otto cycle efficiency directly with the adiabatic exponent \(\gamma=7/5\): \[\eta=1-\left(\frac{\mathcal{V}_{B}}{\mathcal{V}_{A}}\right)^{ \gamma-1}=1-\left(\frac{\mathcal{V}_{B}}{\mathcal{V}_{A}}\right)^{2/5}\] \[=1-\left(\frac{\bar{\omega}_{A}}{\bar{\omega}_{B}}\right)^{6/5}=1 -\nu^{-6/5} \tag{15}\] This is the same expression as Eq. 9 in the main text. Figure 6: \(\mathcal{P}\mathcal{V}\) diagram for the thermodynamic engine. \(\mathcal{V}_{A}\) and \(\mathcal{P}_{A}\) are the harmonic volume and pressure evaluated at step \(A\) of the engine cycle. Here \(\kappa=10\) and \(\nu=1.5\).
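As a quick numerical cross-check of Eq. 9 and Eq. 15, the closed-form efficiency can be recovered from the Thomas–Fermi scaling \(E_{\mathrm{tot}}\propto a_{s}^{2/5}\bar{\omega}^{6/5}\) alone. The sketch below is illustrative only (plain Python with unit prefactors; it is not the field-theoretic simulation code described in Appendix A), but it reproduces \(\eta=1-\nu^{-6/5}\) and shows that the net work per cycle scales as \((\kappa^{2/5}-1)(\nu^{6/5}-1)\).

```python
def tf_energy(a_s, w_bar):
    """Thomas-Fermi total energy at a cycle point, up to a constant prefactor:
    E_tot ∝ a_s**(2/5) * w_bar**(6/5) at fixed particle number (the scaling used to derive Eq. 9)."""
    return a_s**0.4 * w_bar**1.2

def cycle_efficiency(nu, kappa):
    """A->B: compress (w -> nu*w); B->C: Feshbach ramp-up (a -> kappa*a);
    C->D: decompress; D->A: ramp interactions back down."""
    E_A = tf_energy(1.0, 1.0)
    E_B = tf_energy(1.0, nu)
    E_C = tf_energy(kappa, nu)
    E_D = tf_energy(kappa, 1.0)
    w_net = (E_C - E_D) - (E_B - E_A)   # ∝ (kappa**0.4 - 1) * (nu**1.2 - 1)
    return w_net / (E_C - E_B)          # energy injected during the B->C stroke

for nu in (1.5, 1.94, 3.0):
    print(nu, cycle_efficiency(nu, kappa=2.4), 1.0 - nu**(-1.2))
```

The two printed efficiency values agree for every compression ratio and are independent of \(\kappa\), as required by Eq. 9.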
2301.02690
Hypothesis Testing for Error Mitigation: How to Evaluate Error Mitigation
In the noisy intermediate-scale quantum (NISQ) era, quantum error mitigation will be a necessary tool to extract useful performance out of quantum devices. However, there is a big gap between the noise models often assumed by error mitigation techniques and the actual noise on quantum devices. As a consequence, there arises a gap between the theoretical expectations of the techniques and their everyday performance. Cloud users of quantum devices in particular, who often take the devices as they are, feel this gap the most. How should they parametrize their uncertainty in the usefulness of these techniques and be able to make judgement calls between resources required to implement error mitigation and the accuracy required at the algorithmic level? To answer the first question, we introduce hypothesis testing within the framework of quantum error mitigation and for the second question, we propose an inclusive figure of merit that accounts for both resource requirement and mitigation efficiency of an error mitigation implementation. The figure of merit is useful to weigh the trade-offs between the scalability and accuracy of various error mitigation methods. Finally, using the hypothesis testing and the figure of merit, we experimentally evaluate $16$ error mitigation pipelines composed of singular methods such as zero noise extrapolation, randomized compilation, measurement error mitigation, dynamical decoupling, and mitigation with estimation circuits. In total our data involved running $275,640$ circuits on two IBM quantum computers.
Abdullah Ash Saki, Amara Katabarwa, Salonik Resch, George Umbrarescu
2023-01-06T19:16:08Z
http://arxiv.org/abs/2301.02690v1
# Hypothesis Testing for Error Mitigation: ###### Abstract In the noisy intermediate-scale quantum (NISQ) era, quantum error mitigation will be a necessary tool to extract useful performance out of quantum devices. However, there is a big gap between the noise models often assumed by error mitigation techniques and the actual noise on quantum devices. As a consequence, there arises a gap between the theoretical expectations of the techniques and their everyday performance. Cloud users of quantum devices in particular, who often take the devices as they are, feel this gap the most. How should they parametrize their uncertainty in the usefulness of these techniques and be able to make judgement calls between resources required to implement error mitigation and the accuracy required at the algorithmic level? To answer the first question, we introduce hypothesis testing within the framework of quantum error mitigation and for the second question, we propose an inclusive figure of merit that accounts for both resource requirement and mitigation efficiency of an error mitigation implementation. The figure of merit is useful to weigh the trade-offs between the scalability and accuracy of various error mitigation methods. Finally, using the hypothesis testing and the figure of merit, we experimentally evaluate \(16\) error mitigation pipelines composed of singular methods such as zero noise extrapolation, randomized compilation, measurement error mitigation, dynamical decoupling, and mitigation with estimation circuits. In total our data involved running \(275,640\) circuits on two IBM quantum computers. ## 1 Introduction The current state of quantum computers is often dubbed as the noisy intermediate-scale quantum (NISQ) [1] regime. In this age, qubit count and qubit and gate quality still need to be improved before quantum error correction can be done successfully. Nonetheless, it is a range in which it should be possible to do computations that cannot efficiently be simulated on a classical computer. Since its inception there has been a burst of research looking for a quantum advantage in different areas like quantum machine learning [2, 3, 4, 5, 6, 7, 8, 9], quantum chemistry [10, 11, 12, 13, 14, 15], and quantum finance [16, 17, 18]. An excellent and comprehensive view of near-term quantum algorithms is contained here [19]. As one would expect, alongside this flurry of research on the algorithmic side arose the field of quantum error mitigation. Quantum error mitigation first rose in the ideas of Richardson Extrapolation and Probabilistic Error Cancellation (PEC) [20]. In the first method, meant to correct the expectation value of the operator \(E\), the user runs versions of the unmitigated circuit with increasing noise levels, which is done by stretching the gate times. This gives one a set \(\mathcal{E}=\{\langle E\rangle_{1},\langle E\rangle_{2}\ldots\langle E\rangle _{i}\ldots\langle E\rangle_{M}\}\), where for each increasing \(i\), \(\langle E\rangle_{i}\) was estimated using a circuit with longer gate times. One then extrapolates to the zero noise limit using the Richardson extrapolation technique borrowed from solving differential equations. In the second method, one does a careful tomographic characterization of some basis gates \(\mathcal{O}_{j\alpha}\) for one's device, where \(j\) is a label for a gate \(\mathcal{G}_{j}\) in the unmitigated circuit \(\mathcal{C}\), while \(\alpha\) will be a label indexing the linear combination for gate \(\mathcal{G}_{j}\). 
Thinking about \(\mathcal{C}\) this way means that \(\mathcal{C}\) has been replaced with an ensemble of circuits \(\{\mathcal{C}^{(k)}\}\) for which we will sample from with the selected circuit run on the quantum device. It was shown that this gives an unbiased estimate of the observable we would like to estimate. These techniques were experimentally demonstrated soon after their invention [21]. While these techniques were foundational to error mitigation, they had/have considerable barriers to the typical user of near-term quantum devices who receives access to quantum devices through the cloud. The first method requires pulse-level access which is not easily obtainable, while the second requires process tomography, which is a considerable overhead for a cloud user with limited access time. Motivated by the limitations of pulse-level access, a slew of works were produced investigating a digitized version of Richardson Extrapolation [22, 23, 24], while for PEC, recent work has been done to reduce the overhead of quantum tomography by combing it with cycle benchmarking and randomized compiling [25]. The digitized versions of ZNE have an ambiguity as to how to do the extrapolations, i.e., what functions to choose for real devices; solutions borrowing ideas from machine learning have been developed to work around this problem [26], namely Clifford Data Regression (CDR). Another exciting idea for error mitigation comes roughly from thinking of a spatialized version of the quantum Zeno effect where one considers copies of the noisy state [27], called Virtual State Distillation (VSD). There have been a couple of attempts to combine different error mitigation techniques to overcome the specific limitations of a single error mitigation algorithm; so [28, 29] came up with a framework that combines ZNE, CDR, and VSD, while [30] proposed to combine PEC and ZNE. These are all general-purpose techniques, but one can imagine using information about the problem in hand, for example, symmetries one expects a unitary to have to mitigate errors; [31, 32, 33] have developed ideas along these lines. As we proceed deeper into the NISQ era, a few questions need to be understood and answered. 1. What is the precise asymptotic scaling of resources requirements [34, 35]? The resource scaling will be critical as we incorporate some quantum error correction with quantum error mitigation [36, 37, 38]. 2. How can one easily implement and benchmark these methods for everyday use? The Mitiq[39] and Qermit[40] software packages are efforts in this direction. 3. Often, the methods make assumptions about the quantum noise like Markovianity, locality of noise in the operations, and time independence. These assumptions mean that it is unclear whether these methods work in everyday use and how an actual experiment on the cloud will perform. Therefore, it seems plausible that a sequence of quantum error mitigation techniques might need to be designed for a specific hardware and quantum algorithm. How then can we quantify our uncertainty and ensure that particular sequence is not only improving our results, but has been chosen in such a way that limited resources have been used in a near-optimal fashion? Our work is dedicated to answering the last question. For this, we introduce into quantum error mitigation a notion of hypothesis testing for quantifying our confidence in different _error mitigation pipelines_, where an _error mitigation pipeline_ is a single or a sequence of multiple error mitigation (EM) techniques. 
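As a toy illustration of the extrapolation step behind ZNE discussed above, the snippet below fits made-up expectation values measured at increasing noise scale factors and evaluates the fits at zero noise. It is only a sketch of the idea, not the implementation used in our experiments (Sec. 3), although the scale factors mirror the ones we use there.

```python
import numpy as np

scales = np.array([1.0, 3.0, 5.0])       # digital noise-amplification factors
e_noisy = np.array([1.62, 1.23, 0.97])   # hypothetical measured <E> values

lin = np.polyfit(scales, e_noisy, 1)     # linear fit
quad = np.polyfit(scales, e_noisy, 2)    # quadratic fit
print(np.polyval(lin, 0.0), np.polyval(quad, 0.0))  # zero-noise estimates
```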
We make the following contributions in this paper: 1. With a series of mitigation techniques in hand, all with their limitations and a desire to combine them, how can one evaluate which set or subset of techniques is responsible for mitigation? This is crucial since resources may be limited. For this question, we initiate the use of hypothesis testing within the framework of error mitigation. 2. Different error mitigation strategies produce overhead in different ways, i.e., increasing the number of distinct circuits one needs to run, increasing the number of shots one needs to run for any particular circuit and lastly increasing the depths of the circuit. For this issue we propose an entropic figure of merit that considers the different ways the overhead can appear. The rest of the paper is organized as follows: Sec. 2 introduces the hypothesis testing framework used in this paper to evaluate error mitigation (EM) pipelines. In Sec. 3, we discuss the details of error mitigation pipeline constructions and hardware experiments. Accuracy and resource trade-off considerations are described in Sec. 4 along with metrics encapsulating them. We present the experimental results in Sec. 5 and finally, we conclude in Sec. 6. ## 2 Hypothesis Testing: Modeling our uncertainty about error mitigation As a more and more cloud quantum computers come online, it will be necessary to do quantum error mitigation. On the other hand, the various complicated noise models on different platforms reduce the efficacy of error mitigation. This raises important questions: Is error mitigation actually working? How typical or representative are my conclusions about the efficacy of error mitigation for a specific device and how certain can I be? There are many EM techniques on hand; we can choose a single method or combine any number to mitigate errors in hardware experiments. For our work, we choose measurement error mitigation (MEM), randomized compiling (RC), zero noise extrapolation (ZNE), and dynamical decoupling (DD). However, we believe our ideas will be vital as the complexity of techniques increases when using other techniques. Since there are different versions of ZNE depending on the extrapolation method and the folding method, we introduce the following notation: \(\texttt{ZNE}^{(E)}\), where the presence of \(E\) represents whether _estimation circuits_[24] were used. Following Cirstoiu et al. in [40], we use the relative error mitigation (\(REM\)) metric to characterize the performance of an experiment, defined as \[\text{REM}=\frac{|\langle E\rangle_{ideal}-\langle E\rangle_{mitigated}|}{| \langle E\rangle_{ideal}-\langle E\rangle_{noisy}|} \tag{1}\] It measures how close a mitigated expectation value \((\langle E\rangle_{mitigated})\) is to the ideal expectation value \((\langle E\rangle_{ideal})\) compared to a noisy (unmitigated) expectation value \((\langle E\rangle_{noisy})\). A \(REM<1\) means the EM pipeline is mitigating the errors and pushing the mitigated expectation value closer to the ideal value compared to the unmitigated noisy value. On the other hand, a \(REM\geq 1\) indicates that the error mitigation is making the expectation value worse. Another crucial theme will be accuracy vs resource trade offs. 
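For concreteness, Eq. 1 amounts to a one-line computation; the function name and the numbers below are ours and purely hypothetical, serving only to illustrate the \(REM<1\) versus \(REM\geq 1\) distinction.

```python
def rem(e_ideal, e_noisy, e_mitigated):
    """Relative error mitigation, Eq. 1; REM < 1 means mitigation helped."""
    return abs(e_ideal - e_mitigated) / abs(e_ideal - e_noisy)

print(rem(e_ideal=2.0, e_noisy=1.2, e_mitigated=1.8))  # 0.25, i.e., errors were mitigated
```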
To motivate why this might be an interesting problem to contemplate, consider the following results clipped from Table 7: here \(\mathcal{P}_{4}\) is a pipeline consisting of \(\texttt{ZNE}+\texttt{DD}+\texttt{MEM}\), while \(\mathcal{P}_{8}\) is a pipeline consisting of \(\texttt{ZNE}+\texttt{RC}+\texttt{DD}+\texttt{MEM}\). This is a rather simple case of what could happen generally, i.e. investing more resources may not yield better results. We shall see that, in this case, \(\mathcal{P}_{8}\) requires resources roughly \(2.5\) times that of \(\mathcal{P}_{4}\), yet by choosing the right extrapolation function in ZNE on the right device, resources can be saved. This is a rather obvious trade-off to make but there will be situations where more nuanced judgement calls will need to be made. We are therefore offering tools for this analysis. We propose using statistical hypothesis testing. Specifically, we use _one-sample test of proportions_ to answer how good a pipeline is and _two-sample test of proportions_ to compare two different pipelines. We summarize the procedure in Figure 1. We want to emphasize that this sort of analysis or variant thereof is a necessary precursor to any ideas related to the application of volumetric benchmarking [41, 42, 43, 44], for the simple reason that in practice a user will need to do quantum error mitigation for experiments and is interested in the performance of a quantum device in this context. It behooves the user to make sure that the techniques being used have been properly chosen for the device at hand. ### One-sample test of proportions _One-sample test of proportions_ can determine if one outcome is more likely to happen than other in a binomial distribution. We employ this test to understand if an error mitigation pipeline succeeds (mitigating errors) more often than it fails. To decide whether an error mitigation attempt is successful or not, we use the concept of \(REM\)[40] (Eq. 1). We label the experiments with \(REM<1\) as SUCCESS and experiments with \(REM\geq 1\) as FAIL, and thus convert them to binary values for the one-sample test of proportions. Next, we construct the following null and alternate hypotheses to determine if SUCCESS is occurring significantly more than FAIL: * Null hypothesis, \(H_{0}:\hat{p}=p_{0}\) with \(p_{0}=0.5\). * Alternative hypothesis, \(H_{A}:\hat{p}>p_{0}\). \begin{table} \begin{tabular}{c c c} \hline Pipeline & \(REM\) & \(REM\) \\ & (\(p=1\)) & (\(p=2\)) \\ \hline \(\mathcal{P}_{8}\) & 0.49 & 0.30 \\ \(\mathcal{P}_{4}\) & 0.60 & 0.30 \\ \hline \end{tabular} \end{table} Table 1: Data from ibm_perth showing relative error mitigation (\(REM\)) Here, \(\hat{p}\) is the proportion of successful experiments, which is computed as \[\hat{p}=\frac{\text{\# of successful experiments}}{\text{\# total experiments }(n)}, \tag{2}\] and \(p_{0}\) is the known proportion. A value of \(0.5\) specifies that the EM pipeline is equally likely to succeed and fail, i.e., the error mitigation technique is indistinguishable from the case of no error mitigation. The test statistics are computed using the following formula: \[z^{*}_{(one-sample)}=\frac{\hat{p}-p_{0}}{\sqrt{\frac{p_{0}(1-p_{0})}{n}}} \tag{3}\] Based on the test statistics, the null hypothesis can or cannot be rejected. If the null hypothesis cannot be rejected, it will mean the EM pipeline is likely to generate random results, i.e., ineffective. 
On the other hand, if the null hypothesis is rejected it tells us that the pipeline mitigates errors more often than it fails. **Confidence interval: one-sample proportions** Having decided whether or not to accept or reject the null hypothesis, this framework allows us to attach a confidence level to our conclusion. For this work we shall compute the \(95\%\) confidence interval of the proportion of the successful experiments, \(\hat{p}\) as follows: \[CI=100\times(\hat{p}\pm z^{*}\times SE) \tag{4}\] where, standard error, \(SE=\sqrt{\hat{p}(1-\hat{p})/n}\) and \(z^{*}=1.96\) for \(95\%\) confidence level. Figure 1: Hypothesis testing for quantum error mitigation. Suppose we get \(CI=80\%\pm 10\%\) for a pipeline \(\mathcal{P}_{\mathcal{A}}\). It tells us that the pipeline successfully mitigates errors (i.e., generates \(REM<1\)) \(70\%\) to \(90\%\) of experiments. A user might be interested in pipelines with a higher confidence interval of successful trials. ### Two-sample test of proportions As discussed before, we are particularly interested in comparing two different EM pipelines. For that purpose, the _two-sample tests of proportions_ which can determine whether the two populations differ significantly on a specific characteristic will be used. In our proposed framework, two populations are two sets of experiments, each with a different error mitigation (EM) pipeline, such as one with only \(\mathtt{ZNE}\) vs. \(\mathtt{ZNE}\) + \(\mathtt{RC}\) + \(\mathtt{DD}\) + \(\mathtt{MEM}\). As the _specific characteristic_, we again pick the \(REM\)[40] value. Let \(\hat{p}_{A}\) and \(\hat{p}_{B}\) represent proportions of success for two EM pipelines, \(\mathcal{P}_{A}\) and \(\mathcal{P}_{B}\). We construct the null and alternate hypotheses as the following: * Null hypothesis, \(H_{0}:\hat{p}_{A}=\hat{p}_{B}\), i.e., there is no statistically significant difference between the two proportions. Both EM pipelines pass (or fail) with similar probability. * Alternative hypothesis, \(H_{A}:\hat{p}_{A}>\hat{p}_{B}\) i.e., \(\mathcal{P}_{A}\) has significantly higher probability of passing than \(\mathcal{P}_{B}\). To accept or reject the Null hypothesis, we set a significance level, \(\alpha=0.05\), and compute the test statistics (Z-score) as follows: \[z_{(two-sample)}^{*}=\frac{\hat{p}_{A}-\hat{p}_{B}}{\sqrt{p^{*}(1-p^{*})( \frac{1}{n_{A}}+\frac{1}{n_{B}})}}, \tag{5}\] Where the parameters are defined in the following table, **Confidence interval: two-sample proportions** After determining a significant difference between two pipelines from hypothesis testing, we compute the \(95\%\) CI for the difference between two population proportions as follows: \[CI=100\times((\hat{p_{A}}-\hat{p_{B}})\pm z^{*}\times SE) \tag{6}\] where, standard error \(SE=\sqrt{(\hat{p_{A}}(1-\hat{p_{A}})/n_{A})^{2}+(\hat{p_{B}}(1-\hat{p_{B}})/n _{B})^{2}}\) and \(z^{*}=1.96\) for \(95\%\) confidence level. The interval tells us how much more successful the pipeline \(\mathcal{P}_{A}\) is than pipeline \(\mathcal{P}_{B}\). Suppose we get \(CI=8\pm 2\%\). It will mean that error mitigation with \(\mathcal{P}_{A}\) will generate \(6\%\) to \(10\%\) more successfully error-mitigated experiments (i.e., \(REM<1\)) than \(\mathcal{P}_{B}\) at \(95\%\) confidence level. ## 3 Experimental Details Having setup the framework for evaluating our experimental results, we now dedicate this section to describe the experimental setup for which the results shall be presented and analyzed. 
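Before turning to the experimental details, the two tests of Section 2 can be condensed into a short sketch that follows Eqs. 2–5 and the one-sample interval of Eq. 4. The success counts are hypothetical (not our data), and the parameters of the two-sample statistic are those listed in Table 2.

```python
import math

Z95 = 1.96  # critical value used for the 95% confidence intervals

def one_sample_test(x, n, p0=0.5):
    """Eqs. 2-4: is the success proportion significantly above p0?"""
    p_hat = x / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return z, (100 * (p_hat - Z95 * se), 100 * (p_hat + Z95 * se))

def two_sample_test(x_a, n_a, x_b, n_b):
    """Eq. 5: does pipeline A succeed significantly more often than pipeline B?"""
    p_a, p_b = x_a / n_a, x_b / n_b
    p_star = (x_a + x_b) / (n_a + n_b)
    return (p_a - p_b) / math.sqrt(p_star * (1 - p_star) * (1 / n_a + 1 / n_b))

# Hypothetical counts of REM < 1 outcomes out of 10,000 binarized samples per pipeline.
print(one_sample_test(x=7200, n=10_000))
print(two_sample_test(x_a=9300, n_a=10_000, x_b=7200, n_b=10_000))
```

The returned z-scores are then compared against the critical value for the chosen significance level to accept or reject the null hypotheses stated above.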
\begin{table} \begin{tabular}{c l} \hline \hline Parameter & Description \\ \hline \(x_{A}\) & \# of successful experiments in \(\mathcal{P}_{A}\) \\ \(n_{A}\) & \# of total experiments in \(\mathcal{P}_{A}\) \\ \(x_{B}\) & \# of successful experiments in \(\mathcal{P}_{B}\) \\ \(n_{B}\) & \# of total experiments in \(\mathcal{P}_{B}\) \\ \(p^{*}\) & \(\frac{(x_{A}+x_{B})}{(n_{A}+n_{B})}\) \\ \(\hat{p}_{A}\) & \(\frac{x_{A}}{n_{A}}\) \\ \(\hat{p}_{B}\) & \(\frac{x_{B}}{n_{B}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Parameters involved in calculating the test statistics.

In this work, we evaluate \(16\) different pipelines, which are composed of the following error mitigation techniques: ZNE, MEM [45], RC [46], DD, and mitigation with estimation circuits [24]. Ref. [47] contains a summary of each technique. Table 3 describes the composition of each pipeline considered in this work.

\begin{table} \begin{tabular}{l l} \hline \hline Pipeline & Pipeline composition \\ \hline \(\mathcal{P}_{1}\) (\(\mathcal{P}_{1}^{E}\)) & ZNE (ZNE\({}^{E}\)) \\ \(\mathcal{P}_{2}\) (\(\mathcal{P}_{2}^{E}\)) & ZNE (ZNE\({}^{E}\)) + MEM \\ \(\mathcal{P}_{3}\) (\(\mathcal{P}_{3}^{E}\)) & ZNE (ZNE\({}^{E}\)) + DD (’X-X’ sequence) \\ \(\mathcal{P}_{4}\) (\(\mathcal{P}_{4}^{E}\)) & ZNE (ZNE\({}^{E}\)) + DD (’X-X’ sequence) + MEM \\ \(\mathcal{P}_{5}\) (\(\mathcal{P}_{5}^{E}\)) & ZNE (ZNE\({}^{E}\)) + RC \\ \(\mathcal{P}_{6}\) (\(\mathcal{P}_{6}^{E}\)) & ZNE (ZNE\({}^{E}\)) + RC + MEM \\ \(\mathcal{P}_{7}\) (\(\mathcal{P}_{7}^{E}\)) & ZNE (ZNE\({}^{E}\)) + RC + DD (’X-X’ sequence) \\ \(\mathcal{P}_{8}\) (\(\mathcal{P}_{8}^{E}\)) & ZNE (ZNE\({}^{E}\)) + RC + DD (’X-X’ sequence) + MEM \\ \hline \hline \end{tabular} \end{table} Table 3: Composition of error mitigation pipelines evaluated in this paper. Each pipeline is either standalone ZNE or other EM methods such as MEM, RC, and DD applied on top of ZNE. We also test the same pipelines where the expectation values fed to ZNE are corrected with noise estimation circuits [24]. Those pipelines are denoted by an extra superscript \(E\), such as \(\mathcal{P}_{1}^{E}\), and mentioned inside parentheses.

All our experiments were run on two \(7\)-qubit IBM devices, namely, ibm_lagos and ibm_perth, and physical qubits [0, 1, 2, 3] were used to run circuits on each device.

### Test Circuits

As the test circuit, we chose the QAOA-MaxCut circuit for all-connected \(4\)-node graphs with \(1\) layer. Each circuit consists of \(2\) parameters (\(\gamma,\beta\)). Figure 2(a) shows the \(4\)-node all-connected graph for the MaxCut problem, and Figure 2(b) shows the corresponding quantum circuit. We observed that the success rate of an EM pipeline could have some dependence on the circuit parameters chosen for the test circuit. In order to incorporate this in our analysis, we selected \(10\) pairs of \((\gamma,\beta)\), which cover the ideal expectation value range from \(-0.62\) to \(2.0\) (excluding the constant term in the operator) for the circuit in Figure 2.

Figure 2: (a) \(4\)-node all connected graph for MaxCut. (b) Corresponding \(4\)-qubit QAOA quantum circuit.

### Zero Noise Extrapolation

In zero noise extrapolation (ZNE), expectation values of an algorithm are computed at multiple noise levels. Next, the expectation values are regressed using a choice of fitting functions, such as linear and quadratic, to extrapolate the expectation values in the zero noise limit. We compute expectation values at \(3\) noise levels or scale factors in our experiments, [1.0, 3.0, 5.0]. Noise level \(=1.0\) corresponds to the original circuit, whereas to achieve noise levels \(3.0\) and \(5.0\), the noise in the original circuit has to be amplified. In our experiments, we adopt the digital noise amplification technique [39, 22] to amplify the noise. It involves _folding_ gates in the original circuit such that the logical function of the circuit remains the same, but the circuit experiences more noise due to elevated gate counts. We employ two types of gate folding techniques, namely, local CNOT folding and global folding. The basic idea of each type of folding is depicted in Figure 3. In the case of local CNOT folding, each CNOT gate is replaced by \(3\) or \(5\) consecutive CNOT gates to achieve a noise scale factor of \(3\) and \(5\), respectively. For the global folding, the original circuit (\(C\)) is appended by its inverse and itself (\(C^{-1}C\)), once for noise amplification factor \(3\) and twice for the factor \(5\). We use the Qiskit transpiler to get device-executable circuits. Then, circuits corresponding to noise levels \(1.0\), \(3.0\), and \(5.0\) are executed on the hardware, and expectation values are computed. We repeat the experiments \(15\) times to get a distribution of expectation values at each noise level. Each of these \(15\) data points of expectation values per noise level is computed with \(10,000\) shots. Finally, expectation values are fitted using both linear and quadratic regression methods to find the expectation value at the zero noise limit. We use _non-parametric bootstrapping_ [48] with \(10,000\) bootstrap samples to calculate the variance of the zero noise limit result.

### Measurement Error Mitigation

As the measurement error mitigator, we select the linear algebra-based approach [45] available in Qiskit Experiments. It involves running calibration circuits to construct a measurement calibration matrix, \(M_{calib}\). The calibration matrix is then inverted and multiplied with the noisy counts to get the measurement error mitigated counts, \(Count_{mitigated}=M_{calib}^{-1}Count_{noisy}\). For \(n\) measured qubits, the process involves running \(2^{n}\) calibration circuits (\(2^{4}=16\) calibration circuits for \(4\)-qubit experiments). We run each measurement calibration circuit with \(10,000\) shots.

### Randomized Compilation

Randomized compilation involves generating multiple random duplicates of an original circuit and aggregating the counts of the duplicates to get a noise-mitigated count. In this work, we follow the RC implementation used in [24]. It involves dressing CNOT gates in a circuit with Pauli gates as shown in Figure 4(a), such that it logically remains a CNOT. Here, two Pauli gates (P, Q) are added before the CNOT gate, and two (R, S) are added after the CNOT. These P, Q, R, and S gates are chosen independently and randomly for each CNOT gate in the circuit from Table 4.
By dressing CNOT gates as above, we construct \(50\) random duplicates per noise-scaled circuit and run each random duplicate with \(200\) shots so that we have \(50\times 200=10,000\) shots in aggregate. Figure 4: (a) Dressing of CNOT gate for randomized compilation (RC). Two Pauli gates each are added before and after a CNOT gate so that it logically remains a CNOT. (b) Dynamical Decoupling. Idle time on a qubit between two gates is dressed with the Idle (\(\tau/4\)) - X - Idle (\(\tau/2\)) - X - Idle (\(\tau/4\)) sequence. (The length of the arrows denoting the \(\tau/2\) and \(\tau/4\) times are not to scale.) Figure 3: Example of local CNOT and global folding used in zero noise extrapolation experiments. ### Dynamical Decoupling In the case of dynamical decoupling, a sequence of pulses is applied to the system with the purpose of decoupling it from the effects of the environment, usually to qubits that are temporarily idle. Ideally, we would like to apply infinitely many fast pulses, but we choose to work at the digital level of the control stack by applying the more coarse-grained circuit gates instead. For dynamical decoupling, we have a number of options for the gate sequence to apply, but we found that the simplest sequence, 'X-X', worked best in the case of our experiments. Figure 4b visually shows the idea of DD. Any qubit idle time is dressed with the following sequence of two 'X' gates: \(\tau/4\) (idle) - X - \(\tau/2\) (idle) - X - \(\tau/4\) (idle). ### Estimation Circuits Estimation circuits [24] are constructed by removing all single-qubit gates and keeping only the two-qubit CNOT gates in the circuit. Mitigation with estimation circuits involves computing a noise parameter \(p\), where \(1-p=\langle\sigma_{z}^{\otimes n}\rangle\). From hardware experiments, we observed that the noise might be too high for deeper circuits (e.g., in \(4\)-qubit circuits with a noise scale factor of \(5\)), such that \(\langle\sigma_{z}^{\otimes n}\rangle\) tends to 0. Correcting the noisy expectation \(\langle E_{noisy}\rangle\) value by dividing it by \(1-p\) (\(=\langle\sigma_{z}^{\otimes n}\rangle\to 0\)) overshoots the corrected expectation values. Thus, we modify the noise parameter \(p\) such that \[1-p=\frac{\#b_{000\ldots 00}}{N}, \tag{7}\] where \(\#b_{000\ldots 00}\) is the number of all-zero bitstrings and \(N\) is the total number of shots. Mitigation with a modified definition of the noise parameter \(p\) provided better results in our experiments. ## 4 Resource vs. Mitigation Efficiency Trade Off Considerations Having discussed how to judge different error mitigation techniques, we now deal with another equally important problem: how do we quantify the resource requirement for accuracy improvement? Some error mitigation methods require an increase in depth, while others require an increase in the number of circuits to run. Therefore, it is necessary to devise a figure of merit that captures these different overhead concerns. Also how do we account for the _complexity_ of an error mitigation pipeline? In this section, we introduce a classical information theory-inspired entropic figure of merit to capture the overhead and complexity of error mitigation pipelines and name it resource, \(R\). Finally, we combine \(R\) with two measures of error mitigation efficiency and introduce a single resource normalized metric for the overall quality of an error mitigation pipeline. 
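Before quantifying resources, note that the estimation-circuit correction of Eq. 7 above reduces to a single division; the function name and counts below are ours and invented purely for illustration.

```python
def correct_with_estimation_circuit(e_noisy, n_all_zero, n_shots):
    """Divide the noisy expectation value by 1 - p, where 1 - p is estimated as the
    all-zero bitstring fraction of the estimation circuit (modified Eq. 7)."""
    one_minus_p = n_all_zero / n_shots
    return e_noisy / one_minus_p

# Hypothetical: 6,400 all-zero outcomes in 10,000 shots of the estimation circuit.
print(correct_with_estimation_circuit(e_noisy=1.28, n_all_zero=6_400, n_shots=10_000))
```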
### An Entropic Figure of Merit for Resource

Let \(R\) be a figure of merit that includes both the overhead and the complexity. An obvious choice for \(R\) is to count the number of _shots_ or _samples_ taken on the quantum computer. While this is a good first-order approximation, it only partially captures the situation. For example, if mitigation method \(A\) requires \(1\) circuit with \(10,000\) shots and mitigation method \(B\) requires \(10\) circuits with \(1,000\) shots each, both require the same number of shots on the quantum computer. However, method \(B\) will be more challenging to run as the program must be modified during the experiment. This will introduce additional overhead in the classical control/support hardware and increase the challenge of scheduling circuits (jobs) on a cloud computing service. There are two options at hand: one could either develop models of run-time for each hardware provider or invent some proxy for the complexity of running experiments. This proxy should be calculable independently of the hardware provider and easily interpretable. We arrive at a proxy using the following reasoning: what increases the complexity of an algorithm from the experimental point of view is the number of _distinct_ circuits one needs to run. We therefore consider an algorithm that needs to run fewer distinct circuits to be simpler than one that needs to run more distinct circuits. From this point of view the concept of _entropy_ is a natural measure to consider. We define the resource metric \(R\) by the following equation: \[R=T(1+S) \tag{8}\] where \(T\) is, for now, the total number of _shots_ or _samples_ taken on the quantum computer, and \(S\) is the entropy, which is defined as: \[S=-\sum_{i}p_{C_{i}}\ln(p_{C_{i}}) \tag{9}\] where \(p_{C_{i}}\) is the probability of the distinct circuit \(C_{i}\), and each \(C_{i}\) can be thought of as a symbol for an error mitigation algorithm \(\mathcal{A}\), i.e., \(\mathcal{A}=\{C_{1},C_{2},\ldots C_{N}\}\). The question then becomes, what probability \(p_{C_{i}}\) should we attach to these circuits? At first pass, \(p_{C_{i}}=\frac{N_{C_{i}}}{N}\), where \(N_{C_{i}}\) is the number of shots run for circuit \(C_{i}\) and \(N=\sum_{i}N_{C_{i}}\) is the total number of shots needed for algorithm \(\mathcal{A}\). However, not all circuits in an algorithm will necessarily have the same number of gates, depth, or duration. This should also be accounted for. For each circuit \(C_{i}\), let \(D_{C_{i}}\) be its duration. We then need an updated measure for quantum computer usage. We can use the duration-weighted total number of shots to estimate the time spent running on the quantum hardware: \[T=\sum_{i}N_{C_{i}}D_{C_{i}} \tag{10}\] Finally, different circuits required to implement the algorithm \(\mathcal{A}\) may need a different number of qubits. If the circuit is dense enough and parallelization of all gates is impossible, then in some sense a circuit with more qubits is harder to implement than one with fewer qubits. Direct comparison of circuits with different numbers of qubits is not straightforward, but we can modify Eq. 10 to a qubit-number-weighted average as follows: \[T=\sum_{i}N_{C_{i}}D_{C_{i}}Q_{C_{i}}^{norm} \tag{11}\] where \(Q_{C_{i}}^{norm}=Q_{C_{i}}/Q_{max}\) is the qubit count of circuit \(C_{i}\) normalized by the maximum qubit count (\(Q_{max}\)) among circuits in \(\mathcal{A}\).
We can thus define the probability of a circuit be \[p_{C_{i}}=\frac{N_{C_{i}}D_{C_{i}}Q_{C_{i}}^{norm}}{T} \tag{12}\] ### Combining Mitigation Efficiency and Resource Now that we have defined an entropic figure of metric that takes into account the total number of circuits, the number of shots for each circuit, and the duration of each circuit, we include the error mitigation efficiency achieved by the algorithm. We use \(REM\) to measure the mitigation efficiency as it quantifies to what extent an error mitigation method can push the mitigated expectation value to the ideal value--the lower the \(REM\) value, the better the mitigation. We compute the \(95\%\) confidence interval of the median \(REM\) of a pipeline and take the upper interval in our calculations to be conservative. Next, we argue that only accounting for \(REM\) values may not tell us the complete picture. A pipeline may have overall low \(REM\) but fail to mitigate errors more frequently than others. Thus, we need to consider the proportion of SUCCESS (i.e., how often a pipeline results in REM \(<1\)) of the pipeline along with median \(REM\). We compute the \(95\%\) confidence interval of the proportion of SUCCESS and, this time, take the lower interval to be conservative. We name the lower interval as _pipeline success rate_ (\(PSR\)). The higher the \(PSR\), the better the pipeline. Finally, the overall quality of an error mitigation pipeline depends on three factors: resource (\(R\)), \(REM\), and \(PSR\). Among these three factors, we want \(R\) and median \(REM\) (\(\epsilon\)) to be lower and \(PSR\) to be higher. Thus, we can combine them to compute a single metric for the overall quality of mitigation, \(M\), as follows: \[M=\frac{PSR(\%)}{\epsilon\times R} \tag{13}\] Eq. 13 can be tweaked in different ways, such as one can take weighted versions of each factor to prioritize one over the other. Instead of \(REM\), \(\epsilon\) can represent different metrics depending on the problem. While computing \(R\), \(D\) can be taken as more generic _depth_ (or, _weighted-depth_) of a circuit instead of the device-dependent raw duration. However, in this paper, we take the non-weighted version of the parameters with \(\epsilon\) as \(REM\) and \(D\) as the duration (in seconds) of a quantum circuit. ## 5 Results and Analysis ### Data Preparation and Analysis As per Eq. 1, three parameters are needed to compute an \(REM\) value, namely \(\langle E\rangle_{ideal}\), \(\langle E\rangle_{\lambda=0}\) (expectation value in the zero noise limit, i.e., the mitigated expectation value \(\langle E\rangle_{mitigated}\)), and \(\langle E\rangle_{\lambda=1}\) (expectation value with no scaling of gates or circuits, i.e., the noisy expectation value \(\langle E\rangle_{noisy}\)). The mean (\(\mu_{\lambda=1}\)) and standard deviation (\(\sigma_{\lambda=1}\)) of \(\langle E\rangle_{\lambda=1}\) are computed from running \(15\) experimental runs of original circuit at noise level \(1\) for a specific set of parameters. Using _non-parametric bootstrapping_[48], we approximated the mean (\(\mu_{\lambda=0}\)) and standard deviation (\(\sigma_{\lambda=0}\)) of the regression parameter, \(\langle E\rangle_{\lambda=0}\). As the statistical hypothesis testing framework described in Section 2 works on binarized \(REM\) values, we follow the procedure in Algorithm 1 to produce data for the hypothesis testing framework. 
``` 0:\(\{\gamma^{(i)},\beta^{(i)}\}_{i=1}^{i=10}\), \(\{\mu_{\lambda=0}^{(i)},\sigma_{\lambda=0}^{(i)}\}_{i=1}^{i=10}\), \(\{\mu_{\lambda=1}^{(i)},\sigma_{\lambda=1}^{(i)}\}_{i=1}^{i=10}\) 0: Success and Failure Counts for an EM pipeline 1:procedureData Preparation 2:\(\texttt{success}\gets 0\) 3:failure\(\gets 0\) 4:for\(i=1\) to \(i=10\)do 5: Generate 1000 samples for \(\langle E\rangle_{\lambda=0}^{(i)}\) from \(\mathcal{N}(\mu_{\lambda=0}^{(i)},\sigma_{\lambda=0}^{(i)})\) 6: Generate 1000 samples for \(\langle E\rangle_{\lambda=1}^{(i)}\) from \(\mathcal{N}(\mu_{\lambda=1}^{(i)},\sigma_{\lambda=1}^{(i)})\) 7: Plug values into Eq. 1 to get 1000 REM Values 8:for\(j=1\) to \(j=1000\)do 9:if\(REM<1\)then 10: success\(\leftarrow\texttt{success}+1\) 11:elseif\(REM\geq 1\)then 12: failure\(\leftarrow\texttt{failure}+1\) 13: Return Success, Failure ``` **Algorithm 1** Characterize Success Rate for an EM pipeline The statistical tests are applied on the experimental data collected from hardware experiments on the ibm_lagos and ibm_perth. We ran \(91,880\) circuits on each device per folding type to collect the hardware data. When evaluating the EM pipelines individually (Figure 5), we see that \(\mathcal{P}_{3}\), \(\mathcal{P}_{4}\), \(\mathcal{P}_{7}\), \(\mathcal{P}_{8}\), \(\mathcal{P}_{7}^{E}\), and \(\mathcal{P}_{8}^{E}\) are the best for both devices and for both fitting types. Each of these pipelines has dynamical decoupling (\(\mathtt{DD}\)) in common, which hints towards an essential role of \(\mathtt{DD}\) on these devices. Already we are beginning to see in our toy experiments how trade off decisions can be important. Randomized Compiling can increase the number of circuits one needs to run by an order of magnitude or 2 but if it is the case the \(\mathtt{DD}\) is what is playing the essential role in mitigation and producing results that are close to more complicated pipeline, there is some room left for resource considerations. Figure 5: Confidence intervals of proportions of successful experiments for two devices, ibm_lagos and ibm_perth, and both linear and quadratic fitting. Bluer cells indicate a higher proportion of success. \(\mathcal{P}_{3}\), \(\mathcal{P}_{4}\), \(\mathcal{P}_{7}\), \(\mathcal{P}_{8}\), \(\mathcal{P}_{7}^{E}\), and \(\mathcal{P}_{8}^{E}\) resulted in bluest intervals across devices and fitting types in general. Blank cells indicate that the proportion of SUCCESS is not significantly higher than the proportion of FAIL for a pipeline. This observation emphasizes the importance of assigning statistical confidence to the mitigation capability of EM pipelines. A user may achieve mitigation with blank pipelines at times, but the performance will be inconsistent, which is paramount. An interesting observation from the ibm_lagos device is that a few pipelines on this device are failing to generate significantly higher proportion of SUCCESS than FAIL, and those cells are left blank in the confidence interval plot in Figure 4(a). This does not mean that the pipelines _always_ fail to mitigate error, instead the pipelines may mitigate errors in some cases, but the probability is not distinguishable from a \(50\)-\(50\) draw or is in fact biased towards increasing the error within our confidence. This observation highlights the importance of attaching a statistically-supported confidence to pipelines. 
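For reference, the data-preparation step of Algorithm 1 above can be written in a few lines for a single \((\gamma,\beta)\) pair; the full procedure simply accumulates these counts over the ten parameter pairs. The means and standard deviations below are placeholders rather than experimental values.

```python
import numpy as np

rng = np.random.default_rng(0)

def success_failure_counts(e_ideal, mu0, sd0, mu1, sd1, n_samples=1000):
    """Sample mitigated (lambda=0) and unmitigated (lambda=1) expectation values from
    their estimated normal distributions, compute REM, and binarize the outcomes."""
    e_mitigated = rng.normal(mu0, sd0, n_samples)
    e_noisy = rng.normal(mu1, sd1, n_samples)
    rem = np.abs(e_ideal - e_mitigated) / np.abs(e_ideal - e_noisy)
    successes = int(np.sum(rem < 1.0))
    return successes, n_samples - successes

# Placeholder inputs for one (gamma, beta) pair.
print(success_failure_counts(e_ideal=2.0, mu0=1.85, sd0=0.05, mu1=1.30, sd1=0.04))
```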
We also consider the probability of type II errors, i.e., if we fail to reject the null hypothesis and conclude that an EM pipeline does not mitigate error more frequently when in reality it does. Due to large sample size (\(10,000REM\) values), the probability of type II error in our cases was practically zero. Next, we compare the linear and quadratic fitting variants of the \(6\) best-performing pipelines, i.e., \(\mathcal{P}_{3}\), \(\mathcal{P}_{4}\), \(\mathcal{P}_{7}\), \(\mathcal{P}_{8}\), \(\mathcal{P}_{7}^{E}\), and \(\mathcal{P}_{8}^{E}\) using the two-sample test of proportions. The confidence intervals are plotted in Figure 6. A blue cell indicates that the pipeline on the column (linear fitting) has a better proportion of SUCCESS than the pipeline on the row (quadratic fitting), and a red cell specifies the opposite. The diagonal elements of the heatmaps compare the linear and quadratic variants of the same pipeline. On ibm_lagos, linear variants of \(\mathcal{P}_{7}\) and \(\mathcal{P}_{8}\) perform better than most quadratic variants in general, except \(\mathcal{P}_{7}\) (quadratic). \(\mathcal{P}_{7}\) (quadratic) does not have a statistically significant difference from either \(\mathcal{P}_{7}\) (linear) or \(\mathcal{P}_{8}\) (linear). On the other hand, quadratic variants of \(\mathcal{P}_{3}\) and \(\mathcal{P}_{4}\) perform worse than all linear variants on ibm_lagos. On ibm_perth, the trend is different. Inspecting the diagonal elements provides a mixed scenario. For instance, quadratic versions of \(\mathcal{P}_{3}\), \(\mathcal{P}_{4}\), and \(\mathcal{P}_{7}^{E}\) have better success proportions than their respective linear versions. In contrast, \(\mathcal{P}_{7}\) and \(\mathcal{P}_{8}\) quadratic versions are marginally worse than their linear counterparts. We observe no significant difference between linear and quadratic fittings in the case of \(\mathcal{P}_{8}^{E}\). \(\mathcal{P}_{3}\) (quadratic) and \(\mathcal{P}_{8}^{E}\) (linear) have the edge over other pipelines except between themselves as \(\mathcal{P}_{3}\) (quadratic) has all red cells in the row whereas \(\mathcal{P}_{8}^{E}\) (linear) has all blue cells on the column. _The results from two different devices indicate that the choice of fitting function is device dependent._ ### Resources vs. Mitigation Efficiency In this section, we analyze the resource vs. (mitigation) efficiency of different pipelines using the metrics proposed in Sec. 4. As a first pass, the metric \(M\) provides a user with a single metric to compare and choose a pipeline for error mitigation. The user may choose the pipeline with highest \(M\). Consider Table 5 which tabulates the resources and accuracy for a partial set of pipelines (full set of values are tabulated in Table 6 for ibm_lagos and in Table 7 for ibm_perth). \(\mathcal{P}_{3}\), \(\mathcal{P}_{4}\), and \(\mathcal{P}_{8}\) are the top three pipelines in terms of \(M\) values for ibm_lagos. While \(\mathcal{P}_{8}\) has better \(REM\) and \(PSR\) values, its \(M\) is slightly inferior to \(\mathcal{P}_{3}\) due to higher resource requirements. An extreme case of the trade-offs between accuracy and resources required can be seen for \(\mathcal{P}_{7}^{E}\) which has the lowest median \(REM\), however mitigation with estimation circuits almost doubles the required number circuit runs leading to a significantly higher resource value. This leads to a low \(M\) value. 
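To make the bookkeeping behind these \(M\) values explicit, the sketch below evaluates Eqs. 8-13 for a hypothetical pipeline; the circuit list, durations, and the \(PSR\)/\(REM\) inputs are invented for illustration and are not taken from the tables.

```python
import math

def resource_R(circuits):
    """Eqs. 8-12: `circuits` holds (shots N, duration D, qubit count Q) per distinct circuit."""
    q_max = max(q for _, _, q in circuits)
    weights = [n * d * q / q_max for n, d, q in circuits]  # N * D * Q^norm per circuit
    T = sum(weights)                                       # Eq. 11
    S = -sum(w / T * math.log(w / T) for w in weights)     # Eqs. 9 and 12
    return T * (1 + S)                                     # Eq. 8

def quality_M(psr_percent, rem, R):
    """Eq. 13: overall quality of an error mitigation pipeline."""
    return psr_percent / (rem * R)

# Hypothetical pipeline: three noise-scaled circuits plus 16 MEM calibration circuits.
circuits = [(10_000, 1e-4, 4), (10_000, 3e-4, 4), (10_000, 5e-4, 4)] + [(10_000, 2e-5, 4)] * 16
R = resource_R(circuits)
print(R, quality_M(psr_percent=90.0, rem=0.35, R=R))
```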
We observe similar trend in the ibm_perth as well with \(\mathcal{P}_{3}\) and \(\mathcal{P}_{4}\) having the top two \(M\) values and \(\mathcal{P}_{7}^{E}\) getting penalized for resource consumption. Figure 6: \(95\%\) confidence intervals for comparison between two pipelines. A more positive confidence interval (blue) indicates that the pipeline on the column (X-axis) has a higher proportion of SUCCESS than the pipeline on the row. In contrast, a negative (red) interval means the opposite. Blank cells indicate no statistically significant difference between the two compared pipelines. The metric \(M\) equips a user with a simple tool to assess an EM pipeline. In addition to the tables, we introduce the concept of _resource-efficiency space_ which visually shows the trade-offs between resources and mitigation efficiency. The resource-efficiency space is a scatter plot where each marker is placed at a point with the X-coordinate as the inverse of \(REM\) and the Y-coordinate as the \(PSR\). The marker size corresponds to the resource (\(R\)), i.e., the more prominent marker, the higher the resource requirement of the pipeline. We expect a pipeline with a lower resource requirement (i.e., smaller markers) to have a higher \(PSR\) and a lower \(REM\). Thus, the upper-right corner is the favorable spot in the resource-efficiency space. Figure 7 shows the resource-efficiency space for ibm_lagos for both linear and quadratic fitting. It visually displays the insights from clipped Table 5. For instance, \(\mathcal{P}_{8}\) with quadratic fitting stands out among other pipelines in terms of mitigation quality on ibm_lagos. It has high \(PSR\) and \(1/REM\) with modest resource expenditure. Thus, if mitigation quality is the priority for users, they should choose \(\mathcal{P}_{8}\) (quadratic). On the other hand, if users have resource constraints, then they may opt for pipelines with smaller resources such as \(\mathcal{P}_{3}\) (quadratic) and \(\mathcal{P}_{4}\) (quadratic). These pipelines have lower resource needs (smaller markers) with reasonable mitigation qualities (\(PSR\): \(93.52\%\) and \(93.39\%\) and \(REM\): \(0.3126\) and \(0.2435\), respectively). The plots also tell us that expending more resources does not guarantee monotonically improved mitigation efficiency. For example, \(\mathcal{P}_{5}^{E}\) and \(\mathcal{P}_{6}^{E}\) have two of the highest resource consumption while providing only modest mitigation. Figure 7: Resource-Efficiency space for experiments on ibm_lagos with local folding \(\mathtt{ZNE}\). \(\mathcal{P}_{8}\) with quadratic fitting stands out among other pipelines in terms of mitigation efficiency as it has a high \(PSR\) and the lowest \(REM\). 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Device & Pipeline & R & REM (lin) & REM (quad) & PSR (lin) & PSR (quad) & M (lin) & M (quad) \\ \hline & \(\mathcal{P}_{3}\) & \(2.8494\) & \(0.5397\) & \(0.3126\) & \(0.9858\) & \(0.9352\) & \(64.1009\) & \(104.9981\) \\ ibm\_lagos & \(\mathcal{P}_{4}\) & \(3.8006\) & \(0.5097\) & \(0.2435\) & \(0.9609\) & \(0.9339\) & \(49.6020\) & \(100.9129\) \\ & \(\mathcal{P}_{8}\) & \(9.9802\) & \(0.2815\) & \(0.0992\) & \(1.0000\) & \(0.9981\) & \(35.5945\) & \(100.8167\) \\ & \(\mathcal{P}_{7}^{E}\) & \(20.1687\) & \(0.1170\) & \(0.1240\) & \(0.9915\) & \(0.9780\) & \(42.0164\) & \(39.1058\) \\ \hline & \(\mathcal{P}_{3}\) & \(2.3061\) & \(0.6899\) & \(0.4267\) & \(0.9623\) & \(0.9948\) & \(60.4875\) & \(101.0925\) \\ ibm\_perth & \(\mathcal{P}_{4}\) & \(3.1189\) & \(0.5950\) & \(0.3007\) & \(0.9055\) & \(0.9811\) & \(48.7956\) & \(104.6124\) \\ & \(\mathcal{P}_{7}^{E}\) & \(16.5019\) & \(0.1578\) & \(0.1625\) & \(0.9258\) & \(0.9363\) & \(35.5540\) & \(34.9154\) \\ \hline \hline \end{tabular} \end{table} Table 5: Resource usage and mitigation efficiency values for partial set of pipelines. This table exemplifies the resource-efficiency trade-offs in error mitigation. One can compose more complex EM pipeline by combining various EM methods and achieve better mitigation. However, this improved mitigation may come at the cost of more resources. This contrasting dynamics is reflected on the \(M\) values of the pipelines. For example, \(\mathcal{P}_{7}^{E}\) on both devices high \(PSR\) and low \(REM\) values. However, due to significant resource needs, the pipelines have the lowest \(M\) values. Besides, most pipelines are located in the upper left corner of the space, which means most pipelines can mitigate errors frequently (\(PSR\uparrow\)); however, the extent of mitigation is small (\(REM\uparrow\)). On ibm_perth, we again observe a crowded upper-left corner in the plot. However, the choice of pipeline on this device is different from ibm_lagos. If the user prioritizes the mitigation quality, then \(\mathcal{P}_{7}^{E}\) (both linear and quadratic variants) is the pipeline of choice as it is located in the upper right corner of the plot (\(PSR\uparrow\), \(REM\downarrow\)). If less resource is the priority, then \(\mathcal{P}_{4}\) with quadratic fitting is a decent compromise between quality and resource. Nonetheless, \(\mathcal{P}_{7}\) and \(\mathcal{P}_{8}\), both linear and quadratic variants, top the chart with the highest \(PSR\)s, i.e., these pipelines guarantee a statistically higher chance of mitigating errors. This trend is common in both ibm_lagos and ibm_perth for both fitting types. Lastly, one could note that the X-axis (\(1/REM\)) of the resource-efficiency space is longer for ibm_lagos than the X-axis for ibm_perth, which potentially indicates that the ibm_lagos may be a better device than ibm_perth. Finally, by analyzing the resource-quality space, we can gather the following insights: * The choice of the pipeline with the best quality error mitigation and fitting type is device dependent. Our proposed statistical analysis framework can aid quantum cloud users in selecting the best-performing pipeline. * Adopting a full-fledged pipeline with ZNE, MEM, RC, and DD is a statistically safe choice. * While there is a resource vs. quality trade-offs among pipelines, adding more resources does not always guarantee a better error mitigation quality. 
* The choice of an error mitigation pipeline (and +device) depends on the user's priority between resource and mitigation quality. ## 6 Conclusion '_How good is my quantum error mitigation?_' is an open question in the community. In this paper, we formalized the answer to this question with statistical hypothesis testing and introduced a more inclusive measure for resource and mitigation efficiency of any EM method. Our work will enable researchers to evaluate their quantum error mitigation techniques formally. We demonstrated this framework by experimentally evaluating the performance of \(16\) quantum error mitigation pipelines composed of one or more atomic techniques such as zero noise extrapolation, measurement error mitigation, randomized compilation, and dynamical decoupling. We introduced the use of the one-sample test of proportions to determine if the proportion of SUCCESS (i.e., \(REM<1\)) of a pipeline is significantly higher than \(0.5\) and the use of the two-sample test of proportions to evaluate if one pipeline has a significantly higher proportion of SUCCESS than the other. As error mitigation generally requires more circuits, shots, and maybe more qubits, we introduced an entropic figure of merit that succinctly incorporates the number of circuits, shots, and qubit count. Finally, we combined the measure of pipeline success (\(PSR\)), amount of mitigation (\(REM\)), and resource consumption (\(R\)) in a single metric, \(M\), for overall mitigation. Although the metric \(M\) is aligned with the shots normalized \(REM\) as in [49], the accounting Figure 8: Resource-Efficiency space for experiments on ibm_perth with local folding ZNE. for the number of circuits and qubit usage, in addition to shots, makes it a complete metric. While a single metric for overall mitigation is easy to follow, it may mask the interplay among factors. Thus, we introduce the concept of _resource-efficiency space_, which visualizes the landscape of mitigation efficiency (\(PSR\) and \(REM\)) and resource (\(R\)) trade-offs. The evaluation frameworks and metrics we proposed are extensible to different hardware, test circuits, and error mitigation methods. For instance, we can bring probabilistic error cancellation and virtual state distillation and compose new EM pipelines, which can be evaluated similarly using the hypothesis testing framework. Besides, in our experiments, we adopted the digital version of ZNE while a pulse-level ZNE[21] is available in the literature. It remains an exciting prospect to evaluate pipelines with pulse-level ZNE. Pulse-level ZNE may enable shorter duration circuits and hence, a lower resource \(R\), owing to more granular control over noise amplification by pulse stretching. It can also enable mitigating errors in deeper circuits. However, pulse re-calibration may be required for the stretched pulses, which consumes additional resources. Thus, understanding the compromises between gains from shorter duration circuits and loss from pulse re-calibration remains an important open question, along with the comparison of mitigation efficiencies of digital and pulse-level ZNE. Another direction of analysis can be testing the same pipeline but with different resource expenditures. For example, pipeline \(\mathcal{P}_{5}\) (\(2\)NE + RC) can be run with more shots per random duplicate and/or with more random duplicates. The statistical tests and resource-efficiency analysis can reveal if there is an efficiency gain from more resources and what is a sweet-spot for resource vs. 
efficiency of a pipeline. All things considered, we propose a flexible and formal statistical testing framework and metrics for evaluating error mitigation techniques in this paper. Using these techniques, researchers can perform a wide range of analyses and better assess their mitigation methods. \begin{table} \begin{tabular}{c c c c c c c c c} \hline Pipeline & T & S & R & REM & REM & PSR & PSR & M & M \\ & & & & (lin) & (quad) & (lin) & (quad) & (lin) & (quad) \\ \hline \(\mathcal{P}_{1}\) & 1.4662 & 0.9434 & 2.8494 & 0.9739 & 0.9973 & 0.8236 & 0.5118 & 29.6773 & 18.0106 \\ \(\mathcal{P}_{2}\) & 1.5979 & 1.3786 & 3.8006 & 0.9697 & 1.0018 & 0.8258 & \(-\) & 22.4069 & \(-\) \\ \(\mathcal{P}_{3}\) & 1.4662 & 0.9434 & 2.8494 & 0.5397 & 0.3126 & 0.9858 & 0.9352 & 64.1009 & 104.9981 \\ \(\mathcal{P}_{4}\) & 1.5979 & 1.3786 & 3.8006 & 0.5097 & 0.2435 & 0.9609 & 0.9339 & 49.6020 & 100.9129 \\ \(\mathcal{P}_{5}\) & 1.5413 & 4.8539 & 9.0227 & 1.0229 & 0.9963 & \(-\) & 0.5143 & \(-\) & 5.7214 \\ \(\mathcal{P}_{6}\) & 1.6730 & 4.9656 & 9.9802 & 1.0182 & 0.9840 & \(-\) & 0.5513 & \(-\) & 5.6135 \\ \(\mathcal{P}_{7}\) & 1.5413 & 4.8539 & 9.0227 & 0.3634 & 0.1848 & 1.0000 & 0.9994 & 30.4985 & 59.9354 \\ \(\mathcal{P}_{8}\) & 1.6730 & 4.9656 & 9.9802 & 0.2815 & 0.0992 & 1.0000 & 0.9981 & 35.5945 & 100.8167 \\ \(\mathcal{P}_{1}^{E}\) & 2.9293 & 1.6361 & 7.7220 & 0.8613 & 1.0640 & 0.9398 & \(-\) & 14.1304 & \(-\) \\ \(\mathcal{P}_{2}^{E}\) & 3.0609 & 1.8624 & 8.7614 & 0.8586 & 1.0737 & 0.9407 & \(-\) & 12.5056 & \(-\) \\ \(\mathcal{P}_{3}^{E}\) & 2.9293 & 1.6361 & 7.7220 & 0.3569 & 0.1784 & 0.8948 & 0.8941 & 32.4690 & 64.9040 \\ \(\mathcal{P}_{4}^{E}\) & 3.0609 & 1.8624 & 8.7614 & 0.3623 & 0.1820 & 0.8951 & 0.8941 & 28.2002 & 56.0726 \\ \(\mathcal{P}_{5}^{E}\) & 3.0806 & 5.5470 & 20.1687 & 0.9973 & 0.9471 & 0.5086 & 0.6280 & 2.5286 & 3.2875 \\ \(\mathcal{P}_{6}^{E}\) & 3.2122 & 5.6044 & 21.2147 & 0.9929 & 0.9363 & 0.5217 & 0.6522 & 2.4768 & 3.2836 \\ \(\mathcal{P}_{7}^{E}\) & 3.0806 & 5.5470 & 20.1687 & 0.1170 & 0.1240 & 0.9915 & 0.9780 & 42.0164 & 39.1058 \\ \(\mathcal{P}_{8}^{E}\) & 3.2122 & 5.6044 & 21.2147 & 0.1527 & 0.1583 & 0.9920 & 0.9736 & 30.6233 & 28.9921 \\ \hline \end{tabular} \end{table} Table 6: Resource and mitigation efficiency values of pipelines on ibm_lagos with local folding ZNE. Some pipelines do not generate significantly higher proportions of SUCCESS, and those pipelines are marked with a dash (\(-\)) in the \(PSR\) columns. Again, these pipelines reiterate the importance of statistical testing for pipelines. By assigning confidence intervals to the proportion of SUCCESS, we can parameterize the uncertainty of pipelines and understand which pipelines can consistently mitigate errors (and which ones cannot).
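The one-sample and two-sample tests of proportions summarized above are standard and can be reproduced with off-the-shelf tools. Below is a minimal sketch using the `proportions_ztest` function from statsmodels; the SUCCESS counts are hypothetical placeholders, not the experimental tallies reported in this paper.

```python
# Minimal sketch of the hypothesis tests described above (hypothetical counts).
from statsmodels.stats.proportion import proportions_ztest

# One-sample test: is the proportion of SUCCESS (REM < 1) of a pipeline
# significantly higher than 0.5?
successes, trials = 87, 100          # hypothetical tally for one pipeline
stat, pval = proportions_ztest(count=successes, nobs=trials,
                               value=0.5, alternative='larger')
print(f"one-sample: z = {stat:.3f}, p = {pval:.4f}")

# Two-sample test: does pipeline A have a significantly higher proportion
# of SUCCESS than pipeline B?
counts = [87, 62]                    # hypothetical SUCCESS counts for A and B
nobs = [100, 100]
stat, pval = proportions_ztest(count=counts, nobs=nobs, alternative='larger')
print(f"two-sample: z = {stat:.3f}, p = {pval:.4f}")
```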
2307.06778
Photon rings as tests for alternative spherically symmetric geometries with thin accretion disks
The imaging by the Event Horizon Telescope (EHT) of the supermassive central objects at the heart of the M87 and Milky Way (Sgr A$^\star$) galaxies, has marked the first step into peering at the photon rings and central brightness depression that characterize the optical appearance of black holes surrounded by an accretion disk. Recently, Vagnozzi et. al. [S.~Vagnozzi, \textit{et al.} arXiv:2205.07787 [gr-qc]] used the claim by the EHT that the size of the {\it shadow} of Sgr A$^\star$ can be inferred by calibrated measurements of the bright ring enclosing it, to constrain a large number of spherically symmetric space-time geometries. In this work we use this result to study some features of the first and second photon rings of a restricted pool of such geometries in thin accretion disk settings. The emission profile of the latter is described by calling upon three analytic samples belonging to the family introduced by Gralla, Lupsasca and Marrone, in order to characterize such photon rings using the Lyapunov exponent of nearly bound orbits and discuss its correlation with the luminosity extinction rate between the first and second photon rings. We finally elaborate on the chances of using such photon rings as observational discriminators of alternative black hole geometries using very long baseline interferometry.
Luís F. Dias da Silva, Francisco S. N. Lobo, Gonzalo J. Olmo, Diego Rubiera-Garcia
2023-07-13T14:36:29Z
http://arxiv.org/abs/2307.06778v2
# Photon rings as tests for alternative spherically symmetric geometries with thin accretion disks ###### Abstract The imaging by the Event Horizon Telescope (EHT) of the supermassive central objects at the heart of the M87 and Milky Way (Sgr A\({}^{\star}\)) galaxies, has marked the first step into peering at the shadow and photon rings that characterize the optical appearance of black holes surrounded by an accretion disk. Recently, Vagnozzi et. al. [S. Vagnozzi, _et al._ arXiv:2205.07787 [gr-qc]] used the claim by the EHT that the size of the shadow of Sgr A\({}^{\star}\) can be inferred by calibrated measurements of the bright ring enclosing it, to constrain a large number of spherically symmetric space-time geometries. In this work we use this result to study some features of the first and second photon rings of a restricted pool of such geometries in thin accretion disk settings. The emission profile of the latter is described by calling upon three analytic samples belonging to the family introduced by Gralla, Lupsasca and Marrone, in order to characterize such photon rings using the Lyapunov exponent of nearly bound orbits and discuss its correlation with the luminosity extinction rate between the first and second photon rings. We finally elaborate on the chances of using such photon rings as observational discriminators of alternative black hole geometries using very long baseline interferometry. ## I Introduction One of the core results of the theory of black holes within General Relativity (GR) is the universality of the Kerr hypothesis, namely, that every black hole of the universe is described by two parameters: mass and angular momentum (since the electric charge is typically neglected in astrophysical environments) [1]. This hypothesis is deeply anchored in the uniqueness theorems, though the addition of matter fields allows to find hairy black holes under certain circumstances [2]. Since testing the validity of the Kerr hypothesis is nearly impossible (see however [3]), one typically performs instead null-tests, i.e., tests with electromagnetic or gravitational waves on the compatibility of the Kerr black hole with current observations, and the feasibility of every alternative to it (be a modified black hole or a horizonless compact object) to also match such observations [4]. Recently, the progress in the development of very long baseline interferometry (VLBI) has paid off via the imaging by the Event Horizon Telescope (EHT) Collaboration of the central supermassive objects at the heart of the M87 [5] and Milky Way (Sgr A\({}^{\star}\)) [6] galaxies. Such observations report the presence of a bright ring of radiation enclosing a central brightness depression, which are the two most salient features of images found using General Relativistic Magneto-HydroDynamic (GRMHD) simulations of the accretion flow surrounding a Kerr black hole. The first such feature comes from the presence of a region of bound unstable orbits in the effective potentials seen by photons (the photon shell [7]), allowing for strongly lensed trajectories that orbit the black hole \(n\) (half-)times. If the disk is optically thin (i.e. transparent to its own radiation), such trajectories create a thin _photon ring_ whose features interpolate between two extreme scenarios. 
On one end, if the accretion disk has a spherically symmetric inflow, the photon ring converges to the critical curve in the image plane of the observer while the central brightness depression entirely fills it: this is the typical _shadow_ of Falcke's view [8; 9], and the second feature of interest in black hole images. On the opposite end it is the _wedding cake_ scenario, in which the photon ring is decomposed into an infinite sequence of rings [10], each of them being a gravitationally lensed image of the direct emission region but exponentially dimmed in luminosity, the latter captured by the Lyapunov exponent of nearly bound geodesics in a given geometry. The latter scenario happens not only in infinitesimally thin-disk settings but also as long as there are gaps in the emission region, and it is characterized by the fact that the sequence of rings converges to the apparent position of the equatorial event horizon, below of which one finds the inner shadow [11]. This field of imaging compact objects illuminated by their accretion disk (many times simply referred to as shadows) is thus entering a golden era in which it represents a promising opportunity to both test the reliability of the Kerr solution and to explore the plausibility of any of its alternatives to describe observed images. However, such an opportunity can be spoiled by the large uncertainties in the modelling of the disk together with its entanglement with the background geometry in the generation of such images, rendering the quest for reliable observational discriminators between the Kerr solution and its many alternatives a main object of interest in the scientific community. This can be pursued via the two main features of such images - photon ring and central brightness depression -, since they carry a wealth of information about the underlying space-time geometry and, consequently, on the case to test GR itself [12; 13]. For the former feature, as larger values of \(n\) are considered, the theoretical properties of the corresponding photon rings grow less dependent on the features of the disk and more on the background geometry, thus offering us a way out of the "contamination" enacted by the disk [14]. The most promising target is the \(n=2\) ring. Indeed, despite its exponentially-suppressed luminosity, its sharp features makes it die off slowly in the Fourier domain and, as a consequence, tends to dominate the interferometric signal in very high-frequencies, leading to the VLBI field. While its detection hinges on observational capabilities that surpass those currently available, either by requiring observations at higher frequencies or longer baselines, these are expected to be achievable with the next generation EHT (ngEHT) observations [15] and through space-based interferometry. In this regard, prospects have been recently reviewed in the literature, see e.g. [16; 17; 18; 19; 20]. Given the fact that one expects significant deviations in the shape, diameter, width, and relative luminosity of the \(n=2\) ring [21] (and of the \(n=1\) one to a lesser extent [22]) for alternative non-Kerr geometries, observations of this ring could be potentially used to constrain them [23]. 
For the latter feature, while the size of the outer edge of the central brightness depression cannot be directly determined by the EHT Collaboration given the fact that it cannot measure luminosity contrasts below \(\sim 10\%\) of its peak, it has been recently reported that it can be _indirectly_ inferred (after proper calibration accounting for theoretical and observational uncertainties) by a correlation between the observed size of the bright ring (caused by the disk's direct emission) and the shadow's size itself [24]. Assuming the validity of this correlation and the assumptions upon which it holds, a collective effort was made by Vagnozzi et. al. in [25] to constrain the parameter space of a plethora of alternative spherically symmetric geometries motivated by fundamental or phenomenological considerations. One should note, however, that this observation alone does not single out specific metrics to represent current images but rather their compatibility with them, since the black hole shadow is known to be degenerate [26]. The main aim of this paper is to combine the two ingredients discussed above, taking a restricted pool of the alternative spherically symmetric geometries considered by Vagnozzi et. al., and generate their images when illuminated by an equatorial orbital infinitesimally-thin accretion disk (i.e. the object is seen face-on). Such an assumption on the geometry of the disk is motivated in order to enhance the opportunity to clearly visualize the photon rings. Indeed, the accretion disk features are the weakest thread in the generation of black hole images due to the not so well understood physics of the magnetized plasma, so different pools of assumptions upon its optical, geometrical, and emission properties (among others [27]) are needed in order to optimize our chances to seek any putative deviation from the Kerr metric under different physical conditions. In our case, the emission properties are set via the consideration of a bunch of analytic models introduced by Gralla, Lupsasca and Marrone (hereafter GLM models) in [12] via its matching with the results of GRMHD simulations. We shall use three picks of such models: one truncated at a certain distance from the event horizon in order to clearly isolate the \(n=1\) and \(n=2\) photon rings, and two extending to the event horizon with different peaks and decays. In the former model we provide captions of the photon rings, while in the latter models we supply the full images for all these geometries. In all cases we compute the Lyapunov exponent of nearly-bound orbits (a sensitive quantity to deviations from Kerrness) and seek for the presence of any correlation with the actual suppression of luminosity between the \(n=2\) and \(n=1\) rings, a potential observable of VLBI projects. This paper is organized as follows: in Sec. II we set the theoretical framework, build the (null) geodesic motion in spherically symmetric backgrounds, upgrade the formalism to account for those cases in which the matter source is a magnetic monopole from non-linear models of electrodynamics, discuss the notion of critical curves, briefly describe the EHT calibrated measurement of Sgr A\({}^{\star}\) shadow, and set the emission (GLM) models used in this work. In Sec. III we provide an explanation of the choice of spherically symmetric geometries from Vagnozzi et. al. and the refinements made upon the space of parameters in each case. The generation of images and discussion of the physical results obtained is provided in Sec. 
IV, and we conclude in Sec. V with further thoughts and prospects. ## II Theoretical Framework ### Null geodesics in spherically symmetric space-times We consider the motion of null particles in a spherically symmetric space-time suitably written as \[ds^{2}=-A(r)dt^{2}+B(r)dr^{2}+C(r)d\Omega^{2}. \tag{1}\] Note that these three functions can always be reduced to just two via a change of coordinates; however, the radial function \(C(r)\) cannot always be trivialized to \(C(r)=r^{2}\), so this shape grants us a larger freedom to work with. We assume such particles to follow geodesics of the background metric as \(g_{\mu\nu}k^{\mu}k^{\nu}=0\), where \(k^{\mu}=\dot{x}^{\mu}\) is the photon's wave vector, and a dot represents a derivative with respect to the affine parameter. Using the freedom granted by the spherical symmetry of the system, we can assume the motion to take place along \(\theta=\pi/2\) without any loss of generality, so that the above equation reads \[-A\dot{t}^{2}+B\dot{r}^{2}+C\dot{\phi}^{2}=0. \tag{2}\] Using the conserved quantities of the system, namely, the energy per unit mass, \(E=A\dot{t}\), and the angular momentum per unit mass, \(L=C\dot{\phi}\), the above equation can be suitably rewritten (after re-absorbing a factor \(L^{2}\) in the definition of the affine parameter) as \[AB\dot{r}^{2}=\frac{1}{b^{2}}-V_{eff}(r)\, \tag{3}\] where \(b\equiv\frac{L}{E}\) is the impact parameter, and the effective potential reads as \[V_{eff}=\frac{A(r)}{C(r)}. \tag{4}\] Unstable bound orbits correspond to critical (maxima) points of the effective potential and are an essential theoretical concept for the characterization of black hole images. They are found as the solutions of the equations \[b_{c}^{2}=V_{eff}^{-1}(r_{ps})\,\ V_{eff}^{\prime}(r_{ps})=0\,\ V_{eff}^{\prime\prime}(r_{ps})<0\, \tag{5}\] where primes denote derivatives with respect to the radial coordinate \(r\). Here \(r_{ps}\) is dubbed as the critical curve (or, alternatively, as the photon sphere in this spherically symmetric case) and \(b_{c}\) as the critical impact parameter. This is so because light rays issued from the observer's screen backwards towards the black hole split the impact parameter space into two well-distinguished regions: those with \(b>b_{c}\) find a turning point at some radius \(r>r_{ps}\), while those with \(b<b_{c}\) eventually intersect the event horizon of the black hole. Those that have \(b\gtrsim b_{c}\) approach asymptotically the critical curve and may linger there indefinitely, turning an arbitrarily large number of times before being released to asymptotic infinity. The angle turned by every photon upon deflection by the black hole is found by re-writing Eq. (3) into the more convenient form \[\frac{d\phi}{dr}=-\frac{b}{C(r)}\frac{\sqrt{AB}}{\sqrt{1-b^{2}\frac{A(r)}{C(r)}}}. \tag{6}\] This equation is the main tool we shall be using for the ray-tracing behind the generation of black hole images. This is done by integrating a set of light trajectories (for a range of \(b\)) backwards from the observer's screen and classifying them according to the number \(n\) of times they have (half-)circled the black hole. This is of interest since, provided that there are gaps in the emission region of the disk (i.e. as long as the disk is not completely spherical), and assuming that the disk is transparent to its own radiation (i.e.
optically thin) at the emission frequencies, every trajectory turning \(n\)-half times will boost its luminosity by picking additional photons from the disk on its winding around the black hole. This is the reason behind the existence, in this scenario, of a nested sequence of photon rings on top of the direct emission of the disk, the latter corresponding to those photons that travel from the disk to the observer without undergoing additional turns around the black hole. The characterization of such photon rings is the main object of interest in this work. ### Effective null geodesics from non-linear electrodynamics The above formalism needs to be upgraded when the matter fields threading the geometry belong to non-linear electrodynamics (NED): generalizations of Maxwell electrodynamics via new contributions in the field invariants. In such a case, it has been long recognized in the literature that photons propagate along null geodesics of an effective metric induced by the non-linearity of the matter fields [28; 29]. Such a scenario is actually of physical interest, since many alternative spherically symmetric geometries proposed in the literature have been identified to be supported by specific NED models. This way, we are driven to generalize the equations of geodesic motion in NED-supported geometries for the purpose of casting images of the corresponding objects. NEDs are generically defined by two field invariants associated to electric and magnetic fields; however, for purely electric or magnetic fields (the latter being the case of interest for our purposes in this work) only one of them is non-vanishing, defined via \[F=\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\, \tag{7}\] so that NEDs correspond to choices of a function \(\mathcal{L}(F)\). It was proven in [28] (see also [29]) that in geometries threaded by such NEDs, the effective geometry such photons propagate on, \(g_{\mu\nu}^{e}k^{\mu}k^{\nu}=0\), is related to the background geometry via the relation \[g_{e}^{\mu\nu}=\mathcal{L}_{F}g^{\mu\nu}-\mathcal{L}_{FF}F^{\mu}{}_{\alpha}F^{\alpha\nu}\, \tag{8}\] where \({\cal L}_{F}\equiv d{\cal L}/dF\). For purely magnetic configurations \(A_{\mu}=q_{m}\cos\theta\delta^{\phi}_{\mu}\), where \(q_{m}\) is the magnetic charge, the NED field equations provide a single solution for the invariant \(F\) for every NED as given by \[F=\frac{q_{m}^{2}}{2r^{4}}. \tag{9}\] With these definitions we can repeat the derivation performed in Sec. II.1. We first propose a suitable line element of the form \[ds_{e}^{2}=H(r)(-A(r)dt^{2}+B(r)dr^{2})+h(r)C(r)d\Omega^{2}\, \tag{10}\] so that the two functions \(H(r)\) and \(h(r)\) encode the deviations between the effective and background metrics. By working out the relation (8) in this magnetically charged case, such functions are explicitly given in the present framework by \[H(r)={\cal L}_{F}+2F{\cal L}_{FF}\,,\qquad h(r)={\cal L}_{F}. \tag{11}\] This way Eq.(2) gets replaced by \[H(-A\dot{t}^{2}+B\dot{r}^{2})+hC\dot{\phi}^{2}=0\, \tag{12}\] where now the energy reads as \(E=HA\dot{t}\) and the angular momentum as \(L=hC\dot{\phi}\), allowing to rewrite the previous equation as \[AB\left(\frac{dr}{d\phi}\right)^{2}=\frac{C^{2}h^{2}}{H^{2}}\left(\frac{1}{b^{2}}-V^{e}_{eff}(r)\right)\, \tag{13}\] and once again we can identify a potential \(V^{e}_{eff}(r)\) under these effective geodesics as \[V^{e}_{eff}(r)\equiv\frac{A(r)}{C(r)}\frac{H(r)}{h(r)}.
\tag{14}\] Unstable bound photon orbits must thus satisfy conditions (5), but now with respect to the new effective potential (14). Obviously, this means that there will be differences in the quantitative values of the critical curve and its associated impact parameter and, consequently, in the features of the corresponding optical appearances. To work out the latter, we just need to (once again) find an equation for the deflection angle as a function of the radial coordinate, which is just Eq. (13) rewritten as \[\frac{d\phi}{dr}=\pm\frac{b}{C(r)}\frac{H(r)}{h(r)}\frac{\sqrt{AB}}{\sqrt{1-b ^{2}\frac{A(r)}{C(r)}\frac{H(r)}{h(r)}}}\, \tag{15}\] and thus we are done regarding this aspect. ### EHT shadow boundary constraints On the observer's screen, the boundary of the shadow separates scattered orbits from the captured ones (thus being tightly attached to the critical impact parameter given by the definitions (5)), and marks the location of the apparent image of the photon ring(s). We can explicitly write the critical curve as the solution of the equation \[C^{\prime}(r_{ps})A(r_{ps})-C(r_{ps})A^{\prime}(r_{ps})=0. \tag{16}\] The shadow's radius in this view is defined as the lensed image of the photon sphere, that is \[r_{sh}=\sqrt{\frac{C(r)}{A(r)}}\bigg{|}_{r=r_{ps}}\, \tag{17}\] and thus it coincides in value with the critical impact parameter itself, \(b_{c}\). In most spherically symmetric space-times the radial function trivializes to \(C(r)=r^{2}\) and one recovers a more well-known expression, \(r_{sh}=r/\sqrt{A(r)}|_{r=r_{ps}}\). One should note that these expressions correspond to Falcke's view of a shadow filling completely the region inner to the critical curve. Furthermore, the shadow's radius is actually directly unobservable given the lack of photon sensitivity below a certain threshold of the peak intensity. The EHT collaboration copes with this by appealing to the radius of the bright ring of radiation created by the direct emission - which is measurable - as a proxy for the size of the shadow subject to two main conditions [24]: 1. A sufficiently bright source and strongly lensed supply of photons near the horizon is present and; 2. The accretion flow is geometrically thick and furthermore optically thin at the wavelengths the EHT operates. In addition to these two conditions, a calibration factor must be introduced, which accounts for both theoretical and observational sources of uncertainty in how reliable such a proxy between the bright ring's radius and the shadow's size is. This inference is possible for Sgr A\({}^{\star}\) thanks to the fact that its mass-to-distance ratio \(M/D\) is known via the tracking of the orbits of the so-called \(S\)-stars. In particular, the \(S0-2\) star [30] has been tracked by two instruments (Keck and VLTI), whose combined (and uncorrelated, since they are obtained from two independent instruments) data allow to quantify the fractional deviation \(\delta\) between the inferred radius of a Schwarzschild black hole of angular size (dimensionless form) \(\theta_{sh,Sch}=6\sqrt{3}\theta_{g}\), where \(\theta_{g}=M/D\) is its angular gravitational radius, as [24] \[\delta\equiv\frac{r_{sh}}{r_{sh,Sch}}-1\approx-0.060\pm 0.065. \tag{18}\] In turn, this constraint can be transformed into the shadow's size as at \[4.54\lesssim r_{sh}/M\lesssim 5.22\, \tag{19}\] at \(1\sigma\) and \[4.21\lesssim r_{sh}/M\lesssim 5.66\, \tag{20}\] at \(2\sigma\). 
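To make Eqs. (16)-(17) and the calibrated bound (20) concrete, the following minimal sketch (our own helper functions, assuming NumPy and SciPy; this is not the GRAVITYp code used later in the paper) locates the photon sphere of a metric specified by \(A(r)\) and \(C(r)\), evaluates the shadow radius, and checks it against the \(2\sigma\) window. For the Schwarzschild benchmark it should return \(r_{ps}=3M\) and \(r_{sh}=b_{c}=3\sqrt{3}M\approx 5.196M\), which lies inside that window.

```python
import numpy as np
from scipy.optimize import brentq

M = 1.0  # work in units of the asymptotic mass

def photon_sphere(A, C, r_lo=2.05, r_hi=20.0):
    """Solve Eq. (16): C'(r) A(r) - C(r) A'(r) = 0 by bisection.
    The bracket [r_lo, r_hi] is problem-specific (here: just above the horizon)."""
    def f(r, eps=1e-6):
        Ap = (A(r + eps) - A(r - eps)) / (2 * eps)   # centred finite differences
        Cp = (C(r + eps) - C(r - eps)) / (2 * eps)
        return Cp * A(r) - C(r) * Ap
    return brentq(f, r_lo, r_hi)

def shadow_radius(A, C):
    """Eq. (17): r_sh = sqrt(C/A) evaluated at the photon sphere (equal to b_c)."""
    rps = photon_sphere(A, C)
    return rps, np.sqrt(C(rps) / A(rps))

# Schwarzschild benchmark: A = 1 - 2M/r, C = r^2
A = lambda r: 1.0 - 2.0 * M / r
C = lambda r: r**2
rps, rsh = shadow_radius(A, C)
print(f"r_ps = {rps:.4f} M, r_sh = b_c = {rsh:.4f} M")    # ~3.0000, ~5.1962
print("inside 2-sigma window:", 4.21 <= rsh / M <= 5.66)  # Eq. (20)
```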
As one can see from these inferred constraints, bounds on the shadow's size are much more generous in alternative spherically symmetric geometries that reduce it, which is actually the majority of the models considered in this work, as we shall see later. For the sake of this work, and to enhance any potential differences in their cast images, we shall take as our reference value the \(2\sigma\) bound of Eq. (20) in order to constrain the parameter space of each geometry. Note also that the addition of rotation would modify the shadow's size, though this is assumed to be small as happens in the Kerr solution [31]: a more complete analysis of this problem should however take this ingredient into account. ### Thin-disk emission model Under the hypothesis that the universality of the Kerr (Schwarzschild) solution is replaced by the universality of an alternative metric, the image cast from any such object should be compatible with any scenario for the accretion flow. We shall thus use this idea to employ the bounds on the size above to constrain the space of parameters of alternative spherically symmetric geometries to subsequently enact their predictions in the opposite end of the geometry of the accretion flow, namely, that in which the disk is infinitesimally thin. In such a case, the shadow edge does not coincide with the critical curve [10] but instead the brightness depression can be strongly reduced (nonetheless there is a lower limit for such a size which only depends on the background geometry and is dubbed the inner shadow [11]). Note that such a shadow's size in the image is still directly unobservable, but the photon rings are not, which are our main concern here. The proper treatment of the imaging of a black hole surrounded by its accretion disk requires the use of GRMHD simulations of the plasma making up the disk under a pool of assumptions for the particles' velocities and temperature, the opacity and geometrical shape of the disk, its magnetic properties, and so on, see e.g. [27]. However, it is possible to develop semi-analytic approximations to this problem capable of capturing the most influential features of the disk contributing to the image while being in agreement with the main outputs of these GRMHD simulations. This yields a simplified analytical and numerical treatment of the photon ring features of the image, this way allowing for a more efficient comparison of different geometries of the shadowcaster. For the sake of this work we focus on the Gralla-Lupsasca-Marrone (GLM) models, which are based on Johnson's \(S_{U}\) (unbounded) distribution, and read as [12] \[I(r;\gamma,\mu,\sigma)=\frac{\exp\left(-\frac{1}{2}(\gamma+\mathrm{arcsinh}\left(\frac{r-\mu}{\sigma}\right))^{2}\right)}{\sqrt{(r-\mu)^{2}+\sigma^{2}}}. \tag{21}\] These GLM models assume a monochromatic emission (in the frame of the disk), and contain three freely-adjustable parameters which control the features of the disk's intensity: \(\gamma\) is related to its rate of growth from asymptotic infinity to the peak, \(\mu\) shifts the profile to a desired location, while \(\sigma\) sets its dilation.
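As a quick illustration of the profile (21), the short sketch below (our own helper, assuming NumPy; not part of the paper's pipeline) evaluates the GLM intensity for the three parameter choices quoted below in Eqs. (23)-(25).

```python
import numpy as np

def glm_intensity(r, gamma, mu, sigma):
    """GLM emission profile of Eq. (21) (a Johnson S_U-shaped curve)."""
    x = (r - mu) / sigma
    return np.exp(-0.5 * (gamma + np.arcsinh(x)) ** 2) / np.sqrt((r - mu) ** 2 + sigma ** 2)

M = 1.0
# Parameter choices of Eqs. (23)-(25) below: GLM3, GLM1 and GLM2
profiles = {
    "GLM3": dict(gamma=-2.0, mu=17.0 * M / 3.0, sigma=M / 4.0),
    "GLM1": dict(gamma=-1.5, mu=0.0, sigma=M / 2.0),
    "GLM2": dict(gamma=0.0, mu=0.0, sigma=M / 2.0),
}

r = np.linspace(1.0, 12.0, 4000)   # radial grid outside the would-be horizon
for name, pars in profiles.items():
    I = glm_intensity(r, **pars)
    print(f"{name}: maximum of the profile on this grid at r = {r[np.argmax(I)]:.2f} M")
# GLM3 peaks near r ~ 6M; GLM1/GLM2 are largest at the inner edge of this grid,
# i.e. they keep growing inwards, which is why they are normalized at the horizon
# in the text below.
```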
For an optically thin flow with a purely equatorial emission, every emitted photon suffers gravitational redshift on its run-away from the black hole. In the absence of absorption, this can be computed according to Liouville's theorem, which demands the conservation of the flux \(I_{\nu_{\mathrm{o}}}/\nu_{\mathrm{o}}^{3}=I_{\nu_{\mathrm{e}}}/\nu_{\mathrm{e}}^{3}\), where \(\nu_{\mathrm{o}}\) and \(\nu_{\mathrm{e}}\) refer to the frequency in the observer's and emitter's frames, respectively. Using the monochromatic character of \(I_{\nu_{\mathrm{e}}}\equiv I(r)\), and the fact that in the spherically symmetric geometry (1) one has the relation \(\nu_{\mathrm{o}}=A^{1/2}(r)\nu_{\mathrm{e}}\), we can integrate the above relation over all observed frequencies, \(I_{ob}=\int d\nu_{\mathrm{o}}I_{\nu_{\mathrm{o}}}\), to find the result1 Footnote 1: Note that in the case in which we are using effective geodesics, we need to add a factor \(H^{2}(r)\) to this expression in order to account for the transformation of line element (10). Absorption and other potential transport effects emerging from nonlinearities in the electromagnetic Lagrangian will be studied elsewhere. \[I_{ob}=\sum_{n=0}^{2}A^{2}(r)I(r)\, \tag{22}\] where in this expression we have introduced the contributions up to the second photon ring, \(n=2\), our target in this work. For the sake of this work we shall suitably adapt to the non-rotating case the three original models included in [12], which correspond to the following choices \[\mathrm{GLM3}\ :\ \gamma=-2\,,\qquad\mu=\frac{17M}{3}\,,\qquad\sigma=\frac{M}{4}\, \tag{23}\] \[\mathrm{GLM1}\ :\ \gamma=-\frac{3}{2}\,,\quad\mu=0\,,\qquad\sigma=\frac{M}{2}\, \tag{24}\] \[\mathrm{GLM2}\ :\ \gamma=0\,,\qquad\mu=0\,,\qquad\sigma=\frac{M}{2}. \tag{25}\] These profiles are depicted in Fig. 1. Figure 1: The GLM intensity profiles (21) for the choices (23), (24) and (25), respectively. GLM3 has a peak brightness located slightly above the corresponding innermost stable circular orbit of a Schwarzschild black hole (i.e. \(r\gtrsim 6M\)), while GLM1/GLM2 go all the way down to \(r=0\) (note that the black hole horizon will appear well before getting there) with different shapes. While the latter two models are thus more suitable to describe the flow of the plasma in orbit around the heart of M87 and Sgr A\({}^{\star}\), the inner edge of the direct emission of the disk in such cases will be smaller than the one of the \(n=2\) ring: consequently, photon rings will be stacked on top of the direct emission, troubling their direct visualization. We thus employ GLM3 to complement the analysis, since in such a case the inner edge of the direct emission is truncated at a larger distance to allow for such a visualization. In order to perform our simulations we use our own Geodesic Rays and Visualization of IntensiTY profiles (GRAVITYp) code (other codes are available in the literature, such as GYOTO [32] or AART [33]), allowing for ray-tracing and illumination of any spherically symmetric metric, and we use the three intensity profiles above but normalized to their maximum values, which is the peak value for the GLM3 model, but the value at the horizon for the GLM1/GLM2 models, on a case-by-case basis for each metric. Such geometries are discussed next. ## III Choice of spherically symmetric space-times In this work we consider 16 alternative spherically symmetric space-times extracted out of the work [25]. In what follows we explain the motivation behind such a pick of space-times and the choice of parameters for the sake of the generation of images. Regarding such parameters, they are pushed as far as possible to be compatible with the shadow's radius at \(2\sigma\), as given by Eq.
(20). In doing that, not every metric proposal saturates the EHT bound(s). This is due to the fact that either a) the model's parameter must be bounded below the EHT constraints in order for an event horizon to be present, or b) the parameter can be pushed arbitrarily far without ever becoming incompatible with the EHT bound. Furthermore, in some cases the EHT bound is weaker than other bounds found via analysis of several astrophysical phenomena. For the sake of our analysis, the constraints of Vagnozzi et al. on the viable parameter space will be refined to more finely match the shadow's size limits: this is so because at such large deviations from the Schwarzschild prediction the features of the photon rings become more sensitive to small modifications in the model's parameters. On the other hand, the spherical symmetry of the system strongly simplifies the problem as compared to the realistic rotating case, requiring less sophisticated treatment of the geodesic curves and, by extension, less computing power. Since we are dealing with spherically symmetric space-times we start our considerations from the Schwarzschild black hole \[A(r)=1-\frac{2M}{r}. \tag{26}\] Having a single parameter, the Schwarzschild black hole (BH) predicts a unique event horizon, \(r_{h}=2M\), a unique critical impact parameter, \(b_{c}=3\sqrt{3}M\) (hence a single shadow's radius), and a unique photon sphere radius, \(r_{ps}=3M\). This way, it is the benchmark every other metric is tested against. Furthermore, in order to interpret the parameter \(M\) as the mass as seen from an asymptotic observer, our analysis of spherically symmetric space-times will only consider those metrics whose behavior at large distances (assuming asymptotic flatness) is dominated by the (Schwarzschild) mass term. This will allow us to compare the predictions of all alternative models on as equal a footing as possible. It is important to stress that we consider this pool of geometries in a (mostly) theory-agnostic approach, namely, disregarding the theory combining gravitational (i.e. either GR or modified gravity) plus matter fields they come from, and some potential drawbacks such theories and their corresponding geometries may have2. The latter come mostly from the violation of the energy conditions and potential instabilities which may render the configurations non-viable, but for the sake of this work we are only interested in the comparison between their cast images. Note, however, that since some of these geometries can be framed within a modified gravity perspective, some of these drawbacks of their GR-formulation (most notably the violation of the energy conditions for "regular" geometries) may be potentially lifted. Footnote 2: Nonetheless, we shall not be oblivious to the fact that three of the geometries considered here have been identified to be derived from reasonable enough NED theories; hence the development of the framework of effective geodesics in the previous section. ### Geometries and shadow constraints 1. **Reissner-Nordström (RN) BH**. The canonical modification of the Schwarzschild geometry is to add a charge term to form the Reissner-Nordström solution \[A(r)=1-\frac{2M}{r}+\frac{q_{e}^{2}}{r^{2}}\.\] (27) A critical curve is present in this model if the electric charge fulfils the bound \(q_{e}^{2}\leq(9/8)M^{2}\). Compatibility with the \(2\sigma\) shadow's radius (20) allows us to push the electric charge to the value \(q_{e}=0.939M\).
Since this is below the bound \(q_{e}^{2}\leq M^{2}\) marking the transition from charged black holes to over-charged (naked singularity) solutions, an event horizon will be present in this case. Note, however, that such a value is well above reasonable estimates on how much charged a black hole may be from astrophysical considerations [34], though we shall disregard such a fact in order to have an overall view on how images from charged space-times look like before engaging in other samples. 2. **Euler-Heisenberg (EH) NED BH**. Our first example of a NED-supported geometry is a natural generalization of the RN geometry via the function \[\mathcal{L}(F)=-F+4\mu F^{2}\,\] (28) and its spherically symmetric geometry (interpreted as supported by a magnetic monopole with charge \(q_{m}\)) is characterized by the function [35] \[A=1-\frac{2M}{r}+\frac{q_{m}^{2}}{r^{2}}-\frac{2\mu q_{m}^{4}}{5r^{6}}\.\] (29) Images of these configurations were discussed in [36; 37]. Note that here \(\mu\) is a constant which can be related to the effective series expansions of Quantum Electrodynamics the EH model is derived from [38], but Vagnozzi et. al. take it as a free parameter and fixes it to \(\mu=0.3\). For such a value they report the constraint \(q_{m}\lesssim 0.8M\) though we find we can push it a bit harder up to \(q_{m}=0.88M\) for our generation of images. 3. **Bardeen's regular BH**. Bardeen's proposal [39] is to remove curvature singularities at the center of black holes by replacing the point-like region of the Schwarzschild/RN black hole by a de Sitter core [40]; this is achieved via a magnetically charged solution defined in terms of the metric function \[A(r)=1-\frac{2Mr^{2}}{(r^{2}+q_{m}^{2})^{3/2}}\,\] (30) with \(q_{m}\leq\sqrt{16/27}M\approx 0.77M\) in order to describe a black hole. Vagnozzi et al. report that all values within this range are compatible with \(2\sigma\) shadow's radius. It is known that Bardeen's space-time can be obtained as a solution of the Einstein field equations coupled to an NED [41]. However, such a function has a bizarre shape that does not lead to functions \(H\) and \(h\) which smoothly recover the background geodesics in the \(q_{m}\to 0\) limit, and hence we follow the same route as Vagnozzi et. al. and consider images generated within background geodesics. 4. **Hayward's regular BH**. Hayward's model is based on similar premises as that of Bardeen's one, and furthermore it has been widely studied as a toy-model to simulate gravitational collapse and development of de Sitter cores. Its metric function reads [42] \[A(r)=1-\frac{2Mr^{2}}{r^{3}+2l^{2}M}\,\] (31) with the same bound as Bardeen, \(l\lesssim\sqrt{16/27}M\), to describe a black hole. This is another instance of a space-time geometry that can also be obtained as a solution of the Einstein field equations coupled with NED, but whose \(H\) and \(h\) functions do not smoothly recover the background geodesics in the \(l\to 0\) limit. Likewise in the Bardeen model, Hayward's solution is compatible with the \(2\sigma\) bounds at every \(l\), so we again push the parameter \(l\) of the model until nearly saturating the critical bound to describe a black hole. 5. **Frolov BH**. Frolov's choice [43] is similar in spirit to both the Bardeen and Hayward models, but it contains two parameters. The metric function is given by \[A(r)=1-\frac{(2Mr-q_{e}^{2})r^{2}}{r^{4}+(2Mr+q_{e}^{2})l^{2}}\,\] (32) where \(0<q_{e}\leq 1\) is seen as an electric charge, and again \(l\lesssim\sqrt{16/27}M\). In Vagnozzi et. al. 
[25] they propose to fix \(l=0.3\), which results in a constraint \(q_{e}\lesssim 0.9M\). However, at the value saturating this bound Frolov's solution does not describe a black hole, but instead a naked object by a small margin; for instance, a value of \(q_{e}=0.875M\) describes a black hole instead, but only a slightly larger shadow radius. Configurations without event horizons may produce additional photon ring contributions due to light rays that travel above (but near) the critical curve, and are reflected back due to the presence of local maxima or an infinite potential slope; several such examples have been worked out recently in the literature, see e.g. [44; 45; 46]. For the sake of our work here, their analysis would muddy the comparison of alternative spherically symmetric geometries on equal-footing since we are only interested on the \(n=2\) ring and not in higher-order rings, so we opt for considering Frolov black holes with \(q_{e}=0.875\). 6. **Kazakov-Solodukhin (KS) regular BH**. It arises in a string-inspired model and is given by [47] \[A(r)=-\frac{2M}{r}+\frac{\sqrt{r^{2}-l^{2}}}{r}\,\] (33) Despite its shape it actually reduces to the Schwarzschild metric at large distances, \(r\gg l\), so it belongs to our acceptable class of models. The single parameter of the model is required to be positive \(l>0\) in order to avoid the central singularity. Vagnozzi et. al. report that \(2\sigma\) observations require that \(l\lesssim M\), but we need to decrease it down to \(l=0.942M\) to saturate the shadow's bound. 7. **Sen BH**. This proposal [48] belongs to dilaton gravity and also includes a magnetic charge contribution, now within the mass term as (in the non-rotating limit) \[A(r)=1-\frac{2M}{r+q_{m}^{2}/M}\,\] (34) where \(q_{m}\lesssim 0.75M\) at \(2\sigma\) but X-ray reflection spectroscopy yield a slightly stronger constraint \(q_{m}\lesssim 0.6M\)[49], so we take here the latter bound. 8. **Einstein-Maxwell-Dilation (EMD) BH**. A model in which an additional scalar field is included - the dilaton - within GR coupled to a Maxwell field (EMD gravity) yields a line element given by [50] \[A(r)=1-\frac{2M}{r}\left(\sqrt{1+\frac{q_{e}^{4}}{4M^{2}r^{2}}}-\frac{q_{e}^{2}}{ 2Mr}\right)\,\] (35) with \(q_{e}\lesssim M\) in Vagnozzi et. al. which we slightly refine as \(q_{e}=0.995M\). 9. **Dark matter (DM)-surrounded BH**. A model incorporating a surrounding dark matter fluid via a correcting term to the Schwarzschild solution was proposed in [51] as given by the line element \[A(r)=1-\frac{2M}{r}+\frac{k}{r}\log\left(\frac{r}{|k|}\right)\,\] (36) Vagnozzi et. al. report \(k\lesssim 0.15M\) but we find we just need that constant to take the value \(k=0.128M\) to saturate the bound on the shadow's size. 10. **Simpson-Visser (SV) black bounce BH**. Our first example of a non-trivial \(C(r)\) function is provided by the so-called black bounce, which denotes a metric originally introduced by Simpson and Visser [52], and whose philosophy is to replace the radial coordinate of the Schwarzschild solution by a radial function implementing a bounce, the latter interpreted as the throat of a wormhole. 
In order to do it so, Simpson and Visser follow the prescription of Ellis [53] from the shift of the radial coordinate, so that the metric functions read as \[A(r)=1-\frac{2M}{(r^{2}+a^{2})^{1/2}}\,,\qquad C(r)=r^{2}+a^{2}\.\] (37) Because of the way it is built, this model has the same critical impact parameter and photon sphere radius as its seed metric - the Schwarzschild black hole - for every \(a\), so it is not constrained by the EHT results at all. For the sake of our images (a detailed analysis was made by some of us in [54]) we choose to remain within the sub-class of these configurations that have an horizon (corresponding to \(0<a\leq 1\), so we take the value \(a=0.5\)). 11. **Loop Quantum Gravity (LQG) BH.** A solution found within the context of Loop Quantum Gravity takes the form [55] \[A(r)=\frac{(r-r_{-})(r-r_{+})(r+r_{\star})^{2}}{r^{4}}\,\] (38) with the definitions \(r_{+}=r_{S}(1+P)^{2},r_{-}=r_{S}P^{2}/(1+P)^{2},r_{\star}=\sqrt{r_{+}r_{-}}\) and \(P\) is a parameter of the theory. Vagnozzi et. al report the constraint \(P\lesssim 0.08M\) for compatibility with \(2\sigma\) shadow; we refine such a constraint as \(P=0.082M\). 12. **Conformal scalar model (ConfSca) BH**. This is an example of a family of configurations which look like the RN one but with a minus sign in front of the charge term, i.e. [56] \[A(r)=1-\frac{r_{S}}{r}-\frac{s}{r^{2}}\,\] (39) so we can take it as a benchmark for this kind of metrics. While Vagnozzi et. al. (note that we have reversed the sign for \(s\) as compared to them) report that \(s\lesssim 0.4M\), we find we can push it up to \(s=0.45M\) for compatibility with \(2\sigma\) shadow's size. 13. **Janis-Newman-Winicour (JNW) naked singularity**. For completeness, and for the sake of comparison with black hole images given its historic relevance, we consider the naked singularity of the Janis-Newman-Winicour solution, supported by a massless scalar field, and given by the function [57] \[A(r)=\left[1-\frac{2M}{r(1-\nu)}\right]^{1-\nu};C(r)=r^{2}\left[1-\frac{2M}{r( 1-\nu)}\right]^{\nu}\,\] (40) where \(\nu\) is a parameter related to the scalar charge of the field supporting it. Vagnozzi et. al. report the constraint \(\nu\lesssim 0.45M\), though we find we can push it up to \(\nu=0.4835M\). The absence of a horizon means that light rays above the maximum of the potential will find no obstacle to reach the center of the solution, thus posing a different scenario than black hole spacetimes. 14. **Bronnikov's regular NED BH**. Bronnikov's model is another example of a regular magnetic black hole solution given by the metric function [58] \[A(r)=1-\frac{2M}{r}\left(1-\tanh\left[\frac{q_{m}^{2}}{2Mr}\right]\right)\,\] (41) and supported by a well-behaved NED of the form \[\mathcal{L}(F)=4F\cosh^{-2}\left[a(2F)^{1/4}\right]\,\] (42) where the constant \(a\) is related to the magnetic charge via the relation \(a=q_{m}^{3/2}/(2M)\) in order to remove the central singularity. Vagnozzi et. al. report the constraint \(q_{m}\lesssim M\) at \(2\sigma\); however we find a much more restricted range, saturated at \(q_{m}=0.905M\), which is the value we take here. 15. **The Ghosh-Kumar (GK) BH**. This is a simple modification of the Schwarzschild geometry via the function [59; 60; 61; 62] \[A(r)=1-\frac{r_{S}}{\sqrt{r^{2}+q_{m}^{2}}}\,\] (43) and thus, close in spirit to the black bounce geometries such as the SV one above. 
As in the Bardeen and Hayward solutions, despite the fact that it can be generated within a NED, the corresponding function is of bizarre shape and so they are the corresponding effective geodesics functions. This way, we opt for considering the usual background geodesics as in Vagnozzi et. al., and push a little bit their bound of \(q_{m}\lesssim 1.6M\) up to \(q_{m}=1.63M\) for the generation of our images. 16. **The Ghosh-Culetu-Simpson-Visser (GCSV) NED regular BH**. It corresponds to the function [60; 61; 62] \[A(r)=1-\frac{r_{S}}{r}e^{-q_{m}^{2}/r_{S}}. \tag{44}\] In Vagnozzi et. al. they report that \(q_{m}\) can be pushed up to \(|q_{m}|\lesssim M\) using the background geodesics. We instead opt for considering the effective ones given the fact that the model is supported by a NED with Lagrangian density [63] \[L(F)=F\exp\left[-\frac{q_{m}}{r_{S}}(2q_{m}^{2}F)^{1/4}\right]\, \tag{45}\] whose associated effective functions \(H\) and \(h\) turn out to be well behaved. Furthermore, this analysis complements the one carried out in Ref. [46] about the multi-ring structure of the sub-family of configurations without event horizons. The effective potential for this set of sixteen spherically symmetric geometries (plus Schwarzschild) is depicted in Fig. 2 for models with background geodesics and effective ones, respectively. Some comments are in order. The fact that the JNW geometry lacks horizons makes its potential qualitatively deviate from the others, not being defined everywhere. As for the potentials of the effective geodesics, they show weird behaviors in the innermost region; however, being covered by a horizon, such a part of the potential plays no role in the generation of images. This would not be so in those cases in which the EHT bound is not saturated and further pushing the space of parameters of the geometries would make the horizon go away. In such a case the internal shape of the potential _does_ matter in the generation of a multi-ring structure provided that it has additional minima/maxima or an infinite slope at the center; however such a feature will not be present in our images. ### Some comments on the discarded models There are many other models whose constraints from the shadow's radius are reported within Vagnozzi et. al. [25] and which are not considered in this work. Here we briefly provide the reasons why (besides practical reasons of keeping the length of the paper within reasonable limits). First, as mentioned before, we do not consider models which are not asymptotically flat, which would prevent the identification of the constant \(M\) as the asymptotic mass of the space-time and thus the generation of images of the corresponding objects on an equal-footing. This leaves outside of our analysis models such as \(f(R)\) [R], the DS wormhole [K], Rindler [AH], or the topological defect [AJ]. Second, we do not consider space-times that can be rewritten (via e.g. a simple redefinition of constants) in an usual RN-like form, since the same constraints placed upon the RN solution can be converted into constraints on each theory's parameters and this way the photon rings features are the same. This includes as examples BHCSH [Q1] if \(s>0\), Horneski [S1] if \(p<0\), MOG [T], braneworlds [U], GUPa [AL A] and GUPb [AL B], or the second non-commutative gravity model [AM B]. 
Third, we exclude those models which have too tight constraints on their space of parameters to significantly alter photon ring features, or are directly ruled out: this includes the SV WH [I2], the Morris-Thorne WH [J], the null NS [O], Aether models [Z], 4D Gauss-Bonnet gravity [AA], or asymptotically-safe gravity [AB]. We have also avoided consideration of glued solutions containing potential discontinuities, or others demanding excessive computational times. Figure 2: The effective potential for spherically symmetric geometries with background geodesics (top) and for effective ones (bottom). Only the outermost part of the potential with \(V>0\) is relevant for generation of images, since zeros in \(V(r)\) mean presence of horizons. ## IV Results and physical discussion ### Lyapunov exponents and extinction rates In Table 1 we report our findings on the main geometrical and image features of the alternative spherically symmetric space-times considered in the previous section, organized according to decreasing values of the Lyapunov exponent. The latter is a measure of the instability scale of nearly bound orbits, namely, those which hover very close to the critical curve, \(r\approx r_{m}+\delta r_{0}\) where \(\delta r_{0}\ll r_{m}\). This way, after a number of half-orbits \(n\), the particle will be located at \[\delta r_{n}\approx e^{\gamma n}\delta r_{0}\, \tag{46}\] (for a detailed account of such orbits, see [64]). The Lyapunov exponent \(\gamma\) is the number we are interested in here, since it controls the flux of intensity among successive images of the disk, that is [16] \[\frac{I_{n+1}}{I_{n}}\sim e^{-\gamma}\ \ \mbox{for}\ \ n\gg 1. \tag{47}\] It turns out that such a number is a universal quantifier of a given geometry in the limit \(n\to\infty\), in which it loses its entire dependence on the accretion disk modelling. Since in this work we are interested in the \(n=2\) ring, which offers a good compromise between a weak enough dependence on the disk's emission modelling and realistic/optimistic interferometric detection in the future, we shall approximate it by the \(I_{2}/I_{1}\) flux. In this regard, we are taking advantage of the fact that the sequence of photon rings quickly approximates the (gravitationally lensed) critical curve, the latter corresponding to the limit \(n\to\infty\). Indeed, in the Schwarzschild geometry, for \(n=2\) the corresponding Lyapunov exponent approximates the exact value \(\gamma=\pi\) by an error of \(\sim 0.3\%\), far below other observational uncertainties in this problem. We find its value for every geometry by tracking the relative locations of the \(n=1\) and \(n=2\) trajectories of the light rays in their winding around the black hole. Following this approach we report the values of such a Lyapunov exponent in Table 1, where we observe that eleven geometries decrease its value, four increase it, and one leaves it unchanged. By inspection of this Table we see that any correlation between such an index and the compactness (i.e. the mass-horizon radius ratio) is weak.
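In the paper \(\gamma\) is extracted by tracking the \(n=1\) and \(n=2\) light rays. As an independent cross-check (our own sketch, assuming NumPy/SciPy; this is not the procedure used for Table 1), one can expand \((dr/d\phi)^{2}=f(r)\equiv\frac{C^{2}}{AB}\left(\frac{1}{b_{c}^{2}}-\frac{A}{C}\right)\) around its double root at the photon sphere, which gives a growth of \(\delta r\) by \(e^{\gamma}\) per half orbit with \(\gamma=\pi\sqrt{f''(r_{ps})/2}\). For Schwarzschild this returns \(\gamma=\pi\approx 3.1416\), in line with the value \(3.150\) quoted in Table 1 from the \(n=2\) trajectories, and \(e^{\gamma}\) estimates the bracketed theoretical extinction rate \(I_{1}/I_{2}\).

```python
import numpy as np
from scipy.optimize import brentq

def d1(F, r, eps=1e-5):
    return (F(r + eps) - F(r - eps)) / (2 * eps)

def d2(F, r, eps=1e-4):
    return (F(r + eps) - 2 * F(r) + F(r - eps)) / eps**2

def lyapunov(A, B, C, r_lo=2.05, r_hi=20.0):
    """Half-orbit Lyapunov exponent from the curvature of the radial
    'potential' f(r) = (C^2/(A B)) (1/b_c^2 - A/C) at the photon sphere."""
    # photon sphere from Eq. (16) and critical impact parameter from Eq. (17)
    g = lambda r: d1(C, r) * A(r) - C(r) * d1(A, r)
    rps = brentq(g, r_lo, r_hi)
    bc2 = C(rps) / A(rps)
    f = lambda r: C(r)**2 / (A(r) * B(r)) * (1.0 / bc2 - A(r) / C(r))
    return rps, np.sqrt(bc2), np.pi * np.sqrt(0.5 * d2(f, rps))

M = 1.0
A = lambda r: 1.0 - 2.0 * M / r          # Schwarzschild benchmark
B = lambda r: 1.0 / A(r)
C = lambda r: r**2

rps, bc, gam = lyapunov(A, B, C)
print(f"r_ps = {rps:.4f} M, b_c = {bc:.4f} M, gamma = {gam:.4f} (exact value: pi)")
print(f"theoretical extinction rate I1/I2 ~ exp(gamma) = {np.exp(gam):.2f}")
```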
Indeed, while most alternative geometries are more compact than their Schwarzschild counterpart (and in such a case their shadow's radius is also smaller), and lower horizon radii tend to decrease the corresponding value of the Lyapunov exponent, this trend is weak and furthermore contaminated by the three geometries supported by effective geodesics (note that models that do not saturate the EHT bound, and the JNW by its lack of a horizon, must also be left aside from this comparison for obvious reasons). As for the photon sphere radius, no correlation with the Lyapunov exponent is found. In any case, as already pointed out by Vagnozzi et al., most geometries (indeed in our case all except three) decrease the critical impact parameter (the shadow's radius in the EHT interpretation), where the corresponding constraints leave a wider margin for modifications with respect to the predictions of the Schwarzschild geometry. \begin{table} \begin{tabular}{|c|c||c||c|c|c|c|c|} \hline Space-time & \(r_{h}\) & \(b_{ps}\) & \(r_{ps}\) & \(\mathbf{Lyapunov}\) [\(I_{1}/I_{2}\)] & \((I_{1}/I_{2})_{\rm GLM3}\) & \((I_{1}/I_{2})_{\rm GLM1}\) & \((I_{1}/I_{2})_{\rm GLM2}\) \\ \hline LQG & 1.708 & 4.216 & 2.521 & 3.372 [29.150] & 35.59 & 31.01 & 29.53 \\ \hline KS & 2.214 & 5.559 & 3.279 & 3.288 [26.809] & 30.95 & 28.24 & 26.93 \\ \hline ConfSca & 2.204 & 5.556 & 3.274 & 3.278 [26.530] & 30.66 & 27.96 & 26.65 \\ \hline DM & 1.671 & 4.212 & 2.493 & 3.268 [26.259] & 32.28 & 27.95 & 26.56 \\ \hline \hline Schwarzschild & 2 & 3\(\sqrt{3}\) & 3 & 3.150 [23.352] & 27.83 & 24.74 & 23.45 \\ \hline \hline SV & 2 & 3\(\sqrt{3}\) & 3 & 3.107 [22.367] & 26.79 & 23.69 & 22.41 \\ \hline JNW NS & 0 & 4.213 & 1.453 & 3.096 [22.128] & 21.02 & 20.52 & 19.95 \\ \hline Sen & 1.64 & 4.558 & 2.514 & 2.887 [17.946] & 22.96 & 19.27 & 18.02 \\ \hline GCSV (e) & 1.560 & 4.217 & 2.370 & 2.693 [14.782] & 19.88 & 15.73 & 14.16 \\ \hline EMD & 1.421 & 4.211 & 2.234 & 2.665 [14.381] & 19.47 & 15.62 & 14.43 \\ \hline Bronnikov (e) & 1.449 & 4.213 & 2.253 & 2.587 [13.292] & 18.34 & 14.51 & 13.15 \\ \hline Hayward & 1.337 & 4.916 & 2.652 & 2.542 [12.708] & 17.45 & 14.02 & 12.77 \\ \hline RN & 1.343 & 4.209 & 2.197 & 2.527 [12.524] & 17.26 & 13.78 & 12.61 \\ \hline EH (e) & 1.490 & 4.212 & 2.197 & 2.418 [11.229] & 18.41 & 12.20 & 10.88 \\ \hline Frolov & 1.179 & 4.283 & 2.216 & 2.403 [11.066] & 15.58 & 12.41 & 11.17 \\ \hline Bardeen & 1.093 & 4.524 & 2.301 & 2.253 [9.516] & 13.27 & 10.75 & 9.56 \\ \hline GK & 1.158 & 4.216 & 2.038 & 2.100 [8.166] & 11.74 & 9.21 & 8.21 \\ \hline \end{tabular} \end{table} Table 1: The alternative spherically symmetric geometries considered in this work (see the main text for abbreviations and the models' parameters chosen) ordered in decreasing values of their Lyapunov exponent, the latter computed for the \(n=2\) trajectory (see the corresponding discussion in the text). Here we list those quantities relevant for the generation of images (in units of \(M\); (e) denotes quantities computed in the effective propagation geometry) as well as those relevant to characterize them; \(r_{h}\): horizon radius; \(b_{ps}\): critical impact parameter; \(r_{ps}\): photon sphere radius; Lyapunov exponent of nearly bound orbits and its associated (theoretical) luminosity extinction rate \(I_{1}/I_{2}\) [in brackets]; \((I_{1}/I_{2})_{\rm GLM}\): the (observable) extinction rate (sub-labels denote the GLM emission profile). Digit precision limited to three decimals for theoretical quantities and to two for observational ones. We single out using double rows/columns the Schwarzschild solution as the benchmark metric, and the critical impact parameter (the shadow's radius in the EHT interpretation) as the inferred quantity by the EHT, acting as the constraint the parameter space of all these geometries is subjected to. Keeping with the discussion of the Lyapunov exponent and its relation to the exponential decay of the luminosity of the photon rings, one should note that this is a theoretical expectation based on the assumption that every photon trajectory will cross emission regions with similar properties. This is certainly not the case, since the profile is sensitive to the radius, and hence to the impact parameter (furthermore, we are not taking into account any source variability on the typical timescale of an orbit). In other words, the Lyapunov exponent is not a direct observable, but one would expect deviations in the actual intensity fluxes fed by the pick of the emission profile; to what extent such an observable deviates from the theoretical Lyapunov number is a question of great interest in connecting theoretical properties with actual observables. Indeed, this is what we find when computing the (inverse) flux ratio between the \(n=1\) and \(n=2\) rings (which shall be referred to as the _extinction rate_) for the GLM models, as reported in Table 1, and whose images (disregarding the Schwarzschild black hole itself) we discuss next. We point out that in generating such images the observed luminosity is normalized to its total value for every geometry and GLM model via Eq.(22), in order to consider as similar settings for each case as possible. ### GLM3 model We first consider the GLM3 model, which allows us to clearly isolate the \(n=1\) and \(n=2\) rings, as depicted in Fig. 3. There we provide a zoom in of the image around the photon rings for each spherically symmetric geometry, ordered according to the decreasing values of their Lyapunov exponent, and restricting the relevant impact parameter space to (mostly) remove the direct emission from the figures. It is transparent that there are significant differences regarding several aspects of these rings: their locations in the impact parameter space, their widths, their luminosities, and finally their distance to one another. Furthermore, despite the fact that both the effective NED geometries and the naked JNW solution trouble the comparison, we can appreciate a trend in the evolution of these photon rings, most notably in the width separating them, which tends to increase as the Lyapunov exponent decreases. In particular, the \(n=1\) photon ring has a non-negligible thickness preventing the identification of a well-defined diameter, while the \(n=2\) one does appear as a sharp feature in all images. All these aspects allow us to distinguish spherically symmetric geometries from each other for the same emission model, something in agreement with our initial expectations that the features of photon rings depend less on the emission properties as we get to larger values of \(n\). In this case, the extinction rate clearly correlates with the Lyapunov index: save for a few exceptions, lower values of the latter lead to lower extinction rates; indeed the Lyapunov exponent systematically underestimates the extinction rate as compared to the GLM3 one.
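The systematic underestimate just mentioned can be read off directly from Table 1. The small script below (our own helper; the numbers are transcribed from a few rows of the table) compares \(e^{\gamma}\), which reproduces the bracketed theoretical rates, with the GLM3 extinction rate, showing the latter sitting consistently above the former for these black-hole entries.

```python
import numpy as np

# (Lyapunov exponent, GLM3 extinction rate I1/I2) for a few rows of Table 1
table1 = {
    "LQG":           (3.372, 35.59),
    "Schwarzschild": (3.150, 27.83),
    "RN":            (2.527, 17.26),
    "Bardeen":       (2.253, 13.27),
    "GK":            (2.100, 11.74),
}

for name, (gamma, glm3) in table1.items():
    theory = np.exp(gamma)                 # the bracketed values in Table 1
    print(f"{name:14s} exp(gamma) = {theory:6.2f}   GLM3 = {glm3:6.2f}   "
          f"ratio = {glm3 / theory:.2f}")
```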
Another comment is related to the special features of the naked JNW geometry, as given by a much wider \(n=1\) ring and a closer distance to the direct emission, clearly appearing in the top right end of its figure. This goes along with our previous warning that horizonless compact objects have special features regarding the contributions to the luminosity of their rings depending on the shape of their effective potential, which in some cases may (even if partially) invalidate the assumption of exponential suppression of the luminosity of successive rings, troubling their comparison with usual black hole space-times.

### GLM1/GLM2 models

The GLM3 model is rather unnatural given the fact that we place the emission at a truncated and arbitrary region (but the same one) for every spherically symmetric geometry, and it is designed only to probe the structure of the photon rings without the "contamination" of the direct emission. As opposed to this, in the GLM1/GLM2 models the accretion flow goes all the way down to the event horizon (whenever present), and these models are thus better aligned with astrophysical expectations. The imaging of the 16 alternative configurations according to these two models is presented in Figs. 4 and 5, respectively. Such images are consistent with what we know about the observed (by the EHT) images: they are largely dominated by the bright ring of radiation caused by the direct emission of the disk, and have the typical brightness depression at their center; superimposed on the direct emission we find the slight boost of luminosity caused by the \(n=1\) and \(n=2\) photon rings, though only the former is neatly visible. This is a trivial consequence of the extinction rate between the photon rings, as reported in the corresponding columns of Table 1. Indeed, such a rate closely tracks the Lyapunov index, with deviations between the latter (theoretical) and the former (observational) being \(\lesssim 15\%\) for every geometry in the GLM1 model, and \(\lesssim 5\%\) in the GLM2 one, and typically underestimated in the theoretical prediction (note in this sense that effective geodesics slightly decrease such rates as compared to what one would find should one use the background geodesics instead). This implies that the (theoretical) Lyapunov index is not a bad guide to the actual (observable) luminosity of the photon rings after all. We also observe neat differences in the location and width of the photon rings as well as in the depth of the brightness depression among background geometries, which is just a reflection of the data displayed in Table 1. As in the GLM3 model, the naked JNW solution distorts the trend of the images, since in such a case the distribution of luminosities of the photon rings inserted in the direct emission is significantly changed as compared to black hole space-times, and so is the depth of the central brightness depression. A comment related to this is that in these thin-disk models the depth of the central brightness depression can be much smaller than the inferred EHT shadow's size: in the GLM3 model this is translated into a \(n=2\) ring that can penetrate well inside the corresponding critical curve, while in the GLM1/GLM2 models it is the direct emission itself which clearly lies inside it.
This is expected on the grounds of previous studies in the field with a thick but not fully spherical disk [19], where the size of the central brightness depression is tied to the apparent (lensed) location of the (equatorial) horizon. The bottom line of our results is that even when the shadow's boundary is assumed to be degenerate between different spherically symmetric geometries [26] (assuming EHT data, hypotheses and interpretations), it turns out that their corresponding first and second photon rings contain sufficiently sharp differences to allow one to distinguish between such geometries in a thin-disk context, something in agreement with other findings in the field [14]. In practical terms, however, the properties of the disk are comparatively poorly known, and this may significantly alter the results to the point of mistaking alternative geometries for one another (and for Schwarzschild's). Here we resorted to GLM-type models of the disk inferred from the results of GRMHD simulations, yet there is still plenty of room for improvement in the comparison with observed images.

Figure 3: Zoom-in of the \(n=1\) (brighter) and \(n=2\) (dimmer) photon rings in the impact parameter space for (from left to right and top to bottom) LQG, KS, ConfSca, DM, SV, JNW, Sen, GCSV (e), EMD, Bronnikov (e), Hayward, RN, EH (e), Frolov, Bardeen, and GK, ordered in decreasing values of their Lyapunov exponent (units of \(M=1\)) using the emission model GLM3 in Eqs.(21) and (23).

## V Conclusion and prospects

In this work we have generated images of a selected pool of alternative spherically symmetric geometries, extracted from the work of Vagnozzi et al. in [25]. To do so, we applied (and refined) the constraints derived there on the space of parameters of each model from the correlation inferred by the EHT Collaboration on Sgr A\({}^{\star}\) between the size of the bright ring and the shadow's size itself [24] (subject to the caveats pointed out there), and generated such images when each geometry is surrounded by an infinitesimally-thin accretion disk with three samples of analytical profiles for the emission provided by the GLM ones. We then computed the Lyapunov exponent of nearly-bound orbits and sought correlations with the actual extinction rates of the luminosity between the \(n=1\) and \(n=2\) photon rings. Our results show that, when pushed to the extreme of their parameter space by the shadow's size calibrated in a thick-disk geometry, and considered in the opposite limit of an infinitesimally thin disk, the different alternative spherically symmetric geometries deviate significantly in the physical features relevant for such images (horizon and photon sphere radius), and dramatically in their extinction rates, by up to a factor of three from one end of the (upward) modifications to the shadow's size to the other (downward). Furthermore, such rates strongly correlate with the theoretical (Lyapunov) prediction, particularly in the GLM2 model (and to a lesser extent in the GLM1), thus rendering such theoretical quantities useful in connecting them to observations.

Figure 4: Images (from left to right and top to bottom) for LQG, KS, ConfSca, DM, SV, JNW, Sen, GCSV (e), EMD, Bronnikov (e), Hayward, RN, EH (e), Frolov, Bardeen, and GK, ordered in decreasing values of their Lyapunov exponent (units of \(M=1\)) using the emission model GLM1 in Eqs.(21) and (24).
Indeed, significant visual differences exist between the photon rings of each GLM model, as seen when they are isolated from each other and from the direct emission in the GLM3 model, as well as in the features of the full images of the (more realistic) GLM1/GLM2 models. This suggests that, in this scenario, it would be possible to distinguish between this pool of alternative spherically symmetric geometries in their optical appearance (at fixed emission profile); in other words, it is possible to contrast the variability of the disk's properties at fixed background geometry (as tests of the Kerr hypothesis, upon successful incorporation of rotation in this framework) with the variability of the background geometry at fixed accretion disk (as tests of non-Kerr geometries). There are, however, many known caveats that render the above conclusion premature. First of all, there are the assumptions on the optical, geometrical, and emission properties of the disk. In addition to the previously discussed assumption of an infinitesimally-thin accretion disk, our analysis also assumes that the emission is completely monochromatic and optically thin in the accretion disk's frame. In reality, accretion disks possess complex emission profiles that are not expected to be optically thin at all frequencies. Thus there is room for modelling improvement in this area. In this sense, we note that the EHT collaboration operates at a constant 230 GHz frequency (i.e. in the observer's frame) [24]; furthermore, at such a frequency opacity tends to suppress the \(n=2\) ring, though the signal is expected to reappear at higher frequencies, such as the planned 345 GHz of future upgrades of VLBI [19].

Figure 5: Images (from left to right and top to bottom) for LQG, KS, ConfSca, DM, SV, JNW, Sen, GCSV (e), EMD, Bronnikov (e), Hayward, RN, EH (e), Frolov, Bardeen, and GK, ordered in decreasing values of their Lyapunov exponent (units of \(M=1\)) using the emission model GLM2 in Eqs.(21) and (25).

On the emission profiles, the fact that GLM models are analytical approximations to GRMHD simulations for Kerr black holes means that we have no solid reason to expect that other black holes will have the same exact intensity profile, since the relevant geometrical features for the generation of images (e.g. horizon and photon sphere radius) may vary significantly from one geometry to another. This way, should we be able to look at a compact object and retrieve data on photon ring intensities, this would not immediately translate into reliable constraints for the background geometry without priors on the properties of the disk. This difficulty could be circumvented by appealing to universal polarimetric signatures [65], or to the more recent concept of photon ring _autocorrelations_, a two-point correlation of fluctuations in the intensity along a given photon ring [66]. Finally, the inclusion of rotation and inclination is expected to moderately modify the extinction rate numbers. Rotation actually turns the critical curve into a photon shell and adds two more critical exponents in the characterization of photon rings [67] (besides altering the shadow's size too), which have a non-negligible impact on the theoretical luminosity.
As for inclination, the critical curve also depends on it, so one should also expect significant deviations in the features of the associated rings [12]: for instance, for a Kerr black hole at full speed (i.e. maximal spin) and at the \(\theta_{0}=17^{\circ}\) inclination of M87 the factor \(e^{-\gamma}\) gets a \(\sim 13\%\) modification (on the observer's spin-oriented part of the ring [68]). In Fig. 6 we provide a quick glance at the inclined images at the M87 angle \(\theta_{0}=17^{\circ}\) (top), and at a much more extreme angle of \(\theta_{0}=80^{\circ}\) (bottom), of a Schwarzschild black hole and the two alternative black hole geometries with the largest modifications (upward and downward) to its Lyapunov index considered in this work, namely LQG and GK, for the seemingly favored GLM2 model. Even in this spherically symmetric setting, there are apparent visual differences among the models. The incorporation of both rotation and inclination would render the problem of characterizing photon rings in detail significantly more complicated than the simplified analysis made here (see e.g. the analysis of [21] on this problem), and goes well beyond the scope of this work. This problem would be further exacerbated in misaligned (tilted) accretion disk scenarios, which are consistent with low-luminosity active galactic nuclei [69]. This scenario introduces another dimension to the problem in the form of an additional angle between the disk orientation and the black hole spin vector. Such a disk tilt would affect the emission geometry and brightness because it breaks the axisymmetric nature of the accretion flow and results in increased flux variability, thus also significantly altering the emission profile, and supposedly the extinction rates too (see [70], and also [71] for a discussion on this problem). To conclude, while photon rings may contain valuable information on putative non-Kerr geometries, much work and far better modelling is still necessary in order to hope to disentangle the contributions of background geometries and accretion disk features in black hole images for photon rings to become useful as tests for the presence of new gravitational physics [72].

Figure 6: Inclined images at \(17^{\circ}\) (top) and \(80^{\circ}\) (bottom) inclination for LQG (left), Schwarzschild (middle) and GK (right) for the GLM2 model.

## Acknowledgements

FSNL acknowledges support from the Fundacao para a Ciencia e a Tecnologia (FCT) Scientific Employment Stimulus contract with reference CEECINST/00032/2018, and funding through the research grants CERN/FIS-PAR/0037/2019 and PTDC/FIS-AST/0054/2021. FSNL and LS also acknowledge support from the research grants UIDB/04434/2020 and UIDP/04434/2020. This work is also supported by the Spanish Agencia Estatal de Investigacion (grant PID2020-116567GB-C21 funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe), the project PROMETEO/2020/079 (Generalitat Valenciana), the EU's Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017 (FunFiCO-77740), and by the European Horizon Europe staff exchange (SE) programme HORIZON-MSCA-2021-SE-01 (NewFunFiCO10108625). This article is based upon work from COST Actions CA18108 and CA21136.
2304.07136
One Explanation Does Not Fit XIL
Current machine learning models produce outstanding results in many areas but, at the same time, suffer from shortcut learning and spurious correlations. To address such flaws, the explanatory interactive machine learning (XIL) framework has been proposed to revise a model by employing user feedback on a model's explanation. This work sheds light on the explanations used within this framework. In particular, we investigate simultaneous model revision through multiple explanation methods. To this end, we identified that \textit{one explanation does not fit XIL} and propose considering multiple ones when revising models via XIL.
Felix Friedrich, David Steinmann, Kristian Kersting
2023-04-14T14:01:12Z
http://arxiv.org/abs/2304.07136v2
# One Explanation Does Not Fit XIL

###### Abstract

Current machine learning models produce outstanding results in many areas but, at the same time, suffer from shortcut learning and spurious correlations. To address such flaws, the explanatory interactive machine learning (XIL) framework has been proposed to revise a model by employing user feedback on a model's explanation. This work sheds light on the explanations used within this framework. In particular, we investigate simultaneous model revision through multiple explanation methods. To this end, we identified that _one explanation does not fit XIL_ and propose considering multiple ones when revising models via XIL.

**Motivation** Nowadays, machine learning models generally suffer from flaws, e.g. model bias (Bianchi et al., 2022; Bender et al., 2021) or confounding behavior (Geirhos et al., 2020; Lapuschkin et al., 2019). Furthermore, they remain opaque and little understood. Therefore, it becomes crucial to make models understandable as their applications get more and more integrated into our lives. As a remedy, explainable artificial intelligence (XAI) has emerged with methods to explain a model, often its outputs, to the user. One step further, several works leverage such explanations in the learning setting, e.g., to improve a model beyond explainability or to unconfound it (Teso and Kersting, 2019; Teso et al., 2022; Selvaraju et al., 2019; Friedrich et al., 2022). User interaction plays a central role therein, substantially enhancing recent applications (Ouyang et al., 2022). A promising framework that leverages explanations interactively to improve a model's performance is XIL (_cf._ Fig. 1). Given a model which provides an explanation (Explain, Fig. 1) for a decision on a selected example, the user can interact with the model and provide corrective feedback on the explanation. This way, the model is not only optimized for the actual task but also revised to align its explanations with the user-provided feedback. However, XIL's Explain module has only been realized and investigated for one explainer at a time (e.g., RRR (Ross et al., 2017) uses Input Gradients (IG)). This work transfers the _one explanation does not fit all_ paradigm (Arya et al., 2019; Sokol and Flach, 2020) to XIL. In general, each explainer has inherent limitations, e.g., IG (Hechtlinger, 2016) provides only local explanations. In turn, revising a model via XIL implemented with a single explainer does not ensure a model's revision in its entirety. That means explainers' different capabilities and limitations translate to XIL methods and impact their effectiveness (Friedrich et al., 2022). Therefore, as a sensible next step, we investigate XIL with combinations of explainers. We show this helps further improve model revision regarding explanation quality. This work motivates designing future methods that leverage explanations of multiple explainers, as no single best explainer exists to be optimized for.

**Methods** Previous approaches (Ross et al., 2017; Schramowski et al., 2020; Shao et al., 2021) already leveraged explanations to revise a model. They usually follow the paradigm of optimizing two objectives at the same time: the prediction loss (\(\mathcal{L}^{\text{pred}}\)) and the explanation loss (\(\mathcal{L}^{\text{sil}}\)). The former is the same as in the standard training objective, while the explanation loss additionally constrains the explanations based on user feedback. The combined loss enforces a concurrent optimization of model outputs and explanations.
So far, \(\mathcal{L}^{\text{sil}}\) was only implemented with a single explainer (Explain). For example, RRR (Ross et al., 2017) uses IG (Hechtlinger, 2016), RBR (Shao et al., 2021) uses influence functions (IF, Koh and Liang (2017)), while RRR-G (Schramowski et al., 2020) uses gradient-weighted class activation maps (GradCAM, Selvaraju et al. (2017)). In contrast to these methods, we realize \(\mathcal{L}^{\text{sil}}\) with multiple explainers, giving \[\mathcal{L}=\mathcal{L}^{\text{pred}}+\sum_{i}\lambda_{i}\mathcal{L}_{i}^{\text{sil}}, \tag{1}\] where \(\lambda_{i}\) weights each explanation loss. The modified objective optimizes the model regarding multiple explainers, overcoming the limitations of specific ones.

Figure 1: XIL by Friedrich et al. (2022). A model generates explanations (Explain) and a user provides corrective feedback to revise the model.

**Results** We base our experimental evaluation on the DecoyMNIST dataset -- a variation of MNIST with decoy squares in the image corners, confounding the training data. We measure model performance with prediction accuracy and the explanation quality via a wrong reason measure (wr, cf. A.1; lower is better). It examines how wrong a model's explanation for a specific prediction is, given ground-truth wrong reasons. Further details and results can be found in A.2 and A.3. Tab. 1a shows that XIL methods, independent of the internally used explainer, successfully revise a model in terms of accuracy. However, Tab. 1b demonstrates that the model still relies on wrong reasons when generating explanations with various explainers. For example, applying RRR (employing IG explanations) substantially reduces wr for IG and Integrated Gradients (IntGrad), but GradCAM and LIME (Ribeiro et al., 2016) scores are still high, i.e. have high activations in the confounder area. This highlights that the wr score of the internally used explainer alone is no suitable indicator of confounding behavior. More importantly, the results show that revising a model with XIL through one explainer does not generalize to (all) different explainers. In contrast, Tab. 2 illustrates that leveraging a combination of various explanations in one XIL method reduces wr among multiple explainers while the accuracy remains on par (Tab. 3). Combining RRR and RRR-G yields low wr scores for all explainers (except LIME, though improved), something single methods struggle with. Moreover, combining RRR and RBR shows that not directly related explainers (GradCAM or LIME) can be improved, too. The final combination again highlights that combining multiple explanations better _fits_ XIL, i.e., further improving a model's explanation quality. However, one can see that combining methods does not set all scores to zero. This questions the reliability and robustness of explainers, an active research area (Adebayo et al., 2018). Hence, more research on explainers is needed. Furthermore, as the rightmost column in Tab. 2 shows (the GradCAM score is not the lowest), another exciting avenue for future work entails further investigating \(\lambda_{i}\) to trade off the influence of each explainer. Finally, the increase in computational cost must be kept in mind.
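To make Eq. (1) concrete, the sketch below shows one way a multi-explainer XIL objective could be assembled in PyTorch. It is an illustrative reading of the setup, not the authors' released code: `input_gradient_penalty` follows the RRR-style "right reason" idea of penalizing input gradients inside a user-provided confounder mask, and any further explainer penalty (e.g. a GradCAM-based term) would be appended to the list with its own \(\lambda_i\). The function names and the example weight are hypothetical.

```python
import torch
import torch.nn.functional as F

def input_gradient_penalty(x, logits, mask):
    # RRR-style "right reason" term: penalize input gradients of the
    # log-probabilities that fall inside the user-marked confounder mask.
    log_probs = F.log_softmax(logits, dim=1)
    grads = torch.autograd.grad(log_probs.sum(), x, create_graph=True)[0]
    return (mask * grads ** 2).sum()

def xil_loss(model, x, y, mask, penalties, lambdas):
    # Eq. (1): L = L_pred + sum_i lambda_i * L_i, for a list of explainer
    # penalties (input gradients, a GradCAM-based term, ...).
    x = x.clone().requires_grad_(True)
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    for lam, penalty in zip(lambdas, penalties):
        loss = loss + lam * penalty(x, logits, mask)
    return loss

# Hypothetical usage:
# loss = xil_loss(net, imgs, labels, decoy_mask, [input_gradient_penalty], [10.0])
```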
\begin{table} \begin{tabular}{c|c c c c} & RRR+RRR-G & RRR+RBR & RRR-G+RBR & RRR+RRR-G+RBR \\ \hline IG & **0.0**\(\pm\)0.0 & **0.0**\(\pm\)0.0 & 1.0\(\pm\)0.1 & **0.0**\(\pm\)0.0 \\ GradCAM & 3.1\(\pm\)1.7 & 11.8\(\pm\)2.9 & **2.3**\(\pm\)1.5 & 3.5\(\pm\)2.5 \\ IntGrad & 2.2\(\pm\)0.1 & **0.0**\(\pm\)0.0 & 13.5\(\pm\)0.1 & **0.0**\(\pm\)0.0 \\ LIME & 29.6\(\pm\)0.8 & 31.0\(\pm\)0.9 & 33.1\(\pm\)0.8 & **27.9**\(\pm\)1.0 \\ \end{tabular} \end{table} Table 2: Mean wr scores [%] with sd (5 runs) on DecoyMNIST. The columns depict combinations of explainers used for XIL. The wr scores of combined methods are lower than for methods based on single explainers (_cf._ Tab. 1b). Lower is better; best values bold.

**Conclusion** In this work, we studied XIL's performance from the perspective of explanation methods. We found that optimizing for a single explanation method _does not fit_ XIL. Instead, combining different explanation methods through simultaneous optimization further improves explanation quality, even beyond the optimized explanation methods. Emphasizing the complexity of faithful and explainable models, our results contribute to this goal and motivate future research.

### Acknowledgements

The authors thank Raynard Widjaja for preliminary results. This work benefited from the Hessian Ministry of Science and the Arts (HMWK) projects "The Third Wave of Artificial Intelligence - 3AI" and hessian.AI as well as from the ICT-48 Network of AI Research Excellence Center "TAILOR" (EU Horizon 2020, GA No 952215).
2304.08205
VECO 2.0: Cross-lingual Language Model Pre-training with Multi-granularity Contrastive Learning
Recent studies have demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages. In addition to involving the masked language model objective, existing cross-lingual pre-training works leverage sentence-level contrastive learning or plugs in extra cross-attention module to complement the insufficient capabilities of cross-lingual alignment. Nonetheless, synonym pairs residing in bilingual corpus are not exploited and aligned, which is more crucial than sentence interdependence establishment for token-level tasks. In this work, we propose a cross-lingual pre-trained model VECO~2.0 based on contrastive learning with multi-granularity alignments. Specifically, the sequence-to-sequence alignment is induced to maximize the similarity of the parallel pairs and minimize the non-parallel pairs. Then, token-to-token alignment is integrated to bridge the gap between synonymous tokens excavated via the thesaurus dictionary from the other unpaired tokens in a bilingual instance. Experiments show the effectiveness of the proposed strategy for cross-lingual model pre-training on the XTREME benchmark.
Zhen-Ru Zhang, Chuanqi Tan, Songfang Huang, Fei Huang
2023-04-17T12:23:41Z
http://arxiv.org/abs/2304.08205v1
# VECO 2.0: Cross-lingual Language Model Pre-training with Multi-granularity Contrastive Learning ###### Abstract Recent studies have demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages. In addition to involving the masked language model objective, existing cross-lingual pre-training works leverage sentence-level contrastive learning or plugs in extra cross-attention module to complement the insufficient capabilities of cross-lingual alignment. Nonetheless, synonym pairs residing in bilingual corpus are not exploited and aligned, which is more crucial than sentence interdependence establishment for token-level tasks. In this work, we propose a cross-lingual pre-trained model VECO 2.0 based on contrastive learning with multi-granularity alignments. Specifically, the sequence-to-sequence alignment is induced to maximize the similarity of the parallel pairs and minimize the non-parallel pairs. Then, token-to-token alignment is integrated to bridge the gap between synonymous tokens excavated via the thesaurus dictionary from the other unpaired tokens in a bilingual instance. Experiments show the effectiveness of the proposed strategy for cross-lingual model pre-training on the XTREME benchmark1. Footnote 1: Rank 1st on March 17, 2023 on XTREME leaderboard. [https://sites.research.google/xtreme/](https://sites.research.google/xtreme/) ## 1 Introduction Pre-trained models play an important role as a backbone for various NLP downstream tasks. The models have expanded from monolingual to multilingual with development, where cross-lingual pre-trained models have demonstrated their superior performance on cross-lingual NLP tasks [15; 8; 11]. To construct the universal representation between different languages, previous works mainly focus on two pre-training objectives, which are Multilingual Masked Language Model (MMLM) and Translation Language Model (TLM). MMLM is the multilingual version of MLM modeling each language separately in the shared semantic space and TLM performs MLM on concatenated parallel sentence pairs to implicitly capture the alignment via attention mechanism, both of them align the masked tokens with the context without considering sentence-level information. To overcome that, HICTL [38] and infoXLM [7] incorporate sentence-level contrastive learning to enhance the alignment among parallel sentences. However, one potential issue lies in that the token-to-token alignment for synonyms hidden in parallel corpus is ignored and lacks exploitation, despite sequence-to-sequence and token-to-sequence exploration have been included in the above approach, especially token alignment across languages is more crucial for token-oriented downstream tasks, i.e. cross-lingual Named Entity Recognition (NER). Furthermore, instead of implicitly building interdependence in TLM relying on self-attention module, VECO [28] plugs a cross-attention module into Transformers encoder to explicitly capture alignment whose extra architecture has to be adapted and leads to extra parameters. In light of the above motivation, we propose **V**arious granularity aligned **E**ncoder with **CO**ntrastive Learning (**VECO 2.0**) in sequence-to-sequence and token-to-token alignments. Specifically, VECO 2.0 maximizes the similarity of the parallel pairs and minimizes the non-parallel pairs in a batch based on the sequence semantic representation of bilingual corpus for sequence-to-sequence alignment. 
Besides, the synonyms residing in the parallel pairs are excavated via the thesaurus dictionary to construct parallel token pairs. Similarly, VECO 2.0 bridges the gap between token pairs while separating them from the other unpaired tokens in the instance. The above strategy is implemented on a Transformer encoder architecture that can be directly adapted and combined with the MLM and TLM tasks to establish comprehensive alignment at the token-sequence, sequence-sequence and token-token levels, resulting in a universal representation across languages. We evaluate VECO 2.0 on a variety of representative cross-lingual NLU tasks in the XTREME [21] benchmark, including tasks of sentence-pair classification, structured prediction, question answering and sentence retrieval. Comparative experiments against multiple cross-lingual pre-trained models clearly demonstrate the effectiveness and superiority of our model. In addition, the ablation study further validates that the two auxiliary alignment tasks play a crucial role in the sequence-level and token-level downstream tasks, respectively, guiding us in taking the task characteristics into account when pre-training. Finally, we pre-train a larger-scaled model based on the proposed mechanism, coupled with fine-tuning and ensemble strategies, ranking first on the XTREME leaderboard on March 17, 2023.

Figure 1: An illustration of the proposed multi-granularity contrastive learning, where the left is the sequence-to-sequence alignment for a batch, and the right is the token-to-token alignment for an instance. In (a), \((x_{i},y_{i})\) indicates a parallel pair in a batch and \((x_{j},y_{j})\) is another pair. In (b), \(\{a_{1},a_{2},a_{3}\}\) and \(\{b_{1},b_{2},b_{3},b_{4}\}\) denote the tokens of \(x_{i}\) and \(y_{i}\), respectively. \((a_{2},b_{3})\) is a synonymous token pair filtered by the thesaurus dictionary.

## 2 Related Work

### 2.1 Multilingual Language Models

Most existing multilingual language models are built on the Transformer encoder [37] architecture. Among them, mBERT [15] is the first multilingual language model, which constructs a shared vocabulary and performs the masked language modeling (MLM) task on monolingual corpora of multiple languages. XLM [23] introduces parallel data and extends MLM to translation language modeling (TLM), which randomly masks words in concatenated parallel sentences. XLM-R [11] builds a larger vocabulary of 250k and trains with the multilingual MLM objective on monolingual data only, at a larger scale, based on the RoBERTa [27] architecture. InfoXLM [7] adds a sentence-level contrastive learning loss for bilingual pairs to maximize the mutual information between translation pairs. ERNIE-M [30] integrates back-translation into the pre-training process to generate pseudo-parallel pairs for the monolingual corpus, enabling alignment between different languages. HICTL [38] proposes sentence-level and word-level contrastive learning to distinguish the parallel sentence and related words for each sentence. XLM-E [8] introduces ELECTRA-style [10] tasks including multilingual replaced token detection and translation replaced token detection. XY-LENT [32] leverages X-Y bitexts coupled with a novel sampling strategy rather than previous English-centric bitexts. Furthermore, there are also various multilingual language models built on the Transformer encoder-decoder architecture which focus on the improvement of text generation and machine translation.
For instance, mBART [26] is a sequence-to-sequence denoising model pre-trained on monolingual corpora using the BART autoregressive objective. mT5 [42] introduces a multilingual variant of T5 [34] and significantly improves the performance. VECO [28] provides a cross-attention module to build the interdependence between languages, which can be used to initialize both encoder-decoder models for NLG tasks and encoder models for NLU tasks. Despite contrastive learning being utilized in our encoder model similar to infoXLM and HICTL, the key difference from previous work lies in that we construct the multi-granularities contrastive loss for alignment. Compared to the individual sentence-level contrastive learning in infoXLM, we also add token-level contrastive loss for synonym alignment. In contrast to word-level contrastive learning in HICTL which enhances the connection between related words and sentences for token-to-sequence alignment, VECO 2.0 bridges the representations of synonyms pairs embedded in the bilingual corpus rather than the tokens and the sentence they belong to, resulting in better performance in token-level cross-lingual downstream tasks. ### Contrastive Learning **Contrastive Learning** (CTL) has been applied and validated in the field of computer vision [20; 3; 5; 4], then transferred to natural language processing [17; 16; 40; 18]. The key idea of that is drawing positive pairs closer while separating the negative pairs by a contrastive loss, where the construction of pairs and loss function definition is significant, enabling models to learn better representation. SimCLR [3] is the classic in-batch CTL method that builds the augmented images as positive pairs and the other unpaired images within the batch compose negative pairs. Meanwhile, the loss function infoNCE [36] is adapted as optimization objectives. In the NLP areas, simCSE [17] leverages the various representation via dropout encoded by language model to serve as the positive pairs. CERT [16] designs self-supervised CTL task to learn sentence-level representation, complementing the previous token-level task in pre-training, such as MLM in BERT [15]. The specific CTL method is augmenting input with back translation [35] and following MoCo [20] which maintains a queue of negative candidate instances. ## 3 Pre-training VECO 2.0 is the upgrade language model of VECO [28] which enhances the parallel corpus alignment cross granularities. Different from VECO plugging an additional cross-attention module into the Transformer encoder to explicitly build the interdependence between languages, VECO 2.0 is under the encoder-only architecture with fewer parameters. Accordingly, VECO 2.0 proposes the **M**ulti-granularity **C**ontrastive **L**earning (MCTL), i.e., sequence-to-sequence and token-to-token alignment auxiliary tasks to tackle the representations across languages. Besides vanilla Masked Language Modeling (MLM) task for monolingual corpus and Translation Language Modeling (TLM) task for bilingual corpus, the two multi-granularity auxiliary tasks for parallel corpus are illustrated as Figure 1. ### Sequence-to-Sequence Alignment An in-batch sequence-to-sequence contrastive loss is designed to bridge the gap of parallel corpus and widen the distance between unpaired sentences in semantic space from a coarse-grained perspective. 
Formally, let \(\mathcal{X}=\{x_{i}\ |\ 1\leq i\leq n\}\) and \(\mathcal{Y}=\{y_{i}\ |\ 1\leq i\leq n\}\) be the instances set in a training batch respectively, where \(n\) is the batch size and \(y_{i}\) corresponds to the translation of \(x_{i}\). For pair \((x_{i},y_{i})\), taking the encoded representation of \(x_{i}\) as query, and other instances except \(x_{i}\) in a batch, i.e. \(\mathcal{X}^{\setminus x_{i}}\cup\mathcal{Y}\) is considered as keys. Since \(y_{i}\) is the positive samples corresponding to \(x_{i}\), the contrastive loss for \(x_{i}\) can be calculated as followed: \[l_{ctl}(x_{i})=-\log\frac{\exp(s(x_{i},y_{i})/\tau)}{\sum\limits_{k\in\mathcal{X }^{\setminus x_{i}}\cup\mathcal{Y}}\exp(s(x_{i},k)/\tau)} \tag{1}\] Here, \(\tau\) is the temperature parameter and \(s(x,y)\) reflects similarity between \(x\) and \(y\), where we initialize it with cosine similarity \(s(x,y)=\frac{x.y}{\|x\|\cdot\|y\|}\). Symmetrically and similarly, \(y_{i}\) is also assumed as a query and the candidate keys are included in \(\mathcal{X}\cup\mathcal{Y}^{\setminus y_{i}}\). On the condition that \(x_{i}\) is the positive sample for \(y_{i}\), the contrastive loss for \(y_{i}\) is constructed with the following formula: \[l_{ctl}(y_{i})=-\log\frac{\exp(s(y_{i},x_{i})/\tau)}{\sum\limits_{k\in \mathcal{X}\cup\mathcal{Y}^{\setminus y_{i}}}\exp(s(y_{i},k)/\tau)} \tag{2}\] Accordingly, the sequence-to-sequence contrastive loss for a training batch in size \(n\) is determined by the following: \[\mathcal{L}_{seq}=\frac{1}{2n}\sum\limits_{i=1}^{n}\{l_{ctl}(x_{i})+l_{ctl} (y_{i})\} \tag{3}\] ### Token-to-Token Alignment Although sequence-to-sequence alignment is proposed for better overall cross-linguistic representation, there are still many synonyms residing in the translated pairs waiting to be fully exploited, which act as anchor words in some downstream tasks (e.g. NER). Motivated by that, a strategy for token-to-token alignment is investigated in this paper. Specifically, for parallel corpus \((x_{i},y_{i})\), we assume that there are several synonym pairs that can be exploited via the mapping and filter from thesaurus dictionary as shown in Figure 1. Let \(\mathcal{S}_{i}=\{(a_{j},b_{j})\mid a_{j}\in x_{i},b_{j}\in y_{i},1\leq j\leq| \mathcal{S}|\}\) be the set of synonym pairs, where \(a_{j}\) and \(b_{j}\) are positive samples with each other. The remaining tokens in the instance \(x_{i}\) and \(y_{i}\) are served as negative samples. Formally speaking, let \(\mathcal{W}\) be the set of tokens from \(x_{i}\) and \(y_{i}\), and the contrastive loss for \(a_{j}\) and \(b_{j}\) are defined as followed separately. 
\[l_{ctl}(a_{j})=-\log\frac{\exp(s(a_{j},b_{j})/\tau)}{\sum\limits_{k\in\mathcal{W}^{\setminus a_{j}}}\exp(s(a_{j},k)/\tau)} \tag{4}\] \[l_{ctl}(b_{j})=-\log\frac{\exp(s(b_{j},a_{j})/\tau)}{\sum\limits_{k\in\mathcal{W}^{\setminus b_{j}}}\exp(s(b_{j},k)/\tau)} \tag{5}\] Then, the token-to-token loss for the parallel pair \((x_{i},y_{i})\) can be derived via aggregation over the token pairs \(\mathcal{S}_{i}\): \[l_{tok}(x_{i},y_{i})=\frac{1}{2|\mathcal{S}_{i}|}\sum\limits_{j=1}^{|\mathcal{S}_{i}|}\{l_{ctl}(a_{j})+l_{ctl}(b_{j})\} \tag{6}\] Finally, the token-to-token alignment loss for a training batch is determined by averaging the loss over all bilingual pairs: \[\mathcal{L}_{tok}=\frac{1}{n}\sum\limits_{i=1}^{n}l_{tok}(x_{i},y_{i}) \tag{7}\]

### Pre-training Tasks

For the monolingual corpus, VECO 2.0 employs the MLM task, which randomly replaces tokens with [MASK] to align them with the context in their own language. For bilingual data, the TLM task, which concatenates parallel sentences and performs the MLM objective, is utilized to implicitly attend to a part of the words across languages. In addition, the above multi-granularity alignment tasks are also combined to capture cross-language correlation explicitly. Formally, the entire loss for the training corpus is optimized as follows: \[\mathcal{L}=\begin{cases}\mathcal{L}_{MLM}&,\ \text{if monolingual}\\ \mathcal{L}_{TLM}+\mathcal{L}_{seq}+\mathcal{L}_{tok}&,\ \text{if bilingual}\end{cases} \tag{8}\]

## 4 Experiments

### 4.1 Pre-training Corpus

We pre-train on monolingual and bilingual corpora involving 109 languages. For monolingual data, we follow XLM-R [11] using CC-100 [39] from Common Crawl and extract 2.5TB of data. For bilingual data, we collect 4TB of parallel pairs from OPUS. Footnote 2: [https://data.statmt.org/cc-100/](https://data.statmt.org/cc-100/) Footnote 3: [https://opus.nlpl.eu/](https://opus.nlpl.eu/) To mitigate the imbalance between high- and low-resource languages, we sample the monolingual corpus with a multinomial distribution following XLM [12]. Specifically, with \(N\) languages, the sampling probability \(q_{i}\) for language \(i\) \((1\leq i\leq N)\) can be formalized as follows: \[q_{i}=\frac{p_{i}^{\alpha}}{\sum_{j=1}^{N}p_{j}^{\alpha}},\ \text{where}\ \ p_{i}=\frac{n_{i}}{\sum_{k=1}^{N}n_{k}} \tag{9}\] Here, \(n_{i}\) is the number of sentences for language \(i\) and \(\alpha\) corresponds to the smoothing parameter which controls the language sampling rate. The lower the value of \(\alpha\), the more the sampling is tilted towards low-resource languages. We employ \(\alpha=0.5\).

### Implementation Details

The model is large-scale, with 559M parameters in 24 layers, 1024 hidden size and 4096 feed-forward size. We adopt the same 250k shared vocabulary as XLM-R and apply subword tokenization directly on raw text data using the SentencePiece model [22]. Following XLM-R [11] and VECO [28], we do not use language embeddings, for better generalization. Table 1 shows the details of the involved models. The model parameters are initialized from the encoder of VECO. During the training phase, to balance the monolingual and bilingual corpora, we alternately sample a batch of monolingual segments and a batch of parallel sentences. In the token-to-token alignment task, MUSE [13] is utilized as the thesaurus dictionary. Footnote 4: [https://github.com/facebookresearch/MUSE](https://github.com/facebookresearch/MUSE)
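As a concrete illustration of the alignment objectives of Section 3 (Eqs. (1)-(7)), the following PyTorch-style sketch computes the symmetric in-batch sequence-to-sequence loss and the token-to-token loss for one bilingual pair, using cosine similarity as \(s(\cdot,\cdot)\). It is a minimal reading of the formulas, not the released VECO 2.0 code; the temperature value and the way sentence/token representations are obtained are assumptions.

```python
import torch
import torch.nn.functional as F

def seq_alignment_loss(hx, hy, tau=0.05):
    # Eqs. (1)-(3): hx, hy are [n, d] sentence representations of the parallel
    # pairs (x_i, y_i). For query x_i the keys are all of Y plus X\{x_i}; the
    # positive key is y_i. Cosine similarity is obtained by L2-normalising.
    hx, hy = F.normalize(hx, dim=-1), F.normalize(hy, dim=-1)
    n = hx.size(0)
    labels = torch.arange(n)
    self_mask = torch.eye(n, dtype=torch.bool)
    logits_x = torch.cat([hx @ hy.T,
                          (hx @ hx.T).masked_fill(self_mask, float("-inf"))], dim=1) / tau
    logits_y = torch.cat([hy @ hx.T,
                          (hy @ hy.T).masked_fill(self_mask, float("-inf"))], dim=1) / tau
    return 0.5 * (F.cross_entropy(logits_x, labels) + F.cross_entropy(logits_y, labels))

def tok_alignment_loss(tok_emb, pairs, tau=0.05):
    # Eqs. (4)-(7) for one bilingual pair: tok_emb is [m, d] for all tokens of
    # the concatenated pair; `pairs` lists synonym index pairs (a_j, b_j) found
    # via the bilingual dictionary. The keys are all other tokens of the instance.
    h = F.normalize(tok_emb, dim=-1)
    sims = h @ h.T / tau
    losses = []
    for a, b in pairs:
        for query, positive in ((a, b), (b, a)):
            logits = sims[query].clone()
            logits[query] = float("-inf")  # remove the query itself from the keys
            losses.append(F.cross_entropy(logits.unsqueeze(0), torch.tensor([positive])))
    return torch.stack(losses).mean()
```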
\begin{table} \begin{tabular}{l l l l l l l l} Model & Architecture & \#Params. & \#Layers & \#Langs & \#Vocab. & Tasks & Training Data \\ \hline mBERT & Encoder & 172M & 12 & 104 & 110k & MLM & Wikipedia \\ XLM & Encoder & 570M & 24 & 100 & 200k & MLM, TLM, CTM & Wikipedia+Translation \\ XLM-R & Encoder & 559M & 24 & 100 & 250k & MLM & CommonCrawl \\ HICTL & Encoder & 559M & 24 & 100 & 250k & MLM, TLM, HICTL & CommonCrawl+Translation \\ VECO & Flexible & 662M & 24\({}^{*}\) & 50 & 250k & MLM, TLM, CA-MLM & CommonCrawl+Translation \\ VECO 2.0 & Encoder & 559M & 24 & 109 & 250k & MLM, TLM, MCTL & CommonCrawl+Translation \\ \hline \end{tabular} \end{table} Table 1: The details of the compared cross-lingual models. * denotes that VECO unifies the encoder and decoder.

### Downstream Tasks

In this paper, we evaluate our model on XTREME [21], which is a massively multilingual benchmark for evaluating cross-lingual generalization. Specifically, XTREME includes 9 tasks from 4 categories covering 40 languages:

* Sentence-pair classification: Cross-lingual Natural Language Inference (XNLI) [14] and Cross-lingual Paraphrase Adversaries from Word Scrambling (PAWS-X) [44]. XNLI aims to predict the relation between the premise and hypothesis sentence, i.e. entailment, contradiction or neutral. PAWS-X determines whether the two sentences are paraphrases of each other.
Specifically, for sentence-pair classification tasks, the performance of VECO 2.0 on XNLI and PAWS-X is slightly inferior to HICTL and VECO by 0.6%, 0.2% separately, but still superior to other models, which we suppose the token-to-token alignment affects the sentence-to-sentence alignment to some extent in the condition that XNLI focuses on the sentence-level semantic representation. We further validate that in the ablation study. For structured prediction, VECO 2.0 exceeds the sub-optimal performance by 0.3% with VECO on POS tagging and 1.0% with HICTL on NER. The advantageous performance of NER we attribute to the potential of the token-to-token alignment task \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline Datasets & \multicolumn{3}{c}{Pair sentence} & \multicolumn{3}{c}{Structured prediction} & \multicolumn{3}{c}{Question answering} & \multicolumn{3}{c}{Sentence retrieval} \\ & XNLI & PAWS-X & POS & NER & XQuAD & MLQA & TyDiQA & BUCC & Tatoeba & \\ \hline \#Langs & 15 & 7 & 33 & 40 & 11 & 7 & 9 & 5 & 33 & AVG \\ \hline Metrics & Acc. & Acc. & F1 & F1 & F1 & EM & F1 & EM & F1 & EM & F1 & Acc. \\ \hline \hline \multicolumn{11}{c}{_Cross-lingual zero-shot transfer (models are trained on English data)_} \\ \hline mBERT & 65.4 & 81.9 & 71.5 & 62.2 & 64.5 & 49.4 & 61.4 & 44.2 & 59.7 & 43.9 & 56.7 & 38.7 & 59.8 \\ XLM & 69.1 & 80.9 & 71.3 & 61.2 & 59.8 & 44.3 & 48.5 & 32.6 & 43.6 & 29.1 & 56.8 & 32.6 & 55.7 \\ XLM-R & 79.2 & 86.4 & 73.8 & 65.4 & 76.6 & 60.8 & 71.6 & 53.2 & 65.1 & 45.0 & 66.0 & 57.3 & 68.2 \\ HICTL & **81.0** & 87.5 & 74.8 & 66.2 & 77.9 & 61.7 & **72.8** & **54.5** & 66.0 & 45.7 & 68.4 & 59.7 & 69.6 \\ VECO & 79.9 & **88.7** & 72.1 & 65.7 & 77.3 & 61.8 & 71.7 & 53.2 & 67.6 & 49.1 & 85.0 & 75.1 & 73.1 \\ VECO 2.0 & 80.4 & 88.5 & **75.4** & **67.2** & **78.9** & **63.7** & 72.7 & 54.3 & **71.1** & **54.7** & **86.2** & **81.8** & **75.2** \\ \hline \hline \multicolumn{11}{c}{_XTREME leaderboard_} \\ \hline VECO 2.0 & 88.3 & 93.4 & 85.3 & 84.0 & 85.9 & 73.2 & 80.5 & 63.9 & 85.4 & 74.2 & 93.8 & 96.2 & 85.8 \\ \hline \hline \end{tabular} \end{table} Table 2: XTREME results on each dataset. The results of mBERT, XLM and XLM-R are from [21], and those for VECO and HICTL are from their respective papers [28] and [38]. The detailed results for each language are in Appendix D. (**bold**: the best score; underline: the second.) to bridge the distance between synonymous cross-lingual entities in the semantic space. For question answering, VECO 2.0 achieves the best performance, surpassing VECO by 1.75% and HICTL by 1.5 % in the average of F1 and EM on XQuAD. For the average of F1 and EM on MLQA, VECO 2.0 outperforms VECO by 1.05% and is slightly inferior to HICTL by 0.15%. On TyDiQA-GoldP, VECO 2.0 achieves significant gains by a wide margin against other models, where over VECO 4.55% and HICTL 7.05%. For sentence retrieval, both VECO and VECO 2.0 significantly improve performance on two retrieval tasks compared to other models, but VECO 2.0 further raises it by over VECO 1.2% on BUCC and 6.7% on Tatoeba respectively. In summary, our method delivers the best overall performance. We conclude the reasons for improvement lie in the strong representation alignment and interdependence establishment across different languages via the proposed multi-granularity contrastive learning tasks, which will be further investigated in the ablation study. 
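As a quick, purely arithmetic check of one of the margins quoted above (the 1.75-point gap over VECO in the average of F1 and EM on XQuAD), the numbers can be recomputed directly from Table 2; the snippet below only restates that reading of the table and is not part of the evaluation pipeline.

```python
# F1/EM on XQuAD as reported in Table 2.
veco, veco2 = (77.3, 61.8), (78.9, 63.7)
avg = lambda f1, em: (f1 + em) / 2
print(round(avg(*veco2) - avg(*veco), 2))  # 1.75, matching the quoted margin
```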
### Analysis #### 4.6.1 Cross-lingual Transfer Gap The cross-lingual transfer gap [21] is the difference between performance on the English test set and the average performance on the other languages. The lower transfer gap indicates the better cross-lingual transfer capability of the model and a transfer gap of 0 suggests the perfect cross-lingual transfer. Table 3 demonstrates the comparison of VECO 2.0 and the other cross-lingual pre-trained model on XTREME tasks, which can be summarized that VECO 2.0 have the better cross-lingual transferability on average among them. In particular, combined with Table 2, VECO 2.0 has not only better average performance across all languages on XQuAD, MLQA, TyDiQA and NER but also a lower transfer gap on them against VECO. For task XNLI, our performance is better than VECO, but the transfer gap is higher, which we suggest probably is caused by overfitting the English data. While the condition is reversed for task PAWS-X, in other words, VECO 2.0 has greater transferability although the performance is slightly inferior to VECO. #### 4.6.2 Ablation Study To explore the impact of each alignment loss, we conducted a series of ablation experiments as presented in Table 4. We pre-train the same steps on the base scaled model across various settings using monolingual and bilingual corpus in selected languages, with the only difference being the training objective. Specifically, we compared four settings, starting with the vanilla MLM for \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Tasks & XNLI & POS & NER & TyDiQA-GoldP & BUCC & AVG \\ Metrics & Acc. & F1 & F1 & EM & F1 & F1 & AVG \\ \hline MLM+TLM & 71.1 & 67.0 & 53.5 & 32.6 & 48.8 & 23.4 & 49.4 \\ \hline MLM+TLM+Seq-Seq CTL & 72.3\({}_{(+1.2)}\) & 67.4\({}_{(+0.3)}\) & 53.9\({}_{(+0.4)}\) & 33.0\({}_{(+0.4)}\) & 49.8\({}_{(+1.0)}\) & 48.9\({}_{(+25.5)}\) & 54.2\({}_{(+4.8)}\) \\ MLM+TLM+Tok-Tok CTL & 70.2\({}_{(-0.9)}\) & 67.1\({}_{(+0.1)}\) & 54.4\({}_{(+0.9)}\) & 31.9\({}_{(-0.7)}\) & 49.3\({}_{(+0.5)}\) & 24.4\({}_{(+0.9)}\) & 49.5\({}_{(+0.1)}\) \\ MLM+TLM+MCTL & 71.3\({}_{(+0.2)}\) & 68.1\({}_{(+1.1)}\) & 55.5\({}_{(+2.0)}\) & 33.7\({}_{(+1.1)}\) & 51.4\({}_{(+2.6)}\) & 34.9\({}_{(+11.4)}\) & 52.5\({}_{(+3.1)}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study of base-sized VECO 2.0. We gradually add alignment tasks on the basis of MLM + TLM with consistent training data and hyperparameters. MCTL indicates that both Seq-Seq CTL and Tok-Tok CTL are used. The number in \((\cdot)\) reflects the difference between the current setting and the baseline MLM + TLM. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Model & XNLI & PAWS-X & XQuAD & MLQA & TyDiQA & NER & POS & Avg. \\ \hline mBERT & 16.5 & 14.1 & 25 & 27.5 & 22.2 & 23.6 & 25.5 & 22.1 \\ XLM-R & 10.2 & 12.4 & 16.3 & 19.1 & 13.3 & 19.8 & 24.3 & 16.5 \\ XLM & 14.7 & 13.1 & 19.6 & 26.3 & 27.9 & 22.0 & 24.8 & 21.2 \\ VECO & **8.9** & 7.5 & 16.6 & 20.2 & 10.2 & 18.5 & **21.4** & 14.8 \\ VECO 2.0 & 9.2 & **7.3** & **16.2** & **20.1** & **6.8** & **18.1** & **21.4** & **14.1** \\ \hline \hline \end{tabular} \end{table} Table 3: The cross-lingual transfer gap of different pre-trained models on XTREME tasks. The transfer gap is the difference between performance on the English test set and the average performance in the other languages, where the lower score the better transferability. For the QA tasks, we show EM scores. (**bold**: the best scores.) 
monolingual data and TLM for parallel data without any auxiliary alignment tasks, and subsequently incorporating sequence-to-sequence and token-to-token CTL loss in each order. To sum up, the following observations can be made based on the reported results. First, sequence alignment has a significant impact on improving the retrieval task BUCC, next is the classification task XNLI, which suggests that sequence alignment can effectively bridge the semantic representation of parallel data. Second, the token alignment task demonstrates better improvement for token-level downstream tasks, i.e. NER, than the sentence alignment task. But it hurts the performance of sentence-level tasks like XNLI to some extent, which we attribute to the potential disruption of overall sentence representation caused by the attraction between synonyms. Third, when sequence and token alignment are used in conjunction denoted as MCTL, we observe a general boost compared to vanilla MLM+TLM, which is more pronounced and evenly distributed than using only sequence alignment task without considering the effect of extreme values brought from BUCC. Besides, for the QA task TyDiQA-Gold, it can be observed that combining two auxiliary tasks jointly works better than separate training, where we attribute this improvement to the fact that the QA task requires both an overall sentence semantic understanding of the question and the segmentation of the answer token corresponding to the sequence and token alignment respectively. In general, sentence-level downstream tasks, i.e. classification and retrieval require sentence alignment whereas token alignment is more crucial for token-level tasks like NER. For QA, the involvement of both sentence and token alignment is significant for achieving optimal results. Considering the variety and different characteristics of the downstream tasks in XTREME, we ultimately incorporate the two alignment tasks to achieve a greater and more consistent improvement. ## 5 XTREME Leaderboard Submission Based on the key strategy of VECO 2.0, we submit our results coupled with effective fine-tuning and ensemble methods to the XTREME leaderboard. The details involved are shown below. ### Model Setting In addition to the large model, we also pre-train an xlarge scaled model of 3.5B parameters which has 36 layers with 2,560 hidden size and 10,240 feed-forward size. The pre-training corpus and shared vocabulary are kept consistent with the setting of large model. We initialize the parameters of the model with XLM-R xlarge [19]. ### Task Setting Translate-TrainIn the translate-train setting, the cross-lingual or multilingual pre-trained model is fine-tuned on the collection of all data, i.e. golden training corpus in English and the corresponding translated corpus in other languages. XTREME offers translated training corpus in other languages for translate-train setting and translated test corpus in English for translate-test setting for most tasks 6. Note that for two structure prediction tasks (POS, NER), the position of token labels in the translated text generally differs from that in the source text. For the POS task, we follow VECO [28] that uses the model trained only on the English training dataset as a teacher, to label the translated text. For the NER task, we follow EASYPROJECT [6] that utilizes a simple mark-then-translate method to label the translated text by inserting special markers around the labeled spans in the original sentence. 
In practice, we additionally filter the translated text whose projected entities are equal to the prediction of models in the cross-lingual setting. It is found that the label of translate-train data filtered is of high quality, leading to the best experimental results. Footnote 6: [https://github.com/google-research/xtreme](https://github.com/google-research/xtreme) Translate-TestIn the translate-test setting, the pre-trained model is trained on the English training data and evaluated on test data translated from the target language to English. Referring to the settings and results of XTREME [21], we conduct experiments under translate-test settings in tasks of sentence-pair classification and question answering. Similar to the translate-train setting, XTREME also offers translated test corpus for these tasks. For sentence-pair classification, we predict the original and translated test data at the same time, then we consider both logits to obtain the final prediction via a certain strategy, e.g. maximum or mean. For question answering, we first use a model trained on the English training dataset to predict the answer on the translate-test text, and then translate to answer to the target language by Google Translate. Note that there exists a mismatch between the translated answer and the span in original text. Unlike mapping the answer to the closest span, we find that using it as an anchor for selecting the top-k results can achieve better results. Therefore, in practice, we fine-tuned the pre-trained model under the translate-train setting to predict the top-k results on original test data, then we select the final answer by the token level similarity with the answer of the translate-test. ### Fine-tuning Strategy We leverage different fine-tuning strategies including Child-Tuning [41], Hype [43], R-Drop [25], and implement an alignment-enhanced fine-tuning method for parallel training corpus in translate-train setting, where the alignment loss between bilingual data is attached to the original downstream tasks loss. The specific loss function we use is either Kullback-Leibler divergence following R-drop, or the infoNCE loss in contrastive learning similar to the one we used in the pre-training phase. It is worth noting that sentence retrieval is a zero-shot task without fine-fining, but we found the performances of the model fine-tuned by related downstream tasks is better than direct inference on the pre-trained model, which is also validated in [33]. In practice, we leverage XNLI to assist with sentence retrieval tasks. ### Ensemble Setting We fine-tune with different hyper-parameters e.g. learning rates and random seeds to generate the ensemble results. Detailed settings and hyper-parameters are shown in Appendix. Due to the different characteristics of the downstream tasks, we used separate ensemble settings. **Sentence-pair Classification** We consider all the candidate probabilities to decide the final answer including the prediction of translate-test data. **Structured Prediction** We take the probabilities to decide the label at the token level. For the NER task, we additionally filter the illegal entity under the BIO scheme. **Question Answering** We first ensemble the result at the span level under the translate-train setting, and then take the translate-test result into consideration to decide the final answer as mentioned above. **Sentence Retrieval** We obtain the representation of each layer of the models, and ensemble the result at pair level to decide the final output. 
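The alignment-enhanced fine-tuning term described above is not spelled out in closed form here; one plausible reading, sketched below, adds a symmetric KL consistency term (R-Drop style) between the predictions for a training sentence and its translation on top of the usual task loss. The function name and the weight `alpha` are hypothetical, and the infoNCE variant mentioned in the text would replace the KL term analogously.

```python
import torch.nn.functional as F

def aligned_finetune_loss(logits_src, logits_tgt, labels, alpha=1.0):
    # Task loss on a sentence and its translation, plus a symmetric KL term
    # (R-Drop style) pulling their output distributions together. `alpha`
    # is a hypothetical trade-off weight, not a value from the paper.
    task = F.cross_entropy(logits_src, labels) + F.cross_entropy(logits_tgt, labels)
    log_p = F.log_softmax(logits_src, dim=-1)
    log_q = F.log_softmax(logits_tgt, dim=-1)
    kl = 0.5 * (F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
                + F.kl_div(log_q, log_p, log_target=True, reduction="batchmean"))
    return task + alpha * kl
```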
Finally, we achieve an average score of 85.8 over the 9 tasks, as shown in Table 2, which ranked 1st on the XTREME leaderboard on March 17, 2023. ## 6 Conclusion In this paper, we investigate multi-granularity alignment via contrastive learning for cross-lingual language model pre-training, where synonymous sequences and tokens in the parallel corpus are exploited to bridge the gap in semantic representation between languages, establishing comprehensive alignment at the token-sequence, sequence-sequence, and token-token levels in combination with MLM and TLM. Extensive experiments on the broad range of downstream tasks in XTREME show the effectiveness of the proposed model. At the same time, we describe the strategies behind our first-place entry on the XTREME leaderboard, hoping to inspire future work. ## Acknowledgements We would like to express our sincere appreciation to Fuli Luo, Xiangpeng Wei, Wei Wang, and Haiyang Xu for their valuable suggestions and contributions to this work.
2308.10947
Carroll black holes
Despite the absence of a lightcone structure, some solutions of Carroll gravity show black hole-like behaviour. We define Carroll black holes as solutions of Carroll gravity that exhibit Carroll thermal properties and have a Carroll extremal surface, notions introduced in our work. The latter is a Carroll analogue of a Lorentzian extremal surface. As examples, we discuss the Carroll versions of Schwarzschild, Reissner-Nordstroem, and BTZ black holes and black hole solutions of generic 1+1 dimensional Carroll dilaton gravity, including Carroll JT and Carroll Witten black holes.
Florian Ecker, Daniel Grumiller, Jelle Hartong, Alfredo Pérez, Stefan Prohazka, Ricardo Troncoso
2023-08-21T18:00:06Z
http://arxiv.org/abs/2308.10947v3
**Carroll black holes** ## Abstract Despite the absence of a lightcone structure, some solutions of Carroll gravity show black hole-like behaviour. We define Carroll black holes as solutions of Carroll gravity that exhibit Carroll thermal properties and have a Carroll extremal surface, notions introduced in our work. The latter is a Carroll analogue of a Lorentzian extremal surface. As examples, we discuss the Carroll versions of Schwarzschild, Reissner-Nordstrom, and BTZ black holes and black hole solutions of generic 1+1 dimensional Carroll dilaton gravity, including Carroll JT and Carroll Witten black holes. ###### Contents * 1 Introduction * 2 Actions and solutions of 2d Carroll dilaton gravity * 2.1 Generic 2d Carroll dilaton gravity * 2.1.1 First-order formulation * 2.1.2 Equations of motion * 2.1.3 Second-order formulation * 2.1.4 PSM formulation * 2.2 Carroll dilaton gravity as limit * 2.2.1 Dilaton gravity from spherical reduction * 2.2.2 Magnetic Carroll dilaton gravity from ultra-relativistic expansion * 2.2.3 Spherical reduction of magnetic Carroll gravity * 2.2.4 Magnetic Carroll dilaton gravity from a Hamiltonian perspective * 2.3 Solutions of Carroll dilaton gravity * 2.3.1 Constant dilaton vacua * 2.3.2 Linear dilaton vacua * 2.3.3 Carrollian Birkhoff theorem * 2.3.4 Singularities of Carrollian manifolds #### 2.3.5 Global aspects of Carroll thermal solutions * 3 Carroll thermal properties * 3.1 Energy * 3.2 Temperature * 3.3 Entropy and first law * 3.4 A word about dimensions * 3.5 Specific heat * 4 Carroll extremal surfaces * 4.1 Standard extremal surfaces in PSM formulation * 4.2 Carroll extremal surfaces in PSM formulation * 4.3 Carroll extremal surfaces in first- and second-order formulations * 4.4 Carroll black holes * 5 Examples for 2d Carroll black holes * 5.1 Carroll JT model * 5.2 Example of boundary conditions for CJT * 5.3 Carroll-Schwarzschild black hole, 2d perspective * 5.4 Carroll CGHS * 5.5 Carroll Witten black hole * 6 Carroll-Schwarzschild black hole, 4d perspective * 7 Charged and rotating Carroll black holes * 7.1 General remarks on charged Carroll black holes in 2d * 7.2 Carroll-Reissner-Nordstrom * 7.3 Carroll BTZ * 8 Summary and Outlook * A Carroll symmetries * A.1 Global Carroll symmetries * A.2 Local Carroll symmetries * B Lorentzian and Carrollian PSMs ## 1 Introduction After laying dormant for about half a century [1, 2], Carroll symmetries recently attracted considerable attention. Mathematically, Carroll spacetimes are equipped with degenerate (otherwise Euclidean) metrics with a one-dimensional kernel. For example, a four-dimensional (4d) Carroll spacetime has signature \((0,+,+,+)\). Geometrically, Carroll-signature collapses lightcones so that standard Lorentzian notions of horizons are un available. Physically, Carroll symmetries arise in a variety of contexts, including ultrarelativistic (speed of light to zero) and near-horizon limits, which explains their topicality, cf. [3] for a review. Indeed, Carroll symmetries emerge naturally on null surfaces and therefore find their place at null infinity [4], at black hole horizons [5, 6, 7], but also at spatial [8] and time-like infinity [9] in the extreme situation that one direction decouples. Due to its restricted mobility, Carrollian physics shares features with fractons [10, 11, 12, 13] (more precisely, particles with conserved electric charge and dipole moment) and is relevant in models with so-called "spacetime subsymmetries" [14, 15] (see also [16, 17]). 
Another condensed matter application of Carroll symmetries is flat bands [18] (the Fourier-transformed statement of collapsed lightcones). Additionally, Carroll symmetries may be relevant for cosmology [19], black hole microstates [20], CFT deformations [21, 22, 23, 24], and govern Bjorken flow [25]. Finally, the conformal Carroll algebra is isomorphic to BMS in one higher dimension [4], which led to numerous developments in flat space holography, especially in 2+1 bulk dimensions [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36] and in 3+1 bulk dimensions [37, 38, 39, 40]. Most applications mentioned above use Carroll symmetries as global spacetime symmetries, i.e., at the same level as special relativity uses Poincare symmetries. Given the success of general relativity as a theory of gravity, which renders spacetime symmetries local, it is tempting to also make Carroll symmetries local [41, 42] and consider theories of Carroll gravity, e.g., two-dimensional (2d) Carroll dilaton gravity [43, 44]. Whether one considers Carroll gravity as a limit of general relativity (see Fig. 1 below for various versions of such a limit) or as an intrinsic gravity model, one quickly faces a conundrum: some of the solutions of these theories behave in several ways like black holes (see, e.g., [17, 45]) and yet, obviously, there cannot be any black holes in Carroll gravity according to their standard definition in terms of an event horizon [46]. This lacuna motivates the present work and fits a longer-term theme of finding a better black hole definition that may apply to quantum theory. Our main goal is to define entities, which we shall refer to as "Carroll black holes", that arise either as the (magnetic) Carroll limit of black hole solutions of general relativity, like Schwarzschild-Tangherlini [47, 48], or as intrinsic solutions of Carroll gravity, like the Carroll versions of JT [49, 50] or Witten black holes [51, 52, 53]. A key entity that we define in our work is a Carroll extremal surface, a Carroll analogue of a Lorentzian extremal surface (see e.g. [54] for their classical and quantum definitions). Lorentzian extremal surfaces arise as bifurcation surfaces on Killing horizons and are thus part of eternal black hole geometries. Despite the absence of horizons in Carroll geometries, the analogue of a bifurcation surface still exists, and we declare it to be a defining property of Carroll black holes. We shall be more precise and detailed in the body of our paper, but for now, a schematic formula that summarizes our main definition is \[\text{Carroll black hole }=\text{ Carroll extremal surface }+\text{ Carroll thermal properties}\,. \tag{1}\] While the concepts developed in our work are independent of the dimension, we get a lot of mileage from considering simple models of Carroll gravity where the solution space is under complete analytic control. Therefore, in a substantial part of our work, we focus on 2d Carroll dilaton gravity models and, as a byproduct, construct their solution spaces. Busy readers happy to skip the details can continue with Section 6, which discusses many of the main features of this work in 3+1 dimensions. This paper is organized as follows. In Section 2, we derive all solutions of 2d Carroll dilaton gravity, also addressing different formulations of these models, different orders of limits, and a higher-dimensional perspective. 
In Section 3, we focus on the Carroll analogues of some thermal properties, mass, temperature, and entropy, highlighting subtleties with dimensionalities. In Section 4, we introduce a geometric key concept, Carroll extremal surfaces, to define Carroll black holes. In Section 5, we apply our results and definitions to the Carroll JT model, the Carroll limit of Schwarzschild, the Carroll CGHS model, and the Carroll Witten black hole. In Section 6, we elaborate on the 4d perspective of the Carroll-Schwarzschild black hole and the associated wormhole picture. In Section 7, we generalize our results to charged and rotating Carroll black holes, recovering BPS bounds well-known from the Lorentzian case. In Section 8, we conclude with an outlook of some research avenues suggested by our work. For readers confronted for the first time with global and local Carroll symmetries, we review them concisely in Appendix A. Finally, Appendix B describes a map between Lorentzian and Carrollian Poisson-sigma models. ## 2 Actions and solutions of 2d Carroll dilaton gravity In this Section, we construct all solutions of 2d Carroll dilaton gravity. We start with a review of generic 2d Carroll dilaton gravity in Subsection 2.1, including different formulations of the same theory. In Subsection 2.2, we obtain 2d Carroll dilaton gravity through different limits. In Subsection 2.3, we derive all classical solutions locally, exploiting methods similar to the Lorentzian case and finding similar results, especially a constant and a linear dilaton sector. We also address some global aspects of the solutions that hint already at special loci, which we shall later identify as Carroll extremal surfaces. ### Generic 2d Carroll dilaton gravity Generic 2d Carroll dilaton gravity was constructed in the first-order and Poisson-sigma model (PSM) formulations in [43]. Here, we recall this model and provide further details, in particular, the equations of motion and the second-order formulation. #### 2.1.1 First-order formulation Consider the 2d Carroll dilaton gravity bulk action [43] \[\boxed{I_{\text{1st}}[\omega,\,\tau,\,e,\,X,\,X_{\text{H}},\,X_{\text{P}}]= \frac{k}{2\pi}\,\int_{\mathcal{M}}\mathcal{L}} \tag{2}\] on a 2d manifold \(\mathcal{M}\) with coupling \(k\) and the Lagrange-2-form \[\boxed{\mathcal{L}=X\,\,\text{d}\omega+X_{\text{H}}\big{(}\,\text{d}\tau+ \omega\wedge e\big{)}+X_{\text{P}}\,\,\text{d}e+\mathcal{V}(X,\,X_{\text{H}}) \,\tau\wedge e} \tag{3}\] where the potential \(\mathcal{V}(X,\,X_{\text{H}})\) is an arbitrary function. The scalar fields are dilaton \(X\) and Lagrange multipliers for torsion constraints \(X_{\text{H}}\), \(X_{\text{P}}\). The 1-forms are spatial einbein \(e\), temporal einbein \(\tau\) and Carroll boost connection \(\omega\). The composite 2-forms are curvature \(\Omega=\text{d}\omega\), torsion \(T=\text{d}\tau+\omega\wedge e\), and intrinsic torsion \(\Theta=\text{d}e\) where the latter is defined as the part of the torsion independent of the boost connection. We summarily refer to the scalar fields as \(X^{I}=(X,X_{\text{H}},X_{\text{P}})\) and to the 1-forms as \(A_{I}=(\omega,\tau,e)\). The Lagrange-2-form (3) [and hence also the action (2)] is invariant under local Carroll boosts (see Appendix A for a summary of Carroll symmetries) \[\delta_{\lambda}X =0 \delta_{\lambda}X_{\text{H}} =0 \delta_{\lambda}X_{\text{P}} =X_{\text{H}}\,\lambda \tag{4a}\] \[\delta_{\lambda}\omega =\text{d}\lambda \delta_{\lambda}\tau =-e\,\lambda \delta_{\lambda}e =0\,. 
\tag{4b}\] The transformations (4) show that the dilaton \(X\), the field \(X_{\text{H}}\) and the spatial einbein \(e\) are Carroll boost invariant, while the 1-form \(\omega\) is the Carroll boost connection. The non-invariances of the temporal einbein \(\tau\) and the scalar \(X_{\text{P}}\) conspire such that the sum of the middle two terms in the Lagrange-2-form is Carroll boost-invariant. Moreover, the action (2) is invariant under two additional gauge symmetries \(\lambda_{\text{H}}\) and \(\lambda_{\text{P}}\) (we define \(\partial_{X}:=\partial/\partial X\) and \(\partial_{\text{H}}:=\partial/\partial X_{\text{H}}\)) \[\delta_{\lambda_{\text{H}}}X =0 \delta_{\lambda_{\text{H}}}X_{\text{H}} =0 \delta_{\lambda_{\text{H}}}X_{\text{P}} =\mathcal{V}\,\lambda_{\text{H}} \tag{5a}\] \[\delta_{\lambda_{\text{H}}}\omega =-(\partial_{X}\mathcal{V})\,e\lambda_{\text{H}} \delta_{\lambda_{\text{H}}}\tau =\mathrm{d}\lambda_{\text{H}}-(\partial_{\text{H}}\mathcal{V})\,e \lambda_{\text{H}} \delta_{\lambda_{\text{H}}}e =0 \tag{5b}\] and \[\delta_{\lambda_{\text{P}}}X =-X_{\text{H}}\,\lambda_{\text{P}} \delta_{\lambda_{\text{P}}}X_{\text{H}} =-\mathcal{V}\,\lambda_{\text{P}} \delta_{\lambda_{\text{P}}}X_{\text{P}} =0 \tag{6a}\] \[\delta_{\lambda_{\text{P}}}\omega =(\partial_{X}\mathcal{V})\,\tau\lambda_{\text{P}} \delta_{\lambda_{\text{P}}}\tau =\omega\,\lambda_{\text{P}}+(\partial_{\text{H}}\mathcal{V})\, \tau\lambda_{\text{P}} \delta_{\lambda_{\text{P}}}e =\mathrm{d}\lambda_{\text{P}} \tag{6b}\] On-shell they generate diffeomorphisms along a vector field \(\xi^{\mu}\) by virtue of the standard relations1\(\lambda_{\text{H}}=\tau_{\mu}\,\xi^{\mu}\) and \(\lambda_{\text{P}}=e_{\mu}\,\xi^{\mu}\): Footnote 1: Additionally, we need a compensating Carroll boost generated by \(\lambda=\omega_{\mu}\,\xi^{\mu}\). \[\delta_{\xi}X\approx\xi^{\mu}\partial_{\mu}X \delta_{\xi}\omega_{\mu} \approx\xi^{\nu}\partial_{\nu}\omega_{\mu}+\omega_{\nu}\partial_{ \mu}\xi^{\nu} \tag{7a}\] \[\delta_{\xi}X_{\text{H}}\approx\xi^{\mu}\partial_{\mu}X_{\text{H}} \delta_{\xi}\tau_{\mu} \approx\xi^{\nu}\partial_{\nu}\tau_{\mu}+\tau_{\nu}\partial_{\mu} \xi^{\nu}\] (7b) \[\delta_{\xi}X_{\text{P}}\approx\xi^{\mu}\partial_{\mu}X_{\text{P}} \delta_{\xi}e_{\mu} \approx\xi^{\nu}\partial_{\nu}e_{\mu}+e_{\nu}\partial_{\mu}\xi^{\nu} \tag{7c}\] The Lie variations above follow from the gauge symmetries together with the Carrollian equations of motion displayed below in (8) (\(\approx\) denotes on-shell equivalence). #### 2.1.2 Equations of motion Varying the action (2) with respect to all fields yields the equations of motion. \[\delta X \text{Carroll curvature:} \Omega=\mathrm{d}\omega=-\partial_{X}\mathcal{V}(X,\,X_{\text{H}} )\,\tau\wedge e \tag{8a}\] \[\delta X_{\text{H}} \text{Carroll torsion:} T=\mathrm{d}\tau+\omega\wedge e=-\partial_{\text{H}}\mathcal{V}(X,\,X_{ \text{H}})\,\tau\wedge e\] (8b) \[\delta X_{\text{P}} \text{No intrinsic torsion:} \Theta=\mathrm{d}e=0\] (8c) \[\delta\omega \text{Carroll metric:} \mathrm{d}X+X_{\text{H}}\,e=0\] (8d) \[\delta\tau \text{Carroll Casimir:} \mathrm{d}X_{\text{H}}+\mathcal{V}(X,\,X_{\text{H}})\,e=0\] (8e) \[\delta e \text{Auxiliary field:} \mathrm{d}X_{\text{P}}-\mathcal{V}(X,\,X_{\text{H}})\,\tau-X_{ \text{H}}\,\omega=0 \tag{8f}\] The first equation (8a) determines the Carroll curvature, which generally is non-zero but trivially vanishes whenever the potential is independent of the dilaton field. 
The second equation (8b) shows that on-shell Carroll torsion vanishes whenever the potential is independent of \(X_{\text{H}}\). The third equation (8c) reveals that there is never intrinsic torsion, regardless of how the potential is chosen. The fourth equation (8d) allows algebraically determining the spatial einbein (and hence the Carroll metric) in terms of the Carroll boost invariant scalars, \(X\) and \(X_{\text{H}}\). The fifth equation (8e) entails a conserved Casimir function, which we shall uncover below when discussing linear dilaton vacua. The final equation (8f) allows determining the auxiliary field \(X_{\text{P}}\) in terms of the potential \(\mathcal{V}(X,\,X_{\text{H}})\) and the geometric data extracted from the other five equations of motion or, alternatively, if \(X_{\text{P}}\) is gauge fixed suitably it provides an algebraic constraint relating \(\tau\) and \(\omega\), It can be useful to map solutions of different models to each other if they can be related by suitable Weyl rescalings. Therefore, consider a dilaton-dependent Weyl rescaling, parametrized by \(\alpha\), of the Carroll metric \[e\to\tilde{e}=e\,e^{\alpha(X)} \tag{9}\] that leaves invariant the dilaton, \(X\to\tilde{X}=X\). Such Weyl rescalings are compatible with the absence of intrinsic torsion (this would not be the case if the Weyl factor did depend on time). Consistency with Carroll boosts demands that also \(\tau\) scales in the same way as \(e\), \[\tau\to\tilde{\tau}=\tau\,e^{\alpha(X)}\,. \tag{10}\] The Carroll metric equation (8d) implies that \(X_{\text{\tiny H}}\) transforms inversely to \(e\), \[X_{\text{\tiny H}}\to\tilde{X_{\text{\tiny H}}}=X_{\text{\tiny H}}\,e^{-\alpha (X)} \tag{11}\] and consistency with Carroll boosts demands the same scaling for \(X_{\text{\tiny P}}\). \[X_{\text{\tiny P}}\to\tilde{X_{\text{\tiny P}}}=X_{\text{\tiny P}}\,e^{-\alpha (X)} \tag{12}\] The Carroll Casimir equation (8e) \[\text{d}X_{\text{\tiny H}}+\mathcal{V}(X,\,X_{\text{\tiny H}})\,e=0\qquad \leftrightarrow\qquad\text{d}\tilde{X_{\text{\tiny H}}}+\tilde{\mathcal{V}}( \tilde{X},\,\tilde{X_{\text{\tiny H}}})\,\tilde{e}=0 \tag{13}\] establishes the transformation behaviour of the potential \[\mathcal{V}(X,\,X_{\text{\tiny H}})\to\tilde{\mathcal{V}}(\tilde{X},\,\tilde{ X_{\text{\tiny H}}})=e^{-2\alpha(X)}\left(\mathcal{V}(X,\,X_{\text{\tiny H}})-X_{ \text{\tiny H}}^{2}\,\partial_{X}\alpha\right). \tag{14}\] The auxiliary field equation (8f) yields an inhomogeneous shift for \(\omega\), \[\omega\to\tilde{\omega}=\omega+(\partial_{X}\alpha)(X_{\text{\tiny H}}\tau+X_ {\text{\tiny P}}e)\,. \tag{15}\] The Carroll torsion and curvature equations (8a,8b) are compatible with all the transformations above, replacing consistently everywhere quantities with their tilde counterparts. Thus, dilaton-dependent Weyl rescalings (9) act pretty much in the same way as in the Lorentzian case (see [55]) and can be used to introduce or eliminate a kinetic potential function \(U(X)\) in potentials of the type (16) below. #### 2.1.3 Second-order formulation To set the stage for other discussions, we translate the first-order/PSM formulation to the second-order formulation. While the functional dependence of \(\mathcal{V}\) can, in principle, be arbitrary, we consider it to be of the form \[\mathcal{V}(X,\,X_{\text{\tiny H}})=-\frac{U(X)}{2}X_{\text{\tiny H}}^{2}+V(X) \tag{16}\] for some kinetic and potential function of the dilaton, \(U(X)\) and \(V(X)\), respectively. 
(This is also the most commonly used form of the potential in Lorentzian 2d dilaton gravity [55].) To get the second-order formulation, one needs to integrate out \(\omega\) and \(X_{\text{\tiny H}}\) by their own equations of motion (8). For this we introduce the dual vectors \(v^{\mu}\) and \(e^{\mu}\) satisfying \[v^{\mu}\tau_{\mu}=-1\qquad e^{\mu}e_{\mu}=1\qquad\delta^{\mu}_{\nu}=-v^{\mu} \tau_{\nu}+e^{\mu}e_{\nu}. \tag{17}\] The \(\omega\)-equation (8d) can be solved algebraically for \(X_{\text{\tiny H}}\) \[X_{\text{\tiny H}}=-e^{\mu}\partial_{\mu}X\, \tag{18}\] and also leads to a constraint \(v^{\mu}\partial_{\mu}X=0\). The \(X_{\text{\tiny H}}\)-equation (8b) is solved by splitting \[\omega=\tilde{\omega}+t+\rho\,e \tag{19}\] with a torsionless part \(\hat{\omega}\) satisfying \(\mathrm{d}\tau+\hat{\omega}\wedge e=0\), a torsion part \(t\), and an arbitrary undetermined function \(\rho\). The latter embodies the usual ambiguity that the Carrollian spin connection is not entirely determined by the equations of motion. Explicitly, the different parts read \[\hat{\omega}_{\mu}=-e^{\nu}\partial_{\mu}\tau_{\nu}+e^{\nu}\partial_{\nu}\tau_{ \mu}:=-2e^{\nu}\partial_{[\mu}\tau_{\nu]}\qquad\qquad t=U(X)\,X_{\mathrm{H}}\tau \tag{20}\] where the latter is determined by the \(X_{\mathrm{H}}\)-equation. Plugging these solutions into the first-order action (2) with (3) yields \[I_{\mathrm{1}^{\mathrm{st}}}=\frac{k}{2\pi}\int_{\mathcal{M}}\Big{(}X\;\mathrm{ d}\hat{\omega}-\rho\,\mathrm{d}X\wedge e+X_{\mathrm{P}}\,\mathrm{d}e+\Big{(}- \frac{U(X)}{2}\big{(}e^{\mu}\partial_{\mu}X\big{)}^{2}+V(X)\Big{)}\,\tau\wedge e \Big{)}\,. \tag{21}\] It can be seen that \(\rho\) plays the role of a Lagrange multiplier for the above-mentioned constraint \(v^{\mu}\partial_{\mu}X=0\). The vielbein postulates (see Appendix A.2) relate the connection \(\hat{\omega}\) to the Riemann curvature tensor by \(R^{\lambda}{}_{\sigma\mu\nu}=-v^{\lambda}e_{\sigma}(\mathrm{d}\hat{\omega})_{\mu\nu}\). On the other hand, using the change of basis \(\mathrm{d}x^{\mu}=-v^{\mu}\tau+e^{\mu}e\), we write \(\mathrm{d}\hat{\omega}=\partial_{[\mu}\hat{\omega}_{\nu]}\mathrm{d}x^{\mu} \wedge\mathrm{d}x^{\nu}=2\partial_{[\mu}\hat{\omega}_{\nu]}e^{\mu}v^{\nu}\tau \wedge e\) implying \(\mathrm{d}\hat{\omega}=\frac{R}{2}\tau\wedge e\) where we defined2\(R=2e^{\mu}e^{\nu}R^{\lambda}{}_{\mu\nu\lambda}\). Finally, we rewrite \(\mathrm{d}e=2e^{\mu}v^{\nu}\partial_{[\mu}e_{\nu]}\tau\wedge e=K\tau\wedge e\) in terms of the trace of the extrinsic curvature \(K\) and define the volume form \(\tau\wedge e=\tau_{\mu}e_{\nu}\,\mathrm{d}x^{\mu}\wedge\mathrm{d}x^{\nu}= \varepsilon^{\mu\nu}\tau_{\mu}e_{\nu}\,\mathrm{d}^{2}x=\det(\tau,e)\;\mathrm{d }^{2}x\). Footnote 2: This relation between the Riemann tensor and the curvature scalar works in 2d only. In higher dimensions, one can still define a Carroll invariant curvature scalar in terms of Cartan variables (see, e.g., [56]) which, however, cannot be obtained as a contraction of the Riemann tensor. To be explicit, the Carroll invariant curvature scalar in \(D>2\) would read \(R=-2v^{\mu}e_{\alpha}^{\alpha}R_{\mu\nu}{}^{\alpha}+e_{\alpha}^{\mu}e_{\nu}^{ \nu}R_{\mu\nu}^{\mu}\), while the Riemann tensor can be expressed (see [41]) as \(R^{\nu}{}_{\sigma\mu\nu}=-v^{\nu}e_{\alpha\sigma}R_{\mu\nu}{}^{\alpha}+e_{ \sigma}^{\mu}e_{\sigma\rho}R_{\mu\nu}^{\mu}\). 
A naive contraction would give \(2e^{\mu}e_{\alpha}^{\nu}e_{\alpha}^{\alpha}R^{\lambda}{}_{\mu\nu\lambda}=-2v ^{\mu}e_{\alpha}^{\nu}R_{\mu\nu}{}^{\alpha}+2e_{\alpha}^{\mu}e_{\nu}^{\nu}R_{ \mu\nu}^{\alpha}\) which is not Carroll invariant except in 2d. Then, the last term vanishes and \(R=2e^{\mu}e^{\nu}R^{\lambda}{}_{\mu\lambda\nu}\). Inserting these results into the first-order action (21) yields the second-order Carroll dilaton gravity action \[I_{\mathrm{2}^{\mathrm{nd}}}[e,\tau,\rho,X_{\mathrm{P}},X]=\frac{k}{4\pi}\int _{\mathcal{M}}\,\mathcal{L}_{\mathrm{2}^{\mathrm{nd}}} \tag{22}\] with \[\boxed{\mathcal{L}_{\mathrm{2}^{\mathrm{nd}}}=\mathrm{d}^{2}x\,\det(\tau,e) \left(XR+2\rho\,v^{\mu}\partial_{\mu}X+2X_{\mathrm{P}}K-U(X)\big{(}e^{\mu} \partial_{\mu}X\big{)}^{2}+2V(X)\right)\,.} \tag{23}\] As we will show later, comparing with the literature (e.g. [56, 57, 58]) allows identifying this action with the magnetic Carroll theory. This action is Carroll invariant if the Lagrange multipliers transform under Carroll boosts as \[\delta_{\lambda}\rho=-U\lambda e^{\mu}\partial_{\mu}X+\nabla_{\mu}(e^{\mu} \lambda)\qquad\qquad\delta_{\lambda}X_{\mathrm{P}}=-\lambda e^{\mu}\partial_{ \mu}X \tag{24}\] where \(\nabla\) is the connection associated with \(\hat{\omega}\) via the vielbein postulates. The left equality is compatible with the transformation behaviour of \(\omega\) in (5) by using (19). The right equality agrees with the corresponding on-shell transformation in the first-order formalism (4). #### 2.1.4 PSM formulation There is another, gauge-theoretic, formulation of 2d Carroll dilaton gravity that lies at the heart of the original construction in [43]. Since some statements are phrased succinctly in this PSM formulation, we briefly summarize its main aspects. PSMs are topological, non-linear gauge theories in 2d [59, 60] and arise as the most general consistent deformation of abelian BF-theories [61]. The PSM bulk action [43] \[\boxed{I_{\text{\tiny PSM}}[A_{I},X^{I}]=\frac{k}{2\pi}\,\int_{\mathcal{M}}\left( X^{I}\,\text{d}A_{I}+\frac{1}{2}\,P^{IJ}(X^{K})\,A_{I}\wedge A_{J}\right)} \tag{25}\] with the field content \[A_{I}=(\omega,\,\tau,\,e)\qquad\qquad X^{I}=(X,\,X_{\text{\tiny H}},\,X_{ \text{\tiny F}}) \tag{26}\] and the Poisson tensor \[P^{IJ}=\begin{pmatrix}0&0&X_{\text{\tiny H}}\\ 0&0&\mathcal{V}(X,\,X_{\text{\tiny H}})\\ -X_{\text{\tiny H}}&-\mathcal{V}(X,\,X_{\text{\tiny H}})&0\end{pmatrix} \tag{27}\] is equivalent to the first-order action (2). The scalar fields \(X^{I}\) are target space coordinates of a Poisson manifold, and for each of them, there is an associated gauge connection 1-form \(A_{I}\). The non-linear Jacobi identities \[P^{IL}\partial_{L}P^{JK}+P^{JL}\partial_{L}P^{KI}+P^{KL}\partial_{L}P^{IJ}=0 \tag{28}\] hold, essentially because the potential depends only on Carroll boost-invariant combination of the target space coordinates \(X^{I}\), viz., \(X\) and \(X_{\text{\tiny H}}\). The action (25) is invariant under the non-linear gauge symmetries \[\delta_{\lambda}X^{J}=\lambda_{I}\,P^{IJ}\qquad\qquad\delta_{\lambda}A_{I}= \text{d}\lambda_{I}+\left(\partial_{I}P^{JK}\right)A_{J}\lambda_{K}\,. \tag{29}\] It is easy to verify that (29) with the choices above is equivalent to (4)-(6). PSMs have no local physical degrees of freedom. So all physical excitations can be considered holographically as boundary states. 
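As a quick cross-check, the following SymPy sketch (our own illustration, not part of the original derivation) verifies the Jacobi identities (28) for the Poisson tensor (27), with the potential \(\mathcal{V}(X,\,X_{\text{H}})\) represented by an unevaluated function of the boost-invariant coordinates only:

```python
# Sketch: verify the non-linear Jacobi identity (28) for the Poisson tensor (27)
# with an arbitrary potential V(X, X_H) that does not depend on X_P.
import sympy as sp

X, XH, XP = sp.symbols('X X_H X_P')
coords = [X, XH, XP]
V = sp.Function('V')(X, XH)          # boost-invariant potential

P = sp.Matrix([[0,   0,  XH],
               [0,   0,   V],
               [-XH, -V,   0]])

def jacobi(I, J, K):
    # P^{IL} d_L P^{JK} + P^{JL} d_L P^{KI} + P^{KL} d_L P^{IJ}
    return sum(P[I, L]*sp.diff(P[J, K], coords[L])
               + P[J, L]*sp.diff(P[K, I], coords[L])
               + P[K, L]*sp.diff(P[I, J], coords[L]) for L in range(3))

assert all(sp.simplify(jacobi(I, J, K)) == 0
           for I in range(3) for J in range(3) for K in range(3))
```

The check goes through precisely because no entry of the Poisson tensor depends on \(X_{\text{P}}\); a potential depending on \(X_{\text{P}}\) would spoil (28).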
Since the Poisson tensor is anti-symmetric and hence must have an even rank, a Poisson tensor associated with a 3-dimensional target space like in (27) necessarily has a non-trivial kernel. Physically, this kernel corresponds to a conserved Casimir that can be interpreted as mass, as we shall see in Section 3.1. ### Carroll dilaton gravity as limit While it is not necessary to do so, quite often it can be helpful to consider Carroll gravity as a singular limit of relativistic gravity, e.g., to build some intuition about the theory and its solution space. Additionally, it can be expedient to think of (some models of) 2d dilaton gravity as coming from the dimensional reduction of higher-dimensional Einstein gravity. We address both issues in this Subsection, beginning with the latter. To help orient the reader, we provide Fig. 1 which also highlights the consistency, i.e., commutativity, of the various ways we take the limits. In particular, we show that we can first dimensionally reduce and then take the Carroll limit, or the other way around. Readers who do not care about the details of why this is true may take a glance at Fig. 1 and then skip ahead to Subsection 2.3, where we provide all solutions of Carroll 2d dilaton gravity. #### 2.2.1 Dilaton gravity from spherical reduction For this work to be self-contained, we review the standard spherical reduction of the Einstein-Hilbert action. Figure 1: Left (orange): Lorentzian theories. Right (yellow): Carroll theories. Take Einstein gravity in \(D>3\) spacetime dimensions \[I_{\mbox{\tiny EH}}=\frac{c^{3}}{16\pi G}\,\int\mathrm{d}^{D}x\sqrt{-g^{(D)}}\, \mathcal{R}^{(D)} \tag{30}\] and impose spherical symmetry, i.e., assume the isometry group of the metric has \(SO(D-1)\) as a (not necessarily proper) subgroup, with \((D-2)\)-spheres as orbits. Without loss of generality, pick coordinates adapted to spherical symmetry, \[\mathrm{d}s^{2}=g_{\alpha\beta}(x^{\gamma})\,\mathrm{d}x^{\alpha}\,\mathrm{d}x ^{\beta}+\Phi^{2}(x^{\gamma})\,\,\mathrm{d}\Omega^{2}_{\mathcal{S}^{(D-2)}} \tag{31}\] where \(\Phi\) is a scalar function depending on the first two coordinates only and \(\alpha,\beta,\gamma=0,1\). The quantity \(\mathrm{d}\Omega^{2}_{\mathcal{S}^{(D-2)}}\) denotes the metric of the round \((D-2)\)-sphere. This ansatz splits the higher-dimensional metric into a 2d metric \(g_{\alpha\beta}\) and a 2d scalar field \(\Phi\), which is precisely the field content of 2d dilaton gravity. Expressing the Ricci scalar \(\mathcal{R}^{(D)}\) in terms of these lower-dimensional quantities yields \[\mathcal{R}^{(D)}=\mathcal{R}-2(D-2)\frac{\nabla^{2}_{\mbox{\tiny(LC)}}\Phi}{ \Phi}-(D-3)(D-2)\frac{(\partial\Phi)^{2}}{\Phi^{2}}+\frac{(D-3)(D-2)}{\Phi^{2} }\,, \tag{32}\] where \(\mathcal{R}\) and \(\nabla_{\mbox{\tiny(LC)}}\) are Ricci scalar and covariant derivative associated to the 2d Levi-Civita connection. The volume form splits into a product of a scalar factor, the 2d volume form, and a volume factor associated with the round \((D-2)\)-sphere, \[\mathrm{d}^{D}x\sqrt{-g^{(D)}}=\Phi^{D-2}\,\,\mathrm{d}^{2}x\sqrt{-g}\,\, \mathrm{d}^{D-2}y\sqrt{g^{(D-2)}}\,. \tag{33}\] To recover 2d dilaton gravity in standard conventions, we redefine \[\Phi=\frac{D-2}{\lambda}X^{\frac{1}{D-2}} \tag{34}\] with a constant \(\lambda\) of inverse length dimension (to get dimensionless dilaton \(X\)). 
After integrating over the \((D-2)\)-sphere, and up to total derivatives, the action (30) reduces to \[I_{\mbox{\tiny sph.red.}}[g_{\alpha\beta},X]=\frac{k_{\mbox{\tiny rel}}}{4\pi }\,c^{3}\,\int\mathrm{d}^{2}x\sqrt{-g}\left(X\mathcal{R}-U(X)(\partial X)^{2} +2V(X)\right) \tag{35}\] with the potentials \[U(X)=-\frac{D-3}{(D-2)\,X}\qquad\qquad V(X)=\lambda^{2}\,\frac{D-3}{2(D-2)}\,X ^{\frac{D-4}{D-2}} \tag{36}\] and the coupling constant \[k_{\mbox{\tiny rel}}=\frac{\pi^{(D-1)/2}}{2\Gamma((D-1)/2)\,G}\left(\frac{D-2 }{\lambda}\right)^{(D-2)}\,. \tag{37}\] In summary, we can describe Schwarzschild-Tangherlini black holes in terms of 2d dilaton gravity, in the sense that the classical solutions of (35) with (36) are in one-to-one correspondence with spherically symmetric solutions to \(D\)-dimensional Einstein gravity (30). The next step is to take the Carroll limit of the action (35) to obtain Carroll dilaton gravity. #### 2.2.2 Magnetic Carroll dilaton gravity from ultra-relativistic expansion We start by considering the so-called magnetic limit of Lorentzian 2d dilaton gravity given by the second-order action (35), except that we allow here for arbitrary potentials \(U(X)\) and \(V(X)\). Using the conventions of [65], we switch to pre-Carrollian variables \[g_{\mu\nu}=-c^{2}T_{\mu}T_{\nu}+E_{\mu}E_{\nu} g^{\mu\nu}=-\frac{1}{c^{2}}V^{\mu}V^{\nu}+E^{\mu}E^{\nu} \tag{38}\] where \[V^{\mu}T_{\mu}=-1 E^{\mu}E_{\mu}=1 E^{\mu}E_{\nu}-V^{\mu}T_{\nu}=\delta^{\mu}_{\nu} \tag{39}\] and the other contractions are zero. The fields \(T_{\mu}\), \(V^{\mu}\), \(E_{\mu}\), \(E^{\mu}\) and \(X\) are assumed to be Taylor-expandable in \(c^{2}\). In particular, we have to leading-order (LO) the Carrollian fields \[V^{\mu}=v^{\mu}+\mathcal{O}(c^{2}) E_{\mu}=e_{\mu}+\mathcal{O}(c^{2}) X=X+\mathcal{O}(c^{2})\,, \tag{40}\] where we denoted the leading-order term in the dilaton expansion by the same letter. As the subleading terms in the \(X\)-expansion do not play a role in the following and both sides are invariant under Carroll boosts this is just a convenient definition. The first goal is to rewrite all quantities in the relativistic action (35) in terms of the variables on the left-hand side of (40). The relativistic Levi-Civita connection is thereby organized in a specific way, \[\overset{\text{(LC)}}{\Gamma}{}^{\rho}{}_{\mu\nu}=\frac{1}{2}g^{\rho\alpha} \left(\partial_{\mu}g_{\alpha\nu}+\partial_{\nu}g_{\alpha\mu}-\partial_{\alpha }g_{\mu\nu}\right)=\frac{1}{c^{2}}\overset{\text{(--2)}}{C}{}^{\rho}{}_{\mu\nu} +\tilde{C}^{\rho}{}_{\mu\nu}+S^{\rho}{}_{\mu\nu}+c^{2}\overset{\text{(2)}}{C} {}^{\rho}{}_{\mu\nu} \tag{41}\] where all orders in \(c^{2}\) transform tensorially except the \(c^{0}\) part. This part is further split into a connection \(\tilde{C}^{\rho}{}_{\mu\nu}\) satisfying \[\overset{\text{(\ref{eq:C})}}{\nabla}_{\nu}E_{\mu}=0 \overset{\text{(\ref{eq:C})}}{\nabla}_{\nu}V^{\mu}=0 \tag{42}\] and a tensorial part \(S^{\rho}{}_{\mu\nu}\). In this way, a true Carrollian connection is obtained at leading order in the limit \(c\to 0\) (for further details see [65]). While one has to keep in mind that the pre-Carrollian variables \(E_{\mu}\) and \(V^{\mu}\) are still Lorentz-covariant, it is useful to treat them as if they were Carrollian in order to determine the split of the \(\mathcal{O}(c^{0})\) part of the relativistic Levi-Civita connection into \(\tilde{C}^{\rho}{}_{\mu\nu}\) and \(S^{\rho}{}_{\mu\nu}\). 
In other words, we introduce a pre-Carrollian connection3\(\tilde{\Omega}_{\mu}\) on the frame bundle which satisfies the Carrollian vielbein postulates Footnote 3: In 2d, there is no rotational part of the connection and the boost part has only one component denoted by the one-form \(\tilde{\Omega}\). \[\partial_{\mu}T_{\nu}-\tilde{C}^{\lambda}{}_{\mu\nu}\,T_{\lambda} +\tilde{\Omega}_{\mu}E_{\nu} =0 \partial_{\mu}E_{\nu}-\tilde{C}^{\lambda}{}_{\mu\nu}\,E_{\lambda} =0 \tag{43}\] \[\partial_{\mu}V^{\nu}+\tilde{C}^{\nu}{}_{\mu\lambda}\,V^{\lambda} =0 \partial_{\mu}E^{\nu}+\tilde{C}^{\nu}{}_{\mu\lambda}E^{\lambda}+V^ {\nu}\tilde{\Omega}_{\mu} =0 \tag{44}\] where the off-diagonal equations are compatible with (42). Solving for \(\tilde{C}^{\rho}{}_{\mu\nu}\) in terms of \(\tilde{\Omega}_{\mu}\) and the vielbein leads to \[\tilde{C}^{\lambda}{}_{\mu\nu} =-V^{\lambda}\big{(}\partial_{\mu}T_{\nu}+\tilde{\Omega}_{\mu}E_{ \nu}\big{)}+\frac{1}{2}\,\Pi^{\lambda\rho}\big{(}\partial_{\mu}\Pi_{\nu\rho}+ \partial_{\nu}\Pi_{\mu\rho}-\partial_{\rho}\Pi_{\mu\nu}\big{)}-\Pi^{\lambda \rho}T_{\nu}\mathcal{K}_{\mu\rho}\] \[=-V^{\lambda}\big{(}\partial_{\mu}T_{\nu}+\tilde{\Omega}_{\mu}E_{ \nu}\big{)}+E^{\lambda}\partial_{\mu}E_{\nu} \tag{45}\] where \(\Pi_{\mu\nu}=E_{\mu}E_{\nu}\) and \({\cal K}_{\mu\rho}\) is the extrinsic curvature. In 2d, the latter object only has one component, \[{\cal K}_{\mu\nu}={\cal K}E_{\mu}E_{\nu}=-\frac{1}{2}{\cal L}_{V}(E_{\mu}E_{\nu} )\qquad\Rightarrow\qquad{\cal K}=2E^{\mu}V^{\nu}\partial_{[\mu}E_{\nu]}. \tag{46}\] As usual in a second-order formulation, \(\tilde{\Omega}_{\mu}\) is not an independent dynamical variable but is determined in terms of the vielbein such that its torsion vanishes. However, in a (pre-) Carrollian geometry, this can only be achieved to a certain degree as there are torsion components independent of \(\tilde{\Omega}_{\mu}\), corresponding to intrinsic torsion. One, therefore, sets only those torsion components to zero that depend on the pre-Carrollian connection. In 2d, this leads to only one equation, \[{\rm d}T+\tilde{\Omega}\wedge E=0 \tag{47}\] solved by \[\tilde{\Omega}_{\mu}=-2E^{\nu}\partial_{[\mu}T_{\nu]}+\gamma(x^{\alpha})E_{\mu }. \tag{48}\] Here, \(\gamma\) is some arbitrary function representing the remaining freedom in the pre-Carrollian connection [56, 58]. Inserting into (45) yields \[\tilde{C}^{\lambda}{}_{\mu\nu}=-V^{\lambda}\partial_{(\mu}T_{\nu)}-V^{\lambda} T_{(\mu}{\cal L}_{V}T_{\nu)}+E^{\lambda}\partial_{\mu}E_{\nu}-\gamma V^{\lambda}E_{ \mu}E_{\nu} \tag{49}\] which is not torsion-free, but contains the expected intrinsic torsion \(\tilde{C}^{\rho}{}_{[\mu\nu]}=E^{\rho}\partial_{[\mu}E_{\nu]}\) even in the limit \(c\to 0\). While the arbitrary function \(\gamma\) is set to zero in [65], we keep it, for now, to see how it contributes at later stages of the expansion. We will see below that \(\gamma\) does not play a role. If we go back to (41) and use the pre-Carrollian variables we get at order \(c^{0}\) \[\left.\stackrel{{\mbox{\tiny(LC)}}}{{\Gamma}}{}^{\lambda}{}_{ \mu\nu}\right|_{c^{0}}=-V^{\lambda}\partial_{(\mu}T_{\nu)}-V^{\lambda}T_{(\mu}{ \cal L}_{V}T_{\nu)}+E^{\lambda}\partial_{\mu}E_{\nu}+E^{\lambda}E^{\alpha}T_{ \nu}{\cal K}_{\mu\alpha}=:\tilde{C}^{\lambda}{}_{\mu\nu}+S^{\lambda}{}_{\mu\nu}\,. \tag{50}\] Using (49), the expansion of the Levi-Civita connection (41) has coefficients \[\left.\stackrel{{\mbox{\tiny(--2)}}}{{C}}{}^{\rho}{}_{ \mu\nu}=-V^{\rho}{\cal K}_{\mu\nu}\right. 
\tag{51}\] \[\tilde{C}^{\rho}{}_{\mu\nu}=-V^{\rho}\partial_{(\mu}T_{\nu)}-V^{ \rho}T_{(\mu}{\cal L}_{V}T_{\nu)}+E^{\rho}\partial_{\mu}E_{\nu}-\gamma V^{\rho }E_{\mu}E_{\nu}\] (52) \[S^{\rho}{}_{\mu\nu}=E^{\rho}E^{\lambda}T_{\nu}{\cal K}_{\mu \lambda}+\gamma V^{\rho}E_{\mu}E_{\nu}\] (53) \[\left.\stackrel{{\mbox{\tiny(2)}}}{{C}}{}^{\rho}{}_{ \mu\nu}=-E^{\rho}E^{\alpha}T_{(\mu}({\rm d}T)_{\nu)\alpha}. \tag{54}\] The Riemann tensor associated with the Levi-Civita connection is defined by \[{\cal R}^{\lambda}{}_{\mu\nu\sigma}=\partial_{\nu}\stackrel{{ \mbox{\tiny(LC)}}}{{\Gamma}}{}^{\lambda}{}_{\sigma\mu}+\stackrel{{ \mbox{\tiny(LC)}}}{{\Gamma}}{}^{\lambda}{}_{\nu\beta}\stackrel{{ \mbox{\tiny(LC)}}}{{\Gamma}}{}^{\beta}{}_{\sigma\mu}-\left(\nu \leftrightarrow\sigma\right). \tag{55}\] We work directly with the Ricci tensor \({\cal R}_{\mu\nu}:={\cal R}^{\lambda}{}_{\mu\lambda\nu}\) in the following. It can be organized as \[{\cal R}_{\mu\nu}=\frac{1}{c^{4}}\stackrel{{\mbox{\tiny(--4)}}}{{R} }_{\mu\nu}+\frac{1}{c^{2}}\stackrel{{\mbox{\tiny(--2)}}}{{R}}_{ \mu\nu}+\stackrel{{\mbox{\tiny(0)}}}{{R}}_{\mu\nu}+c^{2}\stackrel{{ \mbox{\tiny(2)}}}{{R}}_{\mu\nu}+c^{4}\stackrel{{\mbox{\tiny(4)}}} {{R}}_{\mu\nu} \tag{56}\] with the coefficients up to \({\cal O}(c^{2})\) given by \[\left.\stackrel{{\mbox{\tiny(--4)}}}{{R}}_{\mu\nu}=0\right. \tag{57}\] \[\left.\stackrel{{\mbox{\tiny(--2)}}}{{R}}_{\mu\nu}= \stackrel{{\mbox{\tiny(--5)}}}{{\nabla}}{}_{\lambda} \stackrel{{\mbox{\tiny(--2)}}}{{C}}{}^{\lambda}{}_{\nu\mu}-2\tilde{C}^{ \alpha}{}_{[\nu\lambda]}\stackrel{{\mbox{\tiny(--2)}}}{{C}}{}^{ \lambda}{}_{\alpha\mu}+S^{\lambda}{}_{\lambda\beta}\stackrel{{\mbox{ \tiny(--2)}}}{{C}}{}^{\beta}{}_{\nu\mu}-S^{\alpha}{}_{\nu\lambda}\stackrel{{ \mbox{\tiny(--2)}}}{{C}}{}^{\lambda}{}_{\alpha\mu}\right.\] (58) \[\left.\stackrel{{\mbox{\tiny(0)}}}{{R}}_{\mu\nu}= \stackrel{{\mbox{\tiny(--5)}}}{{R}}_{\mu\nu}+\stackrel{{ \mbox{\tiny(--5)}}}{{\nabla}}{}_{\lambda}S^{\lambda}{}_{\nu\mu}-\stackrel{{ \mbox{\tiny(--5)}}}{{\nabla}}{}_{\nu}S^{\lambda}{}_{\lambda\mu}-2\tilde{C}^{ \lambda}{}_{[\nu\beta]}S^{\beta}{}_{\lambda\mu}-\stackrel{{\mbox{ \tiny(--2)}}}{{C}}{}^{\beta}{}_{\nu\beta}\stackrel{{\mbox{\tiny(--2)}}}{{ \mu}}-\stackrel{{\mbox{\tiny(--2)}}}{{C}}{}^{\lambda}{}_{\nu\beta} \stackrel{{\mbox{\tiny(--2)}}}{{C}}{}^{\beta}{}_{\lambda\mu}\] (59) \[\left.\stackrel{{\mbox{\tiny(2)}}}{{R}}_{\mu\nu}= \stackrel{{\mbox{\tiny(--5)}}}{{\nabla}}{}_{\lambda}\stackrel{{ \mbox{\tiny(--2)}}}{{C}}{}^{\lambda}{}_{\nu\mu}-2\tilde{C}^{\alpha}{}_{[ \nu\lambda]}\stackrel{{\mbox{\tiny(--2)}}}{{C}}{}^{\lambda}{}_{ \alpha\mu}-\stackrel{{\mbox{\tiny(--2)}}}{{C}}{}^{\lambda}{}_{\nu \beta}S^{\beta}{}_{\lambda\mu}-\stackrel{{\mbox{\tiny(--2)}}}{{C}}{} _{\alpha\mu}S^{\alpha}{}_{\nu\lambda}. 
\tag{60}\] We use these expressions to compute the Ricci scalar expansion \[\mathcal{R}=-\frac{1}{c^{4}}V^{\mu}V^{\nu}\overset{\text{\tiny$(2)$}}{R }_{\mu\nu}+\frac{1}{c^{2}}\Big{(}E^{\mu}E^{\nu}\overset{\text{\tiny$(2)$}}{R}_{ \mu\nu}- V^{\mu}V^{\nu}\overset{\text{\tiny$(0)$}}{R}_{\mu\nu}\Big{)}\] \[-V^{\mu}V^{\nu}\overset{\text{\tiny$(2)$}}{R}_{\mu\nu}+E^{\mu}E^{ \nu}\overset{\text{\tiny$(0)$}}{R}_{\mu\nu}+\mathcal{O}(c^{2}) \tag{61}\] leading to \[\overset{\text{\tiny$(-4)$}}{R} =0 \tag{62}\] \[\overset{\text{\tiny$(-2)$}}{R} =-\overset{\text{\tiny$(C)$}}{\nabla}_{\mu}(V^{\mu}\mathcal{K})+ \mathcal{K}^{2}\] (63) \[\overset{\text{\tiny$(0)$}}{R} =E^{\mu}E^{\nu}\overset{\text{\tiny$(C)$}}{R}_{\mu\nu}-\overset{ \text{\tiny$(C)$}}{\nabla}_{\rho}\Big{(}E^{\rho}\overset{\text{\tiny$(C)$}}{ \nabla}_{\mu}E^{\mu}\Big{)}+\overset{\text{\tiny$(C)$}}{\nabla}_{\rho}\big{(}V^ {\rho}\gamma\big{)}-\mathcal{K}\gamma\] (64) \[=E^{\mu}E^{\nu}\overset{\text{\tiny$(C)$}}{R}_{\mu\nu}\Big{|}_{ \gamma=0}-\overset{\text{\tiny$(C)$}}{\nabla}_{\rho}\Big{(}E^{\rho}\overset{ \text{\tiny$(C)$}}{\nabla}_{\mu}E^{\mu}\Big{)}. \tag{65}\] In the last equality, we used that the last two terms in (64) cancel with the \(\gamma\)-dependence in the first term such that all the \(\gamma\)-dependence in the last line is in the total derivative term. In the following, we use that the Carroll compatible derivative allows writing a total divergence as \[\int\mathrm{d}^{2}x\det(T,E)\overset{\text{\tiny$(C)$}}{\nabla}_{\mu}X^{\mu}= -\int\mathrm{d}^{2}x\det(T,E)\,\mathcal{K}T_{\mu}X^{\mu} \tag{66}\] up to boundary terms [65]. We are ready to expand in \(c^{2}\) and rewrite the relativistic dilaton gravity action (35) in terms of the pre-Carrollian variables. \[I_{\text{\tiny dil}}=c^{2}\overset{\text{\tiny$(2)$}}{I}+c^{4}\overset{\text {\tiny$(4)$}}{I}+\mathcal{O}(c^{6}) \tag{67}\] The first two terms in this expansion are \[\overset{\text{\tiny$(2)$}}{I} =\frac{k_{\text{\tiny rel}}}{4\pi}\int\mathrm{d}^{2}x\,\det(T,E) \,\Big{(}\mathcal{K}V^{\mu}\partial_{\mu}X+U(V^{\mu}\partial_{\mu}X)^{2}\Big{)} \tag{68}\] \[\overset{\text{\tiny$(4)$}}{I} =\frac{k_{\text{\tiny rel}}}{4\pi}\int\mathrm{d}^{2}x\,\det(T,E) \,\Big{(}XE^{\mu}E^{\nu}\overset{\text{\tiny$(C)$}}{R}_{\mu\nu}\Big{|}_{\gamma =0}-U(X)(E^{\mu}\partial_{\mu}X)^{2}+2V(X)\] \[-X\overset{\text{\tiny$(C)$}}{\nabla}_{\rho}\big{(}E^{\rho} \overset{\text{\tiny$(C)$}}{\nabla}_{\mu}E^{\mu}\big{)}\Big{)}. \tag{69}\] To extract the magnetic action, we rewrite \(\overset{\text{\tiny$(2)$}}{I}\) as \[\overset{\text{\tiny$(2)$}}{I} =\frac{k_{\text{\tiny rel}}}{4\pi}\int\mathrm{d}^{2}x\,\det(T,E) \,\Big{(}\frac{1}{2}\mathcal{K}V^{\mu}\partial_{\mu}X+\frac{1}{2}\big{(} \mathcal{K}+2UV^{\mu}\partial_{\mu}X\big{)}V^{\mu}\partial_{\mu}X\Big{)}\] \[=\frac{k_{\text{\tiny rel}}}{4\pi}\int\mathrm{d}^{2}x\,\det(T,E) \,\Big{(}-c^{4}\frac{2\mathcal{K}}{V^{\mu}\partial_{\mu}X}X_{p}^{2}+c^{2}2X_{ p}\mathcal{K}\] \[-c^{4}\frac{2V^{\mu}\partial_{\mu}X}{\mathcal{K}+2UV^{\mu} \partial_{\mu}X}\rho^{2}+c^{2}2\rho V^{\mu}\partial_{\mu}X\Big{)} \tag{70}\] where on-shell evaluation of the auxiliary fields \(X_{p}\) and \(\rho\) reproduces the action in the first line. The introduction of these auxiliary fields effectively redistributes the powers of \(c\) such that \(\overset{\text{\tiny$(2)$}}{I}\) is converted into terms contributing to \(\overset{\text{\tiny$(4)$}}{I}\) and \(\overset{\text{\tiny$(6)$}}{I}\). 
The magnetic theory is obtained by rescaling Newton's constant as \(G\to c^{4}G_{M}\), defining \(k:=k_{\text{\tiny rel}}c^{4}\) and taking the limit \(c\to 0\). Effectively, this picks out the action \(\overset{\text{\tiny$(4)$}}{I}\) together with the auxiliary field contributions from \(\overset{(2)}{I}\) at leading order. In this case the connection \(\tilde{C}\) reduces to the Carroll compatible connection \(\Gamma\) satisfying \(\nabla(e_{\mu}e_{\nu})=0=\nabla v^{\mu}\), where \(\nabla\) is the associated derivative. The dust settles and we obtain the magnetic Carroll dilaton gravity action \[I^{L}_{\text{\tiny mag}}[e,\tau,\rho,X_{\text{\tiny P}},X]=\frac{k}{4\pi}\int \,\mathcal{L}^{L}_{\text{\tiny mag}} \tag{71}\] with \[\mathcal{L}^{L}_{\text{\tiny mag}}=\text{d}^{2}x\,\det(\tau,e) \left(X(e^{\mu}e^{\nu}R_{\mu\nu}\big{|}_{\gamma=0}-\nabla_{\mu}a^{\mu})+2\rho \,v^{\mu}\partial_{\mu}X+2X_{\text{\tiny P}}K\right.\] \[\left.-U(X)(e^{\mu}\partial_{\mu}X)^{2}+2V(X)\right) \tag{72}\] where it was assumed that the potential \(V\) does not scale with \(c\) and the leading term of \(\mathcal{K}\) is denoted by \(K\). The field \(X_{\text{\tiny P}}\) acts as a Lagrange multiplier setting the intrinsic torsion of \(\Gamma^{\rho}{}_{\mu\nu}\) given by the extrinsic curvature \(K\) to zero. On-shell this leaves us with a Carrollian torsion-free connection satisfying the requirements of [58]. The spatial acceleration vector \(a^{\mu}\) is defined by \[a^{\mu}=e^{\mu}e^{\nu}a_{\nu}\qquad\qquad a_{\mu}=2v^{\nu}\partial_{[\nu}\tau_ {\mu]} \tag{73}\] as in [66]. As divergence terms \(\nabla_{\mu}X^{\mu}\) are independent of the ambiguous \(\gamma\)-dependent term in the Carroll compatible connection, we see that \(\gamma\) does not enter in this action. Therefore we could have set it to zero from the beginning, without loss of generality, like in [65]. The curvature term can be further massaged by using the identity \[e^{\mu}e^{\nu}R_{\mu\nu}\big{|}_{\gamma=0}=-e^{\mu}e^{\nu}\nabla_{\mu}a_{\nu} -(e^{\mu}a_{\mu})^{2}=-\nabla_{\mu}a^{\mu} \tag{74}\] which holds only in 2d. Using additionally the definition of the Carrollian curvature scalar \(R=2e^{\mu}e^{\nu}R_{\mu\nu}\big{|}_{\gamma=0}\) we find that the Lagrange-2-form (72) matches with (23). #### 2.2.3 Spherical reduction of magnetic Carroll gravity Let us look at the other corner of Fig. 1 and see if the two paths from \(D\)-dimensional Einstein gravity to 2d magnetic Carroll dilaton gravity commute. For this, we start with the magnetic limit of \(D\)-dimensional Einstein gravity, impose spherical symmetry, and reduce the corresponding action by the angular variables. We approach this in the second-order formulation, i.e., we work with the spin connection expressed in terms of the vielbein variables by using the torsion constraints. Nevertheless, we will not insert the expressions explicitly to keep the notation cleaner. The \(D\)-dimensional torsion components read \[T^{0}=\text{d}\tau+\omega^{a}\wedge e_{a}\qquad\qquad T^{a}=\text{d}e^{a}+ \omega^{ab}\wedge e_{b} \tag{75}\] where \(a=1,...,D-1\) and \(\omega^{ab}=-\omega^{ba}\). By spherical symmetry, we write the Carroll metric as \[h_{MN}\,\text{d}x^{M}\,\text{d}x^{N}=h_{\mu\nu}(x^{\sigma})\,\text{d}x^{\mu} \,\text{d}x^{\nu}+\Phi^{2}(x^{\sigma})\,\,\text{d}\Omega^{2}_{S^{(D-2)}}. 
\tag{76}\] It is convenient to phrase things in terms of a choice of vielbein, \[h=e\otimes e+\delta_{lm}e^{l}\otimes e^{m}=\bar{e}\otimes\bar{e}+\Phi^{2} \delta_{lm}\bar{e}^{l}\otimes\bar{e}^{m} \tag{77}\] where the internal indices take values \(l,m=2,...,D-1\) and in the second equality we defined a transverse vielbein \(\bar{e}^{l}\) normalized to the unit sphere \(S^{(D-2)}\). To keep the notation simple we set \(e^{1}\equiv e\), \(\omega^{1}\equiv\omega\). The relations between the vielbeins are \[\bar{\tau}_{M} =\tau_{M}\qquad\qquad\qquad\bar{e}_{M}=e_{M}\qquad\qquad\qquad \bar{e}_{M}^{l}=\Phi^{-1}e_{M}^{l} \tag{78}\] \[\bar{v}^{M} =v^{M}\qquad\qquad\qquad\bar{e}^{M}=e^{M}\qquad\qquad\qquad\bar{ e}_{l}^{M}=\Phi\,e_{l}^{M} \tag{79}\] where \(\tau_{M}v^{M}=-1\), \(e_{M}e^{M}=1\), and \(e^{l}_{M}e^{M}_{m}=\delta^{l}_{m}\) define the dual versions. We assume throughout that the coordinates are chosen in such a way that only \(\bar{e}^{l}_{M}\) depend on the internal coordinates. The barred vielbeins are assumed to satisfy the Carrollian torsion constraints, which as a reminder, are not obtained by setting all the torsion to zero but only those components that allow the elimination of the spin connection from the first-order action. We have \[\mathrm{d}\bar{\tau}+\bar{\omega}\wedge\bar{e}=0\qquad\qquad\mathrm{d}\bar{e}^ {l}+\bar{\omega}^{lm}\wedge\bar{e}_{m}=0 \tag{80}\] where the remaining torsion component \(\mathrm{d}\bar{e}\) crucially is not set to zero. The second of these equations corresponds to the spherical part which is thus fixed to be torsion-free. Let us assume \(\bar{\omega}=\omega\) and \(\bar{\omega}^{lm}=\omega^{lm}\). Furthermore, we impose vanishing torsion in the unbarred space by the same principle. In particular, this means that the following conditions are imposed \[T^{0}(e_{a},v)=T_{a}(e_{b},e_{c})=T^{0}(e_{a},e_{b})=T_{[a}(e_{b]},v)=0. \tag{81}\] This sets to zero all the higher-dimensional torsion except the components \(T_{(a}(e_{b}),v)\) which correspond to the intrinsic torsion (see, e.g., Appendix B of [65]). The conditions can be solved for \(\omega^{1l}=-\omega^{l1}\) and \(\omega^{l}\) by \[\omega^{l} =\varphi\,\bar{e}^{l} \omega^{l1}=(e^{\mu}\partial_{\mu}\Phi)\bar{e}^{l} \tag{82}\] where \(\varphi\) is an arbitrary function. This can be inserted to compute the decomposition of the scalar curvature \[R^{(D)}=-2v^{M}e_{a}^{N}\Omega^{a}_{MN}+e_{a}^{M}e_{b}^{N}\Omega^{ab}_{MN} \tag{83}\] where \[\Omega^{a}=\mathrm{d}\omega^{a}+\omega^{ab}\wedge\omega_{b} \Omega^{ab}=\mathrm{d}\omega^{ab}+\omega^{a}{}_{c}\wedge\omega^{cb}. \tag{84}\] Explicitly, the components are given by \[\Omega^{1} =\bar{\Omega}^{1}=\mathrm{d}\bar{\omega} \Omega^{1l}=-\,\mathrm{d}(e^{\mu}\partial_{\mu}\Phi)\wedge\bar{e}^{ l}=-\Omega^{l1} \tag{85}\] \[\Omega^{l} =\mathrm{d}\varphi\wedge\bar{e}^{l}+(e^{\mu}\partial_{\mu}\Phi) \bar{e}^{l}\wedge\bar{\omega} \Omega^{lm}=\bar{\Omega}^{lm}-(e^{\mu}\partial_{\mu}\Phi)\bar{e}^{ l}\wedge\bar{e}^{m} \tag{86}\] leading to the scalar curvature \[R^{(D)}=-4v^{\mu}e^{\nu}\partial_{[\mu}\bar{\omega}_{\nu]}+ \frac{2(D-2)}{\Phi}\Big{(}(e^{\mu}\partial_{\mu}\Phi)v^{\nu}\bar{\omega}_{\nu }-v^{\mu}\partial_{\mu}\varphi-e^{\mu}\partial_{\mu}(e^{\alpha}\partial_{ \alpha}\Phi)\Big{)}\\ -(D-2)(D-3)\frac{(e^{\mu}\partial_{\mu}\Phi)^{2}}{\Phi^{2}}+\frac {(D-2)(D-3)}{\Phi^{2}} \tag{87}\] where we used \(R_{S^{(D-2)}}=(D-2)(D-3)\). 
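As a small sanity check of the input \(R_{S^{(D-2)}}=(D-2)(D-3)\) used in (87), the following SymPy sketch (ours, not part of the paper) computes the Ricci scalar of the round unit 2-sphere, i.e. the case \(D=4\), directly from the metric \(\mathrm{diag}(1,\sin^{2}\theta)\):

```python
# Sketch: Ricci scalar of the unit 2-sphere, expected to equal (D-2)(D-3) = 2 for D = 4.
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.diag(1, sp.sin(th)**2)
ginv = g.inv()
n = 2

# Christoffel symbols of the second kind, Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))/2 for d in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_b Gamma^a_{ac} + Gamma^a_{ae} Gamma^e_{bc}
    #          - Gamma^a_{be} Gamma^e_{ac}
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][a][c], x[b])
        + sum(Gamma[a][a][e]*Gamma[e][b][c] - Gamma[a][b][e]*Gamma[e][a][c]
              for e in range(n))
        for a in range(n)))

R = sp.simplify(sum(ginv[b, c]*ricci(b, c) for b in range(n) for c in range(n)))
print(R)  # 2, matching (D-2)(D-3) for D = 4
```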
Plugging this result into the \(D\)-dimensional action (see [58]), using the field redefinition (34), and integrating over the angles yields \[I_{\text{\tiny Car}} =\frac{1}{16\pi G_{M}}\int\mathrm{d}^{D}x\,\mathrm{det}(\tau,e^{ a})\,R^{(D)} \tag{88}\] \[=\frac{k}{4\pi}\int\mathrm{d}^{2}x\,\mathrm{det}(\tau,e)\Big{(}4X e^{\mu}v^{\nu}\partial_{[\mu}\bar{\omega}_{\nu]}-2\lambda\varphi X^{\frac{3-D}{2-D}} K+2\lambda\varphi\frac{D-3}{D-2}X^{\frac{1}{2-D}}v^{\mu}\partial_{\mu}X\] \[\qquad\qquad\qquad\qquad+2v^{\mu}\bar{\omega}_{\mu}e^{\nu} \partial_{\nu}X-2e^{\mu}\partial_{\mu}(e^{\alpha}\partial_{\alpha}X)-U(e^{\mu }\partial_{\mu}X)^{2}+2V\Big{)} \tag{89}\] with \(k\), \(U\), and \(V\) defined in the same way as in Section 2.2.1 but \(G\) exchanged with the magnetic version \(G_{M}\). The curvature terms simplify by noticing that \[-2e^{\mu}\partial_{\mu}(e^{\alpha}\partial_{\alpha}X)=-2X\nabla_{\mu}a^{\mu} \qquad\qquad 2v^{\mu}\bar{\omega}_{\mu}e^{\nu}\partial_{\nu}X=2X\nabla_{\mu}a^{\mu} \tag{90}\] up to total derivatives. These two terms, therefore, cancel in the action. The quantity \(\bar{\omega}\) is the 2d spin connection evaluated on a solution of the torsion constraints, i.e., \[\bar{\omega}=\hat{\omega}+\rho e \tag{91}\] with an undetermined component given by the arbitrary function \(\rho(x^{\alpha})\). Together with the definition of the curvature scalar \(R=4e^{\mu}v^{\nu}\partial_{[\mu}\hat{\omega}_{\nu]}\) the action reads \[I_{\text{\tiny Car}}=\frac{k}{4\pi}\int\text{d}^{2}x\det(\tau,e)\Big{(}XR+2 \rho v^{\mu}\partial_{\mu}X+2X_{\text{\tiny P}}K-U(e^{\mu}\partial_{\mu}X)^{2} +2V\Big{)} \tag{92}\] where we discarded all boundary terms and redefined the Lagrange multipliers \(\rho\), \(X_{\text{\tiny P}}\) as specific combinations of the free functions \(\rho\) and \(\varphi\), \[\rho\to\rho-\varphi\lambda\frac{D-3}{D-2}X^{\frac{1}{2-D}}\qquad\qquad\varphi \to-\frac{X_{\text{\tiny P}}}{\lambda}X^{\frac{3-D}{D-2}}. \tag{93}\] This matches with the result (23) and shows that spherical reduction and taking the magnetic limit commute. A similar result was found already for the Galilean case [67]. #### 2.2.4 Magnetic Carroll dilaton gravity from a Hamiltonian perspective Let us introduce ADM variables [68] and perform the computation that coined the term "magnetic" for 2d dilaton gravity. For a comparison with the Einstein-gravity case, see, e.g., [58]. Introducing a foliation \(\Sigma_{t}\) with a time function \(t\) we choose adapted coordinates \((t,x^{i})\) and write the metric as \[\text{d}s^{2}=-c^{2}N^{2}\,\text{d}t^{2}+h_{ij}\big{(}\,\text{d}x^{i}+N^{i}\, \text{d}t\big{)}\big{(}\,\text{d}x^{j}+N^{j}\,\text{d}t\big{)} \tag{94}\] such that we get a future-directed timelike unit normal vector \[n=\frac{1}{Nc}\big{(}\partial_{t}-N^{i}\partial_{i}\big{)}\qquad\qquad\qquad \qquad\qquad n^{2}=-1. \tag{95}\] The label \(x^{i}\) here actually only refers to a single coordinate since we are in 2d. In these adapted coordinates, the contracted Gauss-Codazzi equation reads \[R=R^{(1)}+K^{2}+K^{ij}K_{ij}-\frac{2}{Nc}\big{(}\partial_{t}K-N^{i}\partial_ {i}K\big{)}-\frac{2}{N}D_{i}D^{i}N \tag{96}\] where \(D_{i}\) is the Levi-Civita derivative associated to \(h_{ij}\). We use \(K_{ij}=Kh_{ij}\) and \(R^{(1)}=0\) in 2d. Furthermore, the trace of extrinsic curvature can be written as \[K=-\frac{1}{2Nc}h^{ij}\big{(}\dot{h}_{ij}-2D_{i}N_{j}\big{)} \tag{97}\] where the dot denotes a partial derivative with respect to \(t\). 
This allows writing the relativistic second-order action (35) in terms of ADM variables, \[I_{\text{\tiny dil}}[X,h_{ij},N,N^{i}]=\int\text{d}t\,L \tag{98}\] with \[L=\frac{k_{\text{\tiny rel}}c^{3}}{4\pi}\int\text{d}x\sqrt{h} \Big{(}2K\big{(}\dot{X}-N^{i}\partial_{i}X\big{)}-2c\,Nh^{ij}D_{i}D_{j}X\] \[\qquad\qquad\qquad+\frac{U}{Nc}\big{(}\dot{X}-N^{i}\partial_{i}X \big{)}^{2}-Nc\,Uh^{ij}D_{i}XD_{j}X+2NcV\Big{)}. \tag{99}\] Defining momentum densities \[\pi_{X} =\frac{\delta L}{\delta\dot{X}}=\frac{k_{\text{rel}}c^{3}\sqrt{h}}{4 \pi}\Big{(}2K+2U\big{(}\dot{X}-N^{i}\partial_{i}X\big{)}\Big{)} \tag{100}\] \[\pi^{ij} =\frac{\delta L}{\delta\dot{h}_{ij}}=-\frac{k_{\text{rel}}c^{2} \sqrt{h}}{4\pi N}\,h^{ij}\big{(}\dot{X}-N^{i}\partial_{i}X\big{)} \tag{101}\] we perform a Legendre transformation and write the action in Hamiltonian form \[I_{\text{\tiny{dil}}}[X,\pi_{X},h_{ij},\pi^{ij}]=\int\mathrm{d}t\,\mathrm{d}x \Big{(}\pi_{X}\dot{X}+\pi^{ij}\dot{h}_{ij}-N\mathcal{H}-N^{i}\mathcal{H}_{i} \Big{)} \tag{102}\] where the momentum constraint is \[\mathcal{H}_{i}=\pi_{X}D_{i}X-2D_{j}\pi^{j}{}_{i} \tag{103}\] and the Hamiltonian constraint reads \[\mathcal{H} =\mathcal{H}^{M}+\mathcal{H}^{E} \tag{104}\] \[\mathcal{H}^{M} =\frac{k\sqrt{h}}{4\pi}\Big{(}Uh^{ij}D_{i}XD_{j}X-2V+2h^{ij}D_{i} D_{j}X\Big{)}\] (105) \[\mathcal{H}^{E} =-\frac{4\pi c^{2}}{k\sqrt{h}}\Big{(}\pi^{ij}h_{ij}\pi_{X}+U\pi^{ ij}\pi_{ij}\Big{)}. \tag{106}\] In the above expression, we already rescaled \(G\to G_{M}c^{4}\) and defined \(k:=k_{\text{\tiny{rel}}}c^{4}\) [cf. (37)] such that the leading order in a \(c\to 0\) expansion corresponds to the magnetic dilaton gravity action \[\boxed{I^{H}_{\text{\tiny{mag}}}[X,\pi_{X},h_{ij},\pi^{ij}]=\int\mathrm{d}t\, \mathrm{d}x\Big{(}\pi_{X}\dot{X}+\pi^{ij}\dot{h}_{ij}-N\mathcal{H}^{M}-N^{i} \mathcal{H}_{i}\Big{)}\.} \tag{107}\] As in the case of Einstein gravity, the momenta cannot be eliminated by their equations of motion anymore as they only appear linearly. Instead, they act as Lagrange multipliers for the (second-class) constraints \[\delta\pi_{X}:\qquad\dot{X}-N^{i}\partial_{i}X=0\qquad\quad \leftrightarrow\qquad n^{\mu}\partial_{\mu}X=0 \tag{108}\] \[\delta\pi^{ij}:\qquad\dot{h}_{ij}-2D_{(i}N_{j)}=0\qquad \leftrightarrow\qquad K_{ij}=0 \tag{109}\] which are the same constraints obtained in the first-order approach if we identify \(v^{\mu}\propto n^{\mu}\) and \(h_{ij}=e_{i}e_{j}\). Let us see if the actions themselves are also equivalent. The action (23) reads \[I_{\text{\tiny{2nd}}}=\frac{k}{2\pi}\int\mathrm{d}^{2}x\,\mathrm{d}\epsilon( \tau,e)\Big{(}2Xe^{\mu}v^{\nu}\partial_{[\mu}\hat{\omega}_{\nu]}+\rho v^{\mu} \partial_{\mu}X+X_{\text{\tiny{P}}}K-\frac{U}{2}(e^{\mu}\partial_{\mu}X)^{2}+ V\Big{)}. \tag{110}\] We have to identify these variables in terms of the ADM variables used before in adapted coordinates \((t,x)\). Let us first fix the boost frame such that \(\tau=N\,\mathrm{d}t\). The frame variables then read \[v^{\mu}=\Big{(}-\frac{1}{N},\frac{\mathcal{N}}{N}\Big{)}\qquad\tau_{\mu}= \Big{(}N,0\Big{)}\qquad e^{\mu}=\Big{(}0,\frac{1}{\mathfrak{c}}\Big{)}\qquad e _{\mu}=\Big{(}\mathcal{N}\mathfrak{c},\mathfrak{c}\Big{)}. 
\tag{111}\] This allows writing the torsionless part of the spin connection as \[\hat{\omega}_{t}=2e^{\mu}\partial_{[t}\tau_{\mu]}=-\frac{\partial_{x}N}{ \mathfrak{c}}\qquad\qquad\hat{\omega}_{x}=0 \tag{112}\] while we also have the relations \[K=2e^{\mu}v^{\nu}\partial_{[\mu}e_{\nu]}=\frac{1}{2N\mathfrak{e}}\Big{(}\partial_ {t}\mathfrak{e}^{2}-2D_{x}(\mathcal{N}\mathfrak{e}^{2})\Big{)}\qquad\qquad \det(\tau,e)=N\mathfrak{e}. \tag{113}\] Note that \(\mathcal{N}\) are the components of a spatial vector and \(\mathfrak{e}^{2}\mathcal{N}\) the components of a spatial covector such that the spatial covariant derivative in the first expression reads explicitly \[D_{x}(\mathcal{N}\mathfrak{e}^{2})=\partial_{x}(\mathcal{N}\mathfrak{e}^{2})- \gamma(\mathcal{N}\mathfrak{e}^{2})=\mathfrak{e}\partial_{x}(\mathcal{N} \mathfrak{e}) \tag{114}\] where the 1d Christoffel symbols are \(\gamma=\partial_{x}\ln\mathfrak{e}\). This allows writing the first term in the action as \[2Xe^{\mu}v^{\nu}\partial_{[\mu}\hat{\omega}_{\nu]}=-\frac{X}{N \mathfrak{e}^{2}}D_{x}^{2}N. \tag{115}\] In total, we arrive at \[I_{\rm 2nd}=\frac{k}{2\pi}\int\mathrm{d}t\,\mathrm{d}x\Big{(} -\frac{X}{\mathfrak{e}}D_{x}^{2}N-\mathfrak{e}\rho(\partial_{t} X-\mathcal{N}D_{x}X) \tag{116}\] \[+\frac{X_{\rm P}}{2\mathfrak{e}}\big{(}\partial_{t}\mathfrak{e}^ {2}-2D_{x}(\mathcal{N}\mathfrak{e}^{2})\big{)}-\frac{N}{2\mathfrak{e}}U(D_{x} X)^{2}+VN\mathfrak{e}\Big{)} \tag{117}\] which, by the change of variables \[-\frac{k}{2\pi}\mathfrak{e}\rho=\pi_{X} \frac{k}{4\pi}\frac{X_{\rm P}}{\mathfrak{e}}=\pi_{h} \tag{118}\] can be brought into the form \[I_{\rm 2nd}[X,\pi_{X},\mathfrak{e},\pi_{h},N,\mathcal{N}]=\int\mathrm{d}t\,L_ {\rm 2nd} \tag{119}\] with \[L_{\rm 2nd}=\int\mathrm{d}x\Big{(}\pi_{X}\dot{X}+\pi_{h}\partial_ {t}\mathfrak{e}^{2} -\mathcal{N}\big{(}\pi_{X}D_{x}X-2\mathfrak{e}^{2}D_{x}\pi_{h} \big{)} \tag{120}\] \[-N\mathfrak{e}\frac{k}{2\pi}\Big{(}\frac{1}{\mathfrak{e}^{2}}D_ {x}^{2}X+\frac{U}{2\mathfrak{e}^{2}}(D_{x}X)^{2}-V\Big{)}\Big{)}. \tag{121}\] Changing back to spatial component notation \[\mathfrak{e}\to e_{i}\qquad\qquad\mathfrak{e}^{-1}\to e^{i} \qquad\qquad\pi_{h}\to\pi^{ij}\qquad\qquad D_{x}\to D_{i}\qquad\quad \mathcal{N}\to N^{i} \tag{122}\] yields \[L_{\rm 2nd}=\int\mathrm{d}x\Big{(}\pi_{X}\dot{X}+\pi^{ij} \partial_{t}h_{ij} -N^{i}\big{(}\pi_{X}D_{i}X-2D_{j}\pi^{j}{}_{i}\big{)} \tag{123}\] \[-N\mathfrak{e}\frac{k}{4\pi}\Big{(}2h^{ij}D_{i}D_{j}X+Uh^{ij}D_{ i}XD_{j}X-2V\Big{)}\Big{)}. \tag{124}\] We thus find that the second-order formulation and the Hamiltonian formulation agree. \[I_{\rm 2nd}=I_{\rm mag}^{H} \tag{125}\] ### Solutions of Carroll dilaton gravity In order to derive all classical solutions, the first-order/PSM formulation is advantageous. In the end, we translate our solutions to the second-order formulation. #### 2.3.1 Constant dilaton vacua Constant dilaton vacua are defined to be solutions where \(X_{\text{\tiny H}}\) vanishes everywhere. The equation that normally determines the Carroll metric (8d) leads to constant dilaton (hence the name). The Carroll Casimir equation (8e) shows that this constant cannot be anything but has to solve the non-differential equation \[\mathcal{V}(X,\,0)=0\,. \tag{126}\] In particular, this means constant dilaton vacua need infinite finetuning of the dilaton field and may not even exist for some models (the simplest example is the Carroll CGHS model where \(\mathcal{V}=\Lambda\neq 0\)). 
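To make the last remark concrete, here is a minimal SymPy sketch (ours; the linear potential below is chosen purely for illustration) of how the condition (126) may or may not admit solutions:

```python
# Sketch: constant dilaton vacua solve V(X, 0) = 0, cf. (126). For a linear potential
# V(X) = Lambda*X there is a single such vacuum at X = 0; for a constant potential
# V(X) = Lambda != 0 (the Carroll CGHS case mentioned above) there is none.
import sympy as sp

X, Lam = sp.symbols('X Lambda')

print(sp.solve(sp.Eq(Lam*X, 0), X))   # [0]  -> one constant dilaton vacuum
print(sp.solve(sp.Eq(Lam, 0), X))     # []   -> no constant dilaton vacuum
```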
The Carroll curvature of all constant dilaton vacua is a constant times the volume-form, \[\Omega=-\partial_{X}\mathcal{V}(X,\,0)\,\tau\wedge e\,. \tag{127}\] Similar remarks apply to torsion. Finally, the auxiliary field equation (8f) implies that also \(X_{\text{\tiny P}}\) is some (arbitrary) constant. Since constant dilaton vacua are non-generic, require infinite finetuning, and are not very rich in structure, we move on to the generic sector, the linear dilaton vacua. #### 2.3.2 Linear dilaton vacua Linear dilaton vacua are solutions where \(X_{\text{\tiny H}}\) does not vanish everywhere. Thus, let us start by assuming a patch where \(X_{\text{\tiny H}}\neq 0\).4 This allows to solve the Carroll metric equation (8d) as Footnote 4: Below, many statements implicitly come with the qualifier “assuming \(X_{\text{\tiny H}}\neq 0\)”. \[e=-\frac{\text{d}X}{X_{\text{\tiny H}}}\,. \tag{128}\] Inserting this result into the Carrollian Casimir equation (8e) yields \[\frac{1}{2}\ \text{d}(X_{\text{\tiny H}}^{2})-\mathcal{V}(X,\,X_{\text{\tiny H}}) \ \text{d}X=0 \tag{129}\] which allows expressing \(X_{\text{\tiny H}}\) as function of the dilaton \(X\) and of an integration constant \(M\). We refer to \(M\) as Carrollian mass or Carrollian Casimir. The latter nomenclature was chosen since in the PSM formulation, the function \(M(X,\,X_{\text{\tiny H}})\) spans precisely the kernel of the degenerate Poisson tensor (27). I.e., if we went to Casimir-Darboux coordinates the expression \(M(X,\,X_{\text{\tiny H}})\) would correspond to the Casimir direction in the Poisson manifold. To simplify the discussion, we assume, for now, \(\mathcal{V}=V(X)\) and return to more general cases in the end. It is useful to define the integrated potential \[w(X):=\int^{X}V(y)\,\text{d}y \tag{130}\] in terms of which the conserved Carrollian mass is given by \[M=w(X)-\frac{1}{2}\,X_{\text{\tiny H}}^{2}\qquad\qquad\text{d}M=0\,. \tag{131}\] The equation that establishes no intrinsic torsion, (8c), is solved trivially, \[\text{d}e=0\qquad\Longrightarrow\qquad e=\text{d}r\,. \tag{132}\] We use \(r\) as our Carrollian radial coordinate without loss of generality.5 Expressing \(X_{\text{\tiny{H}}}\) as a function of \(X\) using the Carrollian mass (131) and inserting it into (128) yields a simple differential equation for the dilaton Footnote 5: This is true only if we disregard edge modes, which we do for the time being. Once a boundary is considered with specific boundary conditions imposed on the fields, there could be a loss of generality in assuming \(e=\mathrm{d}r\). \[\boxed{\frac{\mathrm{d}X}{\mp\sqrt{2(w(X)-M)}}=\mathrm{d}r} \tag{133}\] where the signs refer to the two branches of the square-root function. To solve the remaining equations, it is convenient to fix the Carroll boosts such that \(X_{\text{\tiny{P}}}=0\), which is always possible locally. The auxiliary equation (8f) simplifies to a constraint \[\omega=-\frac{V}{X_{\text{\tiny{H}}}}\,\tau \tag{134}\] that renders the remaining two equations, for Carroll curvature (8a) and torsion (8b), identical to each other. Thus, there is only one more equation we need to solve, e.g., the Carrollian torsion equation (8b). By virtue of the constraint (134) it simplifies to \[\mathrm{d}\tau+\left(\partial_{X}\ln X_{\text{\tiny{H}}}\right)\tau\wedge \mathrm{d}X=0 \tag{135}\] solved by \[\tau=-X_{\text{\tiny{H}}}\ \mathrm{d}t\,. 
\tag{136}\] Here, we used the scaling ambiguity \(t\to\alpha\tilde{t}\) with \(\alpha\in\mathbb{R}^{+}\) to choose some time coordinate \(t\) and fixed the residual Carroll boost invariance by assuming \(\tau\) has no \(\mathrm{d}r\)-component. The result (136) implies \[\omega=V(X)\ \mathrm{d}t \tag{137}\] for the Carroll boost connection. As a consequence, the Carrollian curvature of our solutions is given by \[\boxed{\Omega=-V^{\prime}(X)\,\tau\wedge e=-V^{\prime}(X)\ \mathrm{d}t\wedge \mathrm{d}X\,.} \tag{138}\] In summary, in the chosen gauge, the solution reads \[X =\text{given by integrating (\ref{eq:X})} \omega =V(X)\ \mathrm{d}t \tag{139a}\] \[X_{\text{\tiny{H}}} =\pm\sqrt{2(w(X)-M)} \tau =-X_{\text{\tiny{H}}}\ \mathrm{d}t\] (139b) \[X_{\text{\tiny{P}}} =0 \text{e} =\mathrm{d}r\,. \tag{139c}\] Translating our solution to the second-order formulation as described in Section 2.1.3 yields the metric \[\boxed{\mathrm{d}s^{2}=h_{\mu\nu}\ \mathrm{d}x^{\mu}\,\mathrm{d}x^{\nu}=e_{\mu}e_{ \nu}\ \mathrm{d}x^{\mu}\,\mathrm{d}x^{\nu}=\mathrm{d}r^{2}} \tag{140}\] and the timelike vector field \[\boxed{v=v^{\mu}\partial_{\mu}=\frac{1}{X_{\text{\tiny{H}}}}\,\partial_{t}= \pm\frac{1}{\sqrt{2(w(X)-M)}}\,\partial_{t}\,.} \tag{141}\] The dilaton field \(X\) is still given by integrating (133). Finally, we come back to more general cases when the potential \(\mathcal{V}\) does not only depend on the dilaton \(X\) but additionally depends on the boost invariant scalar \(X_{\text{\tiny H}}\). We discuss here the family of models (16) and refer by analogy to [64] for further generalizations. We continue to fix the radial coordinate by \(e=\text{d}r\), so the Carroll metric is still given by (140). Exploiting the Weyl rescalings discussed after (9), we find that we need a modified definition of the function \(w\), namely \[w(X):=\int^{X}e^{Q(y)}V(y)\,\text{d}y\qquad\qquad e^{Q(X)}:=e^{\int^{X}U(y)\, \text{d}y}\,. \tag{142}\] The additive integration constant implicit in the function \(w(X)\) can be absorbed by shifting the mass \(M\). The multiplicative integration constant implicit in \(e^{Q(X)}\) can be chosen to give this expression the desired physical dimensions, discussed in Subsection 3.4 below. Relatedly, the Carrollian mass is changed slightly \[M=w(X)-\frac{1}{2}\,X_{\text{\tiny H}}^{2}\,e^{Q(X)} \tag{143}\] which changes also the boost invariant scalar \[X_{\text{\tiny H}}=\pm\sqrt{2e^{-Q(X)}(w(X)-M)}\,. \tag{144}\] The dilaton is obtained by integrating \[\frac{\text{d}X}{\mp\sqrt{2e^{-Q(X)}(w(X)-M)}}=\text{d}r\,. \tag{145}\] Fixing, again, Carroll boosts suitably recovers \(X_{\text{\tiny P}}=0\) and \[\tau=-e^{Q(X)}\,X_{\text{\tiny H}}\ \text{d}t\,. \tag{146}\] The analogue of the constraint (134) implies \[\omega=e^{Q(X)}\,\mathcal{V}(X,\,X_{\text{\tiny H}})\ \text{d}t\,. \tag{147}\] Finally, the timelike vector field is given by \[v=v^{\mu}\partial_{\mu}=\pm\frac{1}{\sqrt{2e^{Q(X)}(w(X)-M)}}\,\partial_{t}\,. \tag{148}\] Carroll curvature evaluates as \[\Omega=-\big{(}V^{\prime}(X)-\tfrac{1}{2}X_{\text{\tiny H}}^{2}U^{\prime}(X) \big{)}\,\tau\wedge e=e^{Q(X)}\big{(}\tfrac{1}{2}X_{\text{\tiny H}}^{2}U^{ \prime}(X)-V^{\prime}(X)\big{)}\ \text{d}t\wedge\text{d}X\,. \tag{149}\] #### 2.3.3 Carrollian Birkhoff theorem In Lorentzian dilaton gravity, there is a generalized Birkhoff theorem, in the sense that all solutions to all models have at least one Killing vector, see e.g. [55] and references therein. 
In the Carrollian case, we see similar features: in the constant dilaton sector, all solutions have constant curvature and constant scalar fields, so there is a sense in which these configurations are maximally symmetric. However, let us focus on the less trivial linear dilaton vacua. A minimal requirement to define a Carrollian Killing vector is that all boost-invariant fields are invariant under Lie transport along it. This establishes the conditions \[\mathcal{L}_{\xi}h_{\mu\nu}=\mathcal{L}_{\xi}v^{\mu}=\mathcal{L}_{\xi}X=0\,. \tag{150}\] The last condition implies \(\xi^{r}=0\), the second condition yields \(\partial_{t}\xi^{t}=0\), and the first condition imposes no further restriction. Thus, any vector field \[\xi^{\mu}\,\partial_{\mu}=f(r)\,\partial_{t} \tag{151}\] is a Carrollian Killing vector (including \(\xi^{\mu}=v^{\mu}\)), for all solutions of all 2d Carroll dilaton gravity models. We refer to this statement as "Carrollian Birkhoff theorem". Since all Carrollian Killing vectors (151) mutually commute, one can refer to them as radial supertranslations by analogy to BMS jargon. #### 2.3.4 Singularities of Carrollian manifolds It is not the purpose of our work to provide a comprehensive discussion of Carrollian singularities. Nevertheless, we need to confront three types of singularities since they naturally and rather generically occur in 2d Carroll dilaton gravity. 1. Carroll coordinate singularities 2. Carroll curvature singularities 3. Carrollian structure singularities The first type of coordinate singularity can arise much in the same way as in general relativity. The prototypical example is Schwarzschild-gauge, which in our context is obtained by using the radial coordinate \[\mathrm{d}\rho=e^{Q(X)}\ \mathrm{d}X \tag{152}\] in terms of which the Carroll metric reads \[h_{\mu\nu}\ \mathrm{d}x^{\mu}\,\mathrm{d}x^{\nu}=\frac{\mathrm{d}\rho^{2}}{2e^{Q(X )}(w(X)-M)}\,. \tag{153}\] There is a (coordinate-)singularity at zeros of \(w(X)-M\). We shall use this singular coordinate system when comparing with Schwarzschild-Tangherlini solutions in Section 5. Concerning curvature singularities, we merely note that they can (and quite often do) arise, typically in the strong-coupling region where the dilaton tends to zero. Perhaps there is nothing more to this type of singularity than the observation that curvature can become infinite if gravity is infinitely strong. The third type is a singularity reminiscent of what happens at the bifurcation sphere of the Schwarzschild black hole (see, e.g., [46]), in the following sense. The attentive reader will have realized already that there is a remaining issue in our classification of solutions: first, we assumed \(X_{\textsc{h}}=0\) everywhere (constant dilaton vacua) and then we assumed \(X_{\textsc{h}}\neq 0\) everywhere (linear dilaton vacua) but what if \(X_{\textsc{h}}\) vanishes or diverges at isolated loci? That this can happen is evident from the explicit solution (139) for linear dilaton vacua: The equation \(X_{\textsc{h}}=0\) can have solutions for certain values of the radial coordinate \(r\), depending on the choice of the potential and the value of the mass \(M\). Thus, we have potentially singular points in the interior or the boundary of linear dilaton vacua. The difference to the Lorentzian case is that there the singularity analogous to the one at \(X_{\textsc{h}}=0\) is merely a coordinate singularity of Eddington-Finkelstein patches and can be removed by going into, say, Kruskal coordinates, see e.g. [69]. 
By contrast, in the Carrollian case, there is no coordinate system where both the metric and the vector field are non-zero and finite at \(X_{\textsc{h}}=0\). The physical interpretation of these potential singularities is simple: the timelike vector field either blows up (\(X_{\textsc{h}}\to 0\)) or collapses to zero (\(|X_{\textsc{h}}|\to\infty\)). We call this a singularity in the Carrollian structure. Note that curvature (138) may remain finite at such loci. In Section 4, where we define Carroll extremal surfaces, we employ these intriguing loci where \(X_{\textsc{H}}\) vanishes, independently from the behaviour of curvature. \[X_{\textsc{H}}=0\qquad\leftrightarrow\qquad e^{-Q(X)}(w(X)-M)=0 \tag{154}\] We move on to the global properties of Carroll thermal manifolds next, where such loci also play a decisive role. #### 2.3.5 Global aspects of Carroll thermal solutions Before we provide further motivation and details, let us define a Carroll thermal (C-thermal) manifold: **C-thermal manifolds** (in 2d) are smooth manifolds \(\mathcal{M}\) carrying a Carrollian structure up to isolated points, with a boundary \(\partial\mathcal{M}\) diffeomorphic to \(S^{1}\). Thus, we are relaxing the original definition of a Carrollian manifold [4], which disallowed these isolated singularities in the Carrollian structure. As we do not investigate any form of matter coupled to this theory, the word "thermal" has to be understood in a broader sense than being able to read off an actual temperature from some detector. We shall see in Section 3 that it is still natural to use this terminology because of a corresponding term appearing in the first law. To amalgamate these conflicting indicators in favour and against a Carroll version of temperature, we refer to these geometries as C-thermal. In analogy to the Euclidean case, we compactify the orbits of a Carrollian Killing vector field \(\xi\) associated with time translations. Taking \(\xi=\partial_{t}\) we identify6 points \(p\in\mathcal{M}\) along the action Footnote 6: As in the Lorentzian case, we analytically continue to imaginary time and change to complexified frame variables and spin connection, see Section 6 for an explicit example. Unlike the Lorentzian case, this has no consequence on the signature, which remains \((0,+)\). To reduce clutter we refrain from introducing Wick rotated variables here. \[p\sim e^{\beta\xi}\cdot p\qquad\qquad\beta\in\mathbb{R}^{+} \tag{155}\] where \(e^{\beta\xi}\cdot\) means flowing the point \(p\) along the integral curve of \(\xi\) by a parameter difference \(\beta\). Having in mind a Carrollian analogue of Euclidean dilaton gravity, we introduce a (cut-off) boundary at some large positive value of the dilaton, \(X(r_{c})=X_{c}\). The two topologies of interest to us are cylinder and disk. C-thermal manifolds, by definition, require the latter. The natural topology for a 2d Carrollian manifold with compactified time direction is a cylinder, i.e., a direct product manifold of the spatial line and the temporal cycle. Assuming the temporal cycle to be finite and non-zero globally, only cylinders are possible. Thus, if we insisted on the absence of Carrollian structure singularities C-thermal manifolds, which have disk topology, would be impossible. A smooth disk is obtained by demanding the temporal cycle to shrink to a point at the locus \(X_{\textsc{H}}\to 0\) such that the manifold is smooth there. 
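To see where such a locus sits in a concrete case, take the integrated potential \(w(X)=X^{2}/(2\ell^{2})\) of the Carroll JT model treated in Section 5.1: the condition (154) becomes \(X^{2}=2M\ell^{2}\), so the temporal cycle can shrink to a point only at \(X=\ell\sqrt{2M}\), and a disk at positive dilaton is available only for \(M>0\).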
Before tackling this issue, we address general aspects of Carroll manifolds with a Carrollian structure singularity induced by \(X_{\textsc{H}}\to 0\) in the centre of the disk. The frame we define on such a manifold is given by \[v=\frac{1}{e^{Q}X_{\textsc{H}}}\,\partial_{t}\qquad\qquad e=\partial_{r} \tag{156}\] where the coordinate \(t\) is compactified, \(t\sim t+\beta\). Similarly to a frame adapted to polar coordinates on a Euclidean disk, it becomes singular at the origin. This is not only because of the divergence in \(v\) but also because the very notion of tangent and radial directions cannot be defined at the origin. The way to still obtain a global orthonormal frame field in the Euclidean case is to perform an \(SO(2)\)-rotation to a Cartesian frame that can be extended to the origin. In the general case, this could be done locally such that, e.g., asymptotically one still has a polar frame while one switches to a Cartesian one in a neighbourhood of the centre with corresponding transition functions on the overlap. In the Carrollian case, however, this is not possible: The transformations acting on the frame belong to the homogeneous Carroll group \(\mathrm{Carr}(2,\mathbb{R})=ISO(1)\), which leaves \(v\) invariant. So, starting with (156) asymptotically, one necessarily arrives at a singular description of the origin. This is just another way of stating the presence of a Carrollian structure singularity at this locus. One can quantify this singularity by picking a loop around the origin \(\gamma:[0,1]\to\mathcal{M}\) parametrized by \(\sigma\mapsto x^{\mu}(\sigma)\) and computing its associated holonomy \(H_{\gamma}(\omega)\) for the connection (147). The parallel transport equation \[\frac{\mathrm{d}}{\mathrm{d}\sigma}V^{a}+\frac{\mathrm{d}x^{\mu}}{\mathrm{d} \sigma}\omega^{a}{}_{\mu b}V^{b}=0 \tag{157}\] is solved by \[V^{a}\big{(}\gamma(\sigma=1)\big{)}=\exp\Big{[}-\int_{0}^{1} \mathrm{d}\sigma\,\dot{x}^{\mu}\omega_{\mu}\Big{]}^{a}{}_{b}\,V^{b}\big{(} \gamma(\sigma=0)\big{)}=\big{(}H_{\gamma}\big{)}^{a}{}_{b}V^{b}\big{(}\gamma( \sigma=0)\big{)} \tag{158}\] where \(\dot{x}^{\mu}\) denotes \(\frac{\mathrm{d}}{\mathrm{d}\sigma}x^{\mu}\) and \(\dot{x}^{\mu}\omega_{\mu}\) is a matrix with components \(\dot{x}^{\mu}\omega^{a}{}_{\mu b}\). Using the solution (147) and choosing the loop \(x^{\mu}(\sigma)=\big{(}\sigma\beta,r_{0}\big{)}\) with \(r_{0}=\mathrm{const.}\) we obtain \[H_{\gamma}(\omega)=\begin{pmatrix}1&-\beta e^{Q}\mathcal{V}(X,X_{ \mathrm{H}})\big{|}_{r=r_{0}}\\ 0&1\end{pmatrix}. \tag{159}\] Then, contracting \(\gamma\) to a point at the origin yields \[\lim_{r_{0}\to 0}H_{\gamma}(\omega)=\begin{pmatrix}1&-\beta w^{ \prime}(X)\\ 0&1\end{pmatrix}\Big{|}_{r=0} \tag{160}\] where, without loss of generality, the integration constant in (145) was chosen such that \(X_{\mathrm{H}}(r=0)=0\). We stress that while the Carroll spin connection is ambiguous and only defined up to the addition of a term \(\rho\,e\) (see Section 2.1.3), this ambiguity does not enter here because the loop is chosen such that \(\dot{x}^{\mu}e_{\mu}=0\). Let us return to the smoothness condition at the origin of the disk. In a Euclidean theory of gravity, one requires that the closer \(\gamma\) approaches the origin, the more the holonomy approaches the one of a flat disk. 
The geometry of the latter is given by \[v^{\mathrm{E,Disk}}=\frac{1}{r}\,\partial_{\varphi}\qquad\quad e^{\mathrm{E,Disk}}=\partial_{r}\qquad\quad\big{(}\omega^{\mathrm{E,Disk}}\big{)}^{a}{}_{b}=\begin{pmatrix}0&\mathrm{d}\varphi\\ -\,\mathrm{d}\varphi&0\end{pmatrix} \tag{161}\] with the torsion-free spin connection \(\omega^{\mathrm{E,Disk}}\) and \(\varphi\sim\varphi+2\pi\). Explicitly, the condition reads \[\lim_{r_{0}\to 0}H_{\gamma}(\omega^{\mathrm{E}})\overset{!}{=}H_{\gamma}(\omega^{\mathrm{E,Disk}})=\exp\begin{pmatrix}0&-2\pi\\ 2\pi&0\end{pmatrix} \tag{162}\] where \(\omega^{E}\) is some curved connection. This condition fixes the Hawking temperature. We use the same definition for the Carrollian case: A flat Carrollian disk is given by \[v^{\rm C,Disk}=\frac{1}{r}\,\partial_{\varphi}\qquad\quad e^{\rm C,Disk}=\partial_{r}\qquad\quad\left(\omega^{\rm C,Disk}\right)^{a}_{\ b}=\begin{pmatrix}0&{\rm d}\varphi\\ 0&0\end{pmatrix}\, \tag{163}\] where the ambiguity in the spin connection has again been neglected because it does not contribute to the holonomy integral. The condition we arrive at is \[\boxed{\lim_{r_{0}\to 0}H_{\gamma}(\omega)\stackrel{{!}}{{=}}H_{\gamma}(\omega^{\rm C,Disk})=\exp\begin{pmatrix}0&-2\pi\\ 0&0\end{pmatrix}} \tag{164}\] Therefore, in Carrollian theories of gravity, the holonomy is never equal to the identity for contractible loops around the origin, which is precisely because of the Carrollian structure singularity. As another equivalent way to ensure a smooth disk, we use the Gauss-Bonnet formula rewritten in first-order variables, \[2\pi\chi=\int_{\mathcal{M}}{\rm d}\omega-\int_{\partial\mathcal{M}}\omega \tag{165}\] where implicitly, we assume the bulk term \({\rm d}\omega\) does not yield \(\delta\)-like contributions (corresponding to deficit angles).7 This assumption, in general, is incorrect unless the periodicity \(\beta\) takes certain values. Footnote 7: In general, there is an additional subtlety. Namely, depending on the frame, one may need to subtract an auxiliary connection \(\omega_{0}\) to ensure gauge invariance. The boundary integral is equivalent to an integral of the second fundamental form. However, in our chosen frame, this turns out to be unnecessary. In other words, while often a formula like (165) is used to compute \(\chi\) for given geometrical data on a manifold, we reverse the logic: Taking \(\chi=1\) and demanding \[\boxed{\quad 2\pi\stackrel{{!}}{{=}}\int_{\mathcal{M}}{\rm d}\omega-\int_{\partial\mathcal{M}}\omega} \tag{166}\] ensures that no conical defects appear while \(\mathcal{M}\) is topologically a disk provided the periodicity \(\beta\) is chosen appropriately, which we shall do in the next Section. Here, the boundary integral is understood along the surface \(X=X_{c}\) with outward-pointing unit normal form \(n=-X_{\rm H}^{-1}\,{\rm d}X\) and with a volume form \({\rm vol}_{\partial\mathcal{M}}\) induced by \(\tau\wedge e=n\wedge{\rm vol}_{\partial\mathcal{M}}\) such that Stokes' theorem holds. In particular, this implies \(\int_{\partial\mathcal{M}}\omega=-\int_{0}^{\beta}e^{Q}\mathcal{V}\,{\rm d}t\). While the spin connection is undefined at the origin of the disk, the integrand of (166) is defined up to this isolated point, and we can continuously extend \({\rm d}\omega=2e^{\mu}v^{\nu}\partial_{[\mu}\omega_{\nu]}\tau\wedge e\) to the origin.
On-shell the limit \[\lim_{r\to 0}2e^{\mu}v^{\nu}\partial_{[\mu}\omega_{\nu]}\big{|}_{\rm EOM}=-w^{ \prime\prime}(X)\big{|}_{r=0} \tag{167}\] exists whenever \(w^{\prime\prime}(X)|_{r=0}\) is finite. Thus, we can continue the integrand to the origin in such cases. As the resulting contribution to the integral has measure zero, we find that the formula (166) is not even sensitive to the Carrollian structure singularity and therefore provides another suitable device to probe disk topology. We shall see in the next Section how (166) can be used to fix the Carrollian temperature in terms of the dilaton potential. ## 3 Carroll thermal properties In this Section, we discuss the C-thermal properties of the linear dilaton solutions derived in Section 2.3.2. In Subsection 3.1, we derive the energy from the usual boundary charges. In Subsection 3.2, we define two different notions of temperature and show that they coincide with each other. In Subsection 3.3, we address entropy and the first law of Carrollian thermodynamics. In Subsection 3.4, we perform a dimensional analysis to ensure dimensionless entropy. Finally, in Subsection 3.5, we calculate the specific heat. ### Energy The canonical codimension-2 charge variations for a generic PSM (25) are (see e.g. Eq. (6.1) in [70]) \[\not{\delta}\mathcal{Q}_{\lambda}=\frac{k}{2\pi}\,\lambda_{I}\,\delta X^{I} \Big{|}_{\partial\mathcal{M}} \tag{168}\] with boundary condition-preserving gauge parameters8\(\lambda_{I}\). We assume that the boundary conditions imposed on \(X\) and \(X_{\textsc{H}}\) are such that they allow arbitrary variations of the mass parameter \(M\). Since we do not want to be too specific about these boundary conditions at this stage, we just impose that diffeomorphisms generated by the Killing vector \(\xi=\partial_{t}\) are part of the asymptotic symmetries that preserve the boundary conditions. The associated gauge parameters are given by \(\lambda_{X}=\omega_{t}=e^{Q}\mathcal{V}\), \(\lambda_{H}=\tau_{t}=-e^{Q}X_{\textsc{H}}\), \(\lambda_{P}=e_{t}=0\). Footnote 8: These parameters, in general, depend on the fields, which can make this expression non-integrable in field space. To account for this possibility, we denote the charge variation by \(\not{\delta}\). The charge variation (168) associated with unit time translations is given by \[\delta\mathcal{Q}_{\partial_{t}}=\frac{k}{2\pi}\,\big{(}e^{Q}\mathcal{V}(X,X_ {\textsc{H}})\,\delta X-e^{Q}X_{\textsc{H}}\,\delta X_{\textsc{H}}\big{)} \big{|}_{\partial\mathcal{M}}=\frac{k}{2\pi}\,\delta M \tag{169}\] where in the last equality, we used the (variation of the) Casimir relation (131). Since \(M\) is totally conserved, \(\mathrm{d}M=0\), it does not matter where this quantity is evaluated, which is why we dropped the indicator \(|_{\partial\mathcal{M}}\). The charge (169) is integrable in field space and gives a simple expression for the energy, \(E=\mathcal{Q}_{\partial_{t}}\), in terms of the mass parameter: \[\boxed{\quad E=\frac{k}{2\pi}\,M} \tag{170}\] ### Temperature Following the discussion in Section 2.3.5 we impose equation (166) to ensure having a Carrollian disk without any defects. 
Inserting the solutions of Section 2.3.2 and choosing an orientation such that \(\tau\wedge e=:e^{Q}\,\mathrm{d}t\,\mathrm{d}X\) we find \[\int_{\mathcal{M}}\,\mathrm{d}\omega =-\int_{\mathcal{M}}\partial_{X}\mathcal{V}\,\tau\wedge e=\int_{0 }^{\beta}\int_{X_{\textsc{min}}}^{X_{c}}\,\mathrm{d}t\,\mathrm{d}X\partial_{X }\Big{(}U(w-M)-\partial_{X}w\Big{)} \tag{171}\] \[=\beta\Big{(}U(w-M)-\partial_{X}w\Big{)}\Big{|}_{X_{\textsc{min} }}^{X_{c}}\] (172) \[\int_{\partial\mathcal{M}}\omega =-\int_{0}^{\beta}e^{Q}\mathcal{V}\,\mathrm{d}t\Big{|}^{X_{c}}=- \beta\Big{(}\partial_{X}w-U(w-M)\Big{)}\Big{|}^{X_{c}} \tag{173}\] such that (166) reads \[2\pi\stackrel{{!}}{{=}}\beta\,\partial_{X}w\big{|}_{X_{\rm min}}. \tag{174}\] Here \(X_{\rm min}\) is the value of the dilaton at the locus \(X_{\rm H}=0\), taking the positive branch in (133). Interpreting \(\beta=T^{-1}\) as inverse Carrollian temperature establishes \[\boxed{\phantom{-}T=\frac{w^{\prime}(X_{\rm min})}{2\pi}\,.} \tag{175}\] The result for the Carrollian temperature (175) is equivalent to the corresponding Lorentzian result for the Hawking temperature of 2d dilaton gravity with the same potentials (16). In addition to this topological derivation of Carrollian temperature, there is also a definition in terms of Carrollian surface gravity. \[\nabla_{\mu}\big{(}e^{Q}e^{\nu}\partial_{\nu}X\big{)}\big{|}_{X_{\rm H}=0}=: \kappa\,e_{\mu}\big{|}_{X_{\rm H}=0} \tag{176}\] The quantity in parentheses is proportional to \(X_{\rm H}\) on-shell and thus vanishes at \(X_{\rm H}=0\). In this sense, the definition of \(\kappa\) in (176) is analogous to the definition of surface gravity in a Lorentzian theory. Taking the solutions (139) yields \(\kappa=w^{\prime}(X_{\rm min})\). Therefore, we recover the anticipated relation \[T=\frac{\kappa}{2\pi} \tag{177}\] between Carrollian temperature \(T\) and Carrollian surface gravity \(\kappa\). ### Entropy and first law As the last missing piece for the first law, let us inspect the definition of the entropy along the lines of Wald [71]. Working in the covariant phase space formalism of first-order Carroll dilaton gravity (see, e.g., [72]) the symplectic form is given by \[\omega(\delta_{1}\phi,\delta_{2}\phi)=\frac{k}{2\pi}\Big{(}\delta_{2}X^{I} \delta_{1}A_{I}-\delta_{1}X^{I}\delta_{2}A_{I}\Big{)} \tag{178}\] where we used PSM variables (26) for convenience and denote the collection of fields by \(\phi\). Contracting in a diffeomorphism generated by some vector field \(\xi\) on the worldsheet and evaluating on-shell yields the fundamental theorem of covariant phase space \[\omega(\delta\phi,\delta_{\xi}\phi)\approx{\rm d}\Big{(}\delta Q_{\xi}-Q_{ \delta\xi}-i_{\xi}\Theta(\delta\phi)\Big{)}=:{\rm d}\not{\delta}{\cal Q}_{\xi} \tag{179}\] where \(Q_{\xi}\) is the Noether-Wald charge. The variation of the codimension-2 charges is given by \[\not{\delta}{\cal Q}_{\xi}=\frac{k}{2\pi}\xi^{\mu}A_{I\,\mu}\,\delta X^{I}\, \tag{180}\] which just reproduces the special case \(\lambda_{I}=A_{I\mu}\xi^{\mu}\) of the more general result (168). We choose \(\xi\) to be the Carrollian Killing vector associated with unit time translations, \[\xi=\partial_{t} \tag{181}\] which implies \(\omega(\delta\phi,\delta_{\xi}\phi)=0\). Additionally, we pick a constant time hypersurface \(\Sigma\) extending from the point \({\cal E}=\{p\in{\cal M}:X_{\rm H}=0\}\) in the interior to a point \({\cal B}\in\partial{\cal M}\) on the asymptotic boundary such that \(\partial\Sigma={\cal E}\cup{\cal B}\). 
Integrating (179) over \(\Sigma\) leads to the on-shell identity \[\int_{\Sigma}\omega(\delta\phi,\delta_{\xi}\phi)\approx\int_{\Sigma}{\rm d}\not{\delta}{\cal Q}_{\xi}=\not{\delta}{\cal Q}_{\xi}\Big{|}_{\cal B}-\not{\delta}{\cal Q}_{\xi}\Big{|}_{\cal E}=0. \tag{182}\] Explicitly, \(\not{\delta}\mathcal{Q}_{\xi}\) reads on-shell \[\not{\delta}\mathcal{Q}_{\xi}=\frac{k}{2\pi}\Big{(}e^{Q}\Big{(}V-\frac{U}{2}X_{\rm H}^{2}\Big{)}\delta X-e^{Q}X_{\rm H}\delta X_{\rm H}\Big{)} \tag{183}\] and evaluates at the two points of \(\partial\Sigma\) to \[\not{\delta}\mathcal{Q}_{\xi}\Big{|}_{\mathcal{B}}=\frac{k}{2\pi}\,\delta M\qquad\qquad\qquad\quad\not{\delta}\mathcal{Q}_{\xi}\Big{|}_{\mathcal{E}}=\frac{k}{2\pi}e^{Q}V\,\delta X\Big{|}_{\mathcal{E}}. \tag{184}\] From (182) we therefore find \[\frac{k}{2\pi}\,\delta M=\Big{(}\frac{e^{Q(X)}V(X)}{2\pi}\,k\,\delta X\Big{)}\Big{|}_{\mathcal{E}} \tag{185}\] which together with the results (170) and (175) takes the form of a first law, \[\delta E=T\,\delta S. \tag{186}\] The new thermodynamic quantity defined here is the entropy \[\boxed{\,\,S:=k\,X_{\rm min}\,.\,} \tag{187}\] and arises as a Noether charge, just as in the relativistic case. Its functional form in terms of the dilaton also matches precisely with the one of relativistic dilaton gravity [73]. In words, entropy is given by the value of the dilaton at the Carroll extremal surface (defined in the next Section), times the coupling constant. ### A word about dimensions If we assign standard units to all variables in the action, then entropy turns out to have a non-standard dimension of velocity. The quickest way to see this is first to verify that the connection \(\omega\) necessarily has the dimension of inverse velocity, assuming that \(\tau\) has time dimension and \(e\) length dimension. This statement follows from the Carroll torsion equation (8b). The first term in the action (2) has a dimension of \([kX\omega]\) and, in units where \(\hbar=1\), this combination must be dimensionless. Thus, we find that the dimension of entropy (187), \([S]=[kX]=\frac{\rm length}{\rm time}=\rm velocity\), is unusual. If one wants to recover a dimensionless entropy (e.g. measured in \(e\)-bits), one has to introduce a velocity as a conversion factor. In Carrollian theories, there is no natural choice for such a conversion factor, but if viewed as an expansion of a Lorentzian theory, we have a velocity available, namely the speed of light. Even for intrinsic Carrollian theories, we shall assume the presence of some quantity with the dimension of velocity and convert time into length. This assumption permits \(\tau\) to have length dimension and hence \(\omega\) to be dimensionless. In the following, we denote the length dimensions by integers \([\bullet]=n\), meaning that the corresponding quantity \(\bullet\) has length dimension \(n\) (if \(n\) is negative, the quantity \(\bullet\) has corresponding inverse length dimension). For instance, our choice above means \([e]=[\tau]=[r]=[t]=1\) and \([\omega]=0\). The remaining freedom is which length dimension to assign to the dilaton field. From an intrinsic 2d perspective, the only natural choice is to assume dimensionless dilaton, \([X]=0\), implying also \([k]=0\). The dimensions of all other quantities follow from this assignment and compatibility with the equations of motion (8): \([\mathcal{V}]=-2\), \([X_{\rm H}]=[X_{\rm P}]=[M]=[w]=[v]=-1\), \([\Omega]=0\), \([e^{Q}]=+1\).
The only subtlety (known already from the Lorentzian counterpart) is the last assignment and can be attributed to a length dimension carried by the (otherwise irrelevant) multiplicative integration constant inherent to the definition of \(e^{Q}\), see (142).9 Footnote 9: Since \([X]=0\) but \([\mathcal{V}]=-2\), the potential generically contains some relevant coupling constant. We can always use an appropriate power of that constant for the multiplicative integration constant in \(e^{Q}\). We deduce the dimensions of our thermodynamical quantities as \([E]=[T]=-1\) and \([S]=0\). In particular, the entropy is dimensionless, and inverse temperature \(\beta\) has length dimension consistently with our starting point of assigning time a length dimension. Our conclusions of this Section are in line with previous results in the literature: standard thermal partition functions in Carroll theories are not well-defined [13, 74], and Carroll quantum field theories suffer from infinite degeneracies [13, 74, 75] that persist for finite volumes [74], so applying the usual rules of statistical mechanics to simple Carroll systems (such as free quantum fields and gases of free particles) does not lead to sensible results. This is not to say that there is no notion of Carroll thermodynamics but the rules of the game are yet to be spelt out. Our notion of thermodynamics for Carroll black holes developed in this Section, including the dimensional analysis above, are steps in this direction. ### Specific heat With the quantities obtained so far, we define a Carrollian specific heat as \[C:=\frac{\mathrm{d}E}{\mathrm{d}T}=T\,\frac{\mathrm{d}S}{\mathrm{d}T} \tag{188}\] yielding \[C=k\,\frac{w^{\prime}(X_{\text{\tiny min.}})}{w^{\prime\prime}(X_{\text{\tiny min.}})}. \tag{189}\] The equivalence of this expression to the Lorentzian case [76] is worth highlighting. Assuming positive temperature, \(w^{\prime}(X_{\text{\tiny min.}})>0\), specific heat is positive if and only if \(w^{\prime\prime}(X_{\text{\tiny min.}})>0\). Having investigated the C-thermal properties of linear dilaton solutions with Carrollian structure singularities, we define Carroll extremal surfaces and Carroll black holes in the next Section. ## 4 Carroll extremal surfaces In this Section, we introduce the geometric notion of Carroll extremal surfaces, guided by corresponding Lorentzian results. We start in Subsection 4.1 with a translation of standard (relativistic) extremal surfaces into the PSM formulation. We copy this definition in Subsection 4.2 and apply it to define Carroll extremal surfaces. We translate back this definition into first- and second-order formulations of Carroll gravity in Subsection 4.3. Finally, we collect our results to define Carroll black holes in Subsection 4.4. ### Standard extremal surfaces in PSM formulation Our first task is to translate the notion of an extremal surface (both null expansions vanish, see, e.g. [46]) into the PSM formulation. For relativistic 2d dilaton gravity in the PSM formulation the Poisson tensor is given by [64] \[P^{IJ}=\begin{pmatrix}0&-X^{+}&X^{-}\\ P^{+\chi}=-P^{\chi+}&0&\mathcal{V}(X,\,X^{+}X^{-})\\ P^{-\chi}=-P^{\chi-}&P^{-+}=-P^{+-}&0\end{pmatrix} \tag{190}\] and the worldsheet metric is written in terms of the zweibein as \[\mathrm{d}s^{2}=2e^{+}e^{-}. 
\tag{191}\] On-shell the latter is given by \[\mathrm{d}s^{2}=2e^{Q}\,\mathrm{d}v\ \mathrm{d}X+2e^{2Q}X^{+}X^{-}\ \mathrm{d}v^{2} \tag{192}\] where \(Q\) is a known function of the dilaton \(X\) and of an integration constant, the mass \(M\). The Lorentz invariant combination \(X^{+}X^{-}\) also can be expressed as such a function, using the conserved Casimir inherent to the Poisson tensor (190). The metric (192) allows expressing worldsheet features in terms of conditions on the target space coordinates \(X^{I}\). In the table below we summarize the relativistic interpretation of various loci on the worldsheet in terms of the signs of \(X^{\pm}\). \begin{tabular}{c|c c c} signs & \(X^{+}>0\) & \(X^{+}<0\) & \(\mathbf{X}^{+}=\mathbf{0}\) \\ \hline \(X^{-}>0\) & anti-trapped & anti-normal & marginally anti-trapped \\ \(X^{-}<0\) & normal & trapped & marginally trapped \\ \(\mathbf{X}^{-}=\mathbf{0}\) & marginally anti-trapped & marginally trapped & **extremal** \\ \end{tabular} For Kruskal-type of spacetimes "normal" refers to the outside region, "anti-normal" to the second outside region, "trapped" to the black hole region, "anti-trapped" to the white hole region, "marginally trapped" and "marginally anti-trapped" to the bifurcate Killing horizon and "extremal" to the bifurcation point. See Fig. 2 for a reminder. A nice way of expressing what is special about extremal surfaces from a PSM perspective is to consider the action of relativistic boosts on the target space coordinates, \[\delta_{\lambda}X=0\qquad\qquad\delta_{\lambda}X^{\pm}=\mp X^{\pm}\,\lambda\,. \tag{193}\] Comparing with the table above, marginally trapped or anti-trapped surfaces are fixed lines (though not fixed-point lines) under boosts, since e.g. every locus \(X^{+}=0\) is mapped to another locus where \(X^{+}=0\). This provides a target space notion of marginally (anti-)trapped surfaces as fixed lines under boosts. Similarly, by inserting the definition of extremal loci from the table above, we see that extremal surfaces are fixed points with respect to boosts: all the target space coordinates are invariant under boosts on the extremal locus. PSM definition of relativistic extremal surfaces.Relativistic extremal surfaces are loci in the PSM target space that are fixed points under relativistic boosts. Figure 2: Kruskal diagram of Lorentzian eternal black hole This is the kind of property we were after. We have a definition of extremal surfaces as loci in the target space where the Poisson tensor is invariant under the gauge symmetries associated with boosts. Since we do not need the boosts to be relativistic for this definition to apply, it readily generalizes to Carroll boosts and thus allows defining Carroll extremal surfaces, which we shall do in the next Subsection. Before doing so, we translate back the definition above into a more familiar language. In the second-order formulation, the target space coordinates \(X^{\pm}\) do not exist but are replaced on-shell by directional derivatives of the dilaton field projected along the vielbein components. \[X^{\pm}\approx\pm e^{\mu}_{\pm}\partial_{\mu}X \tag{194}\] An extremal surface in the second-order formulation is thus a locus where the dilaton has a saddle point or an extremum. \[\text{relativistic extremal surface:}\qquad e^{\mu}_{\pm}\partial_{\mu}X=0 \qquad\qquad X>0 \tag{195}\] This is compatible with higher-dimensional intuition if 2d dilaton gravity is viewed as a dimensional reduction from higher-dimensional gravity. 
The condition of positive \(X\) was added to eliminate ludicrous cases of fake extremal surfaces, which from a higher-dimensional perspective are spherical coordinate singularities, and from a 2d perspective, are singular loci where the effective gravitational coupling diverges.10 Footnote 10: A simple example of such a fake extremal surface is the centre of 4d Minkowski space in spherical coordinates, with \(X=r^{2}\). The quantity \(e^{\mu}_{\pm}\partial_{\mu}X\propto\pm\partial_{r}X=2r\) vanishes at the origin \(r=0\). Similarly, we define (non-extremal) eternal black holes in terms of thermal and target space properties: PSM definition of relativistic eternal black holes.Relativistic eternal black holes are thermal states with finite entropy that have a relativistic extremal surface. Thermality is needed to exclude extremal black holes, finite entropy is needed to exclude constant dilaton solutions, and the presence of a relativistic extremal surface is needed to ensure there is a special locus in target space that lies on the bifurcate Killing horizon of the worldsheet geometry. We are ready for the Carrollian generalization of extremal surfaces and black holes. ### Carroll extremal surfaces in PSM formulation Trying to mimic the relativistic classification of loci is less rich in the Carroll case, since the target space coordinate \(X_{\text{\tiny P}}\) does not appear on the right-hand side of the Carroll boost transformation laws (4), which we re-display here for convenience. \[\delta_{\lambda}X=\delta_{\lambda}X_{\text{\tiny H}}=0\qquad\qquad\delta_{ \lambda}X_{\text{\tiny P}}=X_{\text{\tiny H}}\,\lambda \tag{196}\] As expected, there is no natural notion of a Carroll horizon (since "everything moves with the speed of light"). However, there still is the notion of an extremal surface, as evident from (196), namely when \(X_{\text{\tiny H}}=0\). The three different cases are summarized in the table below. (We label negative \(X_{\text{\tiny H}}\) as "normal" since, in most applications, \(X_{\text{\tiny H}}\) is negative between the asymptotic region and the extremal locus.) \begin{tabular}{c|c c c} signs & \(X_{\text{\tiny H}}>0\) & \(X_{\text{\tiny H}}<0\) & \(X_{\text{\tiny H}}=0\) \\ \hline & anti-normal & normal & **extremal** \\ \end{tabular} Thus, we have a similar definition of Carroll extremal surfaces as in the relativistic case.11 Footnote 11: Perhaps also \(|X_{\text{H}}|=\infty\) has a similar interpretation, but since such loci typically are not part of the physical Carroll spacetime, we disregard this possibility here. These loci could appear at asymptotic boundaries, for instance, separating future and past null infinity. PSM definition of Carroll extremal surfaces.Carroll extremal surfaces are loci in the PSM target space that are fixed points under Carroll boosts. Note that every line of constant \(X\) and \(X_{\text{\tiny H}}\) (but varying \(X_{\text{\tiny P}}\)) is a fixed-line under Carroll boosts, so in that sense, every such line is "marginally (anti-)trapped" and the whole Carroll geometry could be viewed as a "horizon". On this "horizon" there can still be an exceptional point, the Carroll extremal surface, reminiscent of the bifurcation surface of relativistic black holes. There is no Carrollian analogue of Carter-Penrose diagrams, but we can still draw diagrams similar to Fig. 2 to highlight different regions in the Carroll manifold with different signs of \(X_{\text{\tiny H}}\). Naturally, the diagrams in Fig. 
3 are less rich in structure since there are fewer possibilities in the table above compared to the Lorentzian case. While we chose to draw the lines at \(45^{\circ}\) (as a reminder that null hypersurfaces have Carrollian structures), at this stage there is no significance to this angle. From a limiting perspective, one can understand these diagrams as emerging from infinite boosts of \(t=\text{const.}\) hypersurfaces ("wormholes") of Lorentzian black hole Carter-Penrose diagrams. The three diagrams in Fig. 3 represent the same entity and emphasise different ways of boosting the constant time slice of the parent Carter-Penrose diagram. For instance, in the right diagram, the \(t=\text{const.}\) hypersurface in the parent Carter-Penrose diagram is boosted all the way to the future event horizon to both sides of the extremal surface. We shall elaborate on the relation to the wormhole picture in a higher-dimensional context in Section 6. ### Carroll extremal surfaces in first- and second-order formulations In the second-order version, the quantity \(X_{\text{\tiny H}}\) does not exist, but through the equations of motion (8) it is related on-shell to the directional derivative of the dilaton field projected onto the spatial inverse vielbein, \[X_{\text{\tiny H}}\approx-e^{\mu}\,\partial_{\mu}X\,. \tag{197}\] Thus, in the second-order formulation, the criterion for an extremal surface is that the directional derivative of the dilaton field projected onto the spatial inverse vielbein vanishes. \[\boxed{\text{Carroll extremal surface:}}\qquad e^{\mu}\,\partial_{\mu}X=0 \qquad\qquad X>0 \tag{198}\] Figure 3: Three diagrams to visualize Carroll black holes This definition is analogous to the relativistic one (195) and is on-shell Carroll boost invariant, since \(\delta_{\lambda}e^{\mu}\,\partial_{\mu}X=-\lambda v^{\mu}\,\partial_{\mu}X\approx X _{\text{\tiny H}\lambda}\,v^{\mu}e_{\mu}=0\). Note that one could add to (198) for free the condition \(v^{\mu}\,\partial_{\mu}X=0\) since \(X\) does not depend on \(t\) on-shell. ### Carroll black holes Equipped with our definition of extremal surfaces, we define Carroll black holes. **Definition of Carroll black holes.** Carroll black holes are C-thermal states with finite entropy that have a Carroll extremal surface. In particular, we need the condition of finite entropy to exclude Carrollian constant dilaton solutions, which definitely should not be referred to as Carroll black holes. In the next three Sections, we apply the results and definitions above to several examples. ## 5 Examples for 2d Carroll black holes In this Section, we apply our general analysis to specific models, the Carroll JT model, the Carroll-Schwarzschild model, the Carroll CGHS model, and the Carroll Witten black hole. In each case, we ignore the constant dilaton vacua and focus exclusively on the linear dilaton sector. The Carrollian thermodynamic quantities of solutions in the black hole sector of these models are listed in Table 1. The table also includes some other, more generic cases like the Carroll-Schwarzschild-Tangherlini solution and the Carroll \(ab\)-family. In the first two Subsections 5.1-5.2, we show the spectrum, thermodynamics, and an example for boundary conditions of the CJT model. In Subsection 5.3, we present the 2d perspective of the Carroll-Schwarzschild black hole. In Subsection 5.4, we investigate the CCGHS model. In Subsection 5.5, we present the Carroll-Witten black hole. 
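As an independent cross-check of Table 1 below, the following minimal sympy sketch (ours, not part of the original analysis; it assumes a standard sympy installation, and the helper name `c_thermo` is ours) reproduces the CJT, CCGHS, and CWBH rows from the general formulas (154), (170), (175), (187), and (189):

```python
import sympy as sp

X, M, k, ell, Lam, lam = sp.symbols('X M k ell Lambda lambda', positive=True)

def c_thermo(w):
    """Carroll thermodynamics from the integrated potential w(X):
    X_min solves w(X_min) = M   [Carroll extremal surface, X_H = 0, cf. (154)]
    E = k M/(2 pi)              (170)
    T = w'(X_min)/(2 pi)        (175)
    S = k X_min                 (187)
    C = k w'(X_min)/w''(X_min)  (189)"""
    Xmin = [s for s in sp.solve(sp.Eq(w, M), X) if s.is_negative is not True][0]
    wp, wpp = sp.diff(w, X), sp.diff(w, X, 2)
    E = k*M/(2*sp.pi)
    T = sp.simplify(wp.subs(X, Xmin)/(2*sp.pi))
    S = sp.simplify(k*Xmin)
    C = sp.simplify((k*wp/wpp).subs(X, Xmin)) if wpp != 0 else sp.oo
    return E, T, S, C

print(c_thermo(X**2/(2*ell**2)))  # CJT:   T = sqrt(2M)/(2 pi ell), S = C = k ell sqrt(2M) = 2 pi k ell^2 T
print(c_thermo(Lam*X))            # CCGHS: T = Lambda/(2 pi), S = k M/Lambda, C -> infinity
print(c_thermo(lam**2*X/2))       # CWBH:  T = lambda^2/(4 pi), S = 2 k M/lambda^2, C -> infinity
```

The remaining rows follow in the same way from their integrated potentials \(w(X)\), up to the state-independent prefactors that Table 1 suppresses.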
\begin{table} \begin{tabular}{||c||c c c|c c c c||} \hline Model & \(U(X)\) & \(V(X)\) & \(w(X)\) & \(E\) & \(T\) & \(S\) & \(C\) \\ \hline \hline CJT & 0 & \(\frac{X}{\ell^{2}}\) & \(\frac{X^{2}}{2\ell^{2}}\) & \(\frac{k}{2\pi}M\) & \(\frac{\sqrt{2M}}{2\pi\ell}\) & \(k\ell\sqrt{2M}\) & \(2\pi k\ell^{2}T\) \\ CS & \(-\frac{1}{2X}\) & \(\frac{\lambda^{2}}{4}\) & \(\frac{\lambda^{2}}{2}\sqrt{X}\) & \(\frac{k}{2\pi}M\) & \(\propto\frac{1}{M}\) & \(\propto M^{2}\) & \(\propto-T^{-2}\) \\ CST & (36) & (36) & \(\frac{\lambda^{2}}{2}X^{\frac{D-3}{D-2}}\) & \(\frac{k}{2\pi}M\) & \(\propto M^{\frac{1}{3-D}}\) & \(\propto M^{\frac{D-2}{D-3}}\) & \(\propto-T^{2-D}\) \\ CCGHS & 0 & \(\Lambda>0\) & \(\Lambda X\) & \(\frac{k}{2\pi}M\) & \(\frac{\Lambda}{2\pi}\) & \(\frac{kM}{\Lambda}\) & \(\infty\) \\ CWBH & \(-\frac{1}{X}\) & \(\frac{\lambda^{2}}{2}X\) & \(\frac{\lambda^{2}}{2}X\) & \(\frac{k}{2\pi}M\) & \(\frac{\lambda^{2}}{4\pi}\) & \(\frac{2kM}{\lambda^{2}}\) & \(\infty\) \\ Cab & \(-\frac{a}{X}\) & \(\frac{B}{2}X^{a+b}\) & \(\frac{B}{2(b+1)}X^{b+1}\) & \(\frac{k}{2\pi}M\) & \(\propto M^{\frac{b}{b+1}}\) & \(\propto M^{\frac{1}{b+1}}\) & \(\frac{k}{b}\left(\frac{4\pi T}{B}\right)^{\frac{1}{b}}\) \\ \hline \end{tabular} \end{table} Table 1: Carrollian thermodynamic quantities for the Carroll JT model (CJT), Carroll–Schwarzschild (CS), Carroll–Schwarzschild–Tangherlini (CST) in \(D\) spacetime dimensions, Carroll CGHS (CCGHS), Carroll Witten black hole (CWBH), and the Carroll \(ab\)-family (Cab). As \(w^{\prime\prime}=0\) for CCGHS and CWBH the specific heat diverges for these models. In some expressions, we left out state-independent prefactors for brevity, as indicated by \(\propto\). ### Carroll JT model The Jackiw-Teitelboim (JT) model [49, 50] was the first 2d model of gravity. It is particularly elegant, as it allows a reformulation as non-abelian BF theory [62, 63], in contrast to nearly all other 2d dilaton gravity models. All its solutions are locally (A)dS\({}_{2}\) so that JT gravity is tailor-made for a holographic description [77, 78, 79, 80, 81, 82, 83]. Especially the SYK/JT correspondence [84, 85, 86, 87] has reinvigorated the interest in JT gravity and its holographic description. Due to its simple BF formulation, the JT model was the starting point for Carrollian limits of 2d dilaton gravity [43]. Here, we summarize the key results for the Carroll JT (CJT) model, and in Subsection 5.2, we discuss boundary conditions for CJT. The CJT model is given by the Lagrangian (3) with the potentials \[U_{\text{CJT}}(X)=0\hskip 28.452756ptV_{\text{CJT}}(X)=\frac{1}{\ell^{2}}\,X\,. \tag{199}\] The function \(w\) defined in (130) for CJT is given by \[w_{\text{CJT}}(X)=\frac{X^{2}}{2\ell^{2}}\,. \tag{200}\] Applying our general analysis of Section 3 to the choice (199) yields the linear dilaton solutions (we fix the integration constant coming from integrating (133) without loss of generality by a shift of the origin of the spatial coordinate \(r\), and we take the positive branch of the square-root function) \[X =\frac{1}{2}\,e^{r/\ell}+M\ell^{2}\,e^{-r/\ell} \omega =\frac{X}{\ell^{2}}\,\,\text{d}t \tag{201a}\] \[X_{\text{H}} =-\frac{1}{2\ell}\,e^{r/\ell}+M\ell\,e^{-r/\ell} \tau =-X_{\text{H}}\,\,\text{d}t\] (201b) \[X_{\text{P}} =0 e =\text{d}r\,.
\tag{201c}\] Translating the 1-forms into second-order notation, the solution above reads \[\text{d}s^{2}=\text{d}r^{2}\hskip 56.905512ptv=\frac{2\ell\,e^{-r/\ell}}{2M \ell^{2}\,e^{-2r/\ell}-1}\,\partial_{t}\hskip 56.905512ptX=\frac{1}{2}\,e^{r/ \ell}+M\ell^{2}\,e^{-r/\ell}\,. \tag{202}\] In Fig. 4 we depict a constant \(X_{\text{P}}\) slice of the PSM target space associated with the CJT model.12 The spectrum of CJT falls into three classes, depending on the sign of the mass parameter \(M\): Footnote 12: For positive mass, at the Carroll extremal surface \(X=\ell\sqrt{2M}\) the solution can be joined to one where \(X_{\text{H}}\to-X_{\text{H}}\) and hence \(v\to-v\). * \(M<0\): no Carroll black hole, since \(X_{\text{H}}<0\) everywhere, reminiscent of the global AdS\({}_{2}\) solution of JT * \(M=0\): limiting case, where \(X_{\text{H}}\to 0\) as \(r\to-\infty\), reminiscent of the Poincare horizon of the massless JT solution * \(M>0\): Carroll black holes, since \(X_{\text{H}}=0\) has the solution \(X=\ell\sqrt{2M}\) or, equivalently, \(r=\frac{\ell}{2}\,\ln(2M\ell^{2})\), reminiscent of black hole solutions of JT We focus on the positive mass sector since it features Carroll extremal surfaces. Furthermore, to have positive entropy we restrict to the branches with \(X>0\). Energy, entropy, temperature, and specific heat of these solutions are given in Table 1, and the first law is satisfied, as shown in Section 3.3. Expressing the entropy as a function of the energy shows a relation similar to the Cardy formula for a chiral half of a 2d conformal field theory, \[S=\frac{\pi^{2}c\,T}{3}=2\pi\sqrt{\frac{c\,E}{6}} \tag{203}\] provided the central charge is chosen as \[c=\frac{6k\ell^{2}}{\pi}. \tag{204}\] This is again reminiscent of the relativistic case [70]. ### Example of boundary conditions for CJT One can interpret (202) as radial Gaussian coordinates and provide a "Fefferman-Graham" expansion for the vector field and the dilaton \[v=2\ell e^{-r/\ell}\,\big{(}-1+\mathcal{O}(e^{-2r/\ell})\big{)}\,\partial_{t} \qquad\qquad X=\frac{1}{2}\,e^{r/\ell}\,\big{(}1+\mathcal{O}(e^{-2r/\ell}) \big{)} \tag{205}\] where the leading terms are fixed, and the subleading terms contain state-dependent information. Similarly to JT gravity, there are numerous inequivalent choices for boundary conditions [70]. It is not our intention to exhaustively discuss the possibilities for CJT gravity. Instead, we provide just one example for boundary conditions and leave a more comprehensive study for future work. The Brown-Henneaux-like boundary conditions \[X =\frac{1}{2}\,e^{r/\ell}+M(t)\,\ell^{2}e^{-r/\ell} \omega =\frac{1}{2\ell^{2}}\,e^{r/\ell}\,\,\mathrm{d}t+\mathcal{O}(e^{-r /\ell}) \tag{206a}\] \[X_{\text{\tiny H}} =-\frac{1}{2\ell}\,e^{r/\ell}+M(t)\,\ell\,e^{-r/\ell} \tau =\frac{1}{2\ell}\,e^{r/\ell}\,\,\mathrm{d}t+\mathcal{O}(e^{-r/\ell})\] (206b) \[X_{\text{\tiny P}} =0 e =\mathrm{d}r \tag{206c}\] Figure 4: Target space picture of Carroll JT by plotting the level sets of (143), restricted to the region \(X\geq 0\). Extremal points are red circles and exist only for \(M>0\). The solutions given in (201) cover the lower half of this diagram. with \(\delta M\neq 0\) are preserved by the gauge transformations (4)-(6) with gauge parameters \(\lambda_{\text{\tiny P}}=0\) and \[\lambda =e^{r/\ell}\,\frac{\eta}{\ell}+2\ell M(t)\eta\,e^{-r/\ell} \lambda_{\text{\tiny H}} =e^{r/\ell}\,\eta-2\ell^{2}M(t)\eta\,e^{-r/\ell} \tag{207}\] where \(\eta\) is the transformation parameter. 
The equations of motion (8) are solved by the field configuration (206), up to subleading terms (which can be determined in closed form, if desired). On-shell the mass function \(M(t)\) is given by the Casimir \(M=\frac{X^{2}}{2\ell^{2}}-\frac{X_{\text{\tiny H}}^{2}}{2}\). The variation of the boundary charges \[\delta\mathcal{Q}[\lambda_{I}] =\frac{k}{2\pi}\left(\lambda\,\delta X+\lambda_{\text{\tiny H}} \,\delta X_{\text{\tiny H}}+\lambda_{\text{\tiny P}}\,\delta X_{\text{\tiny P}}\right) \tag{208}\] in the present case can be integrated (in field space) to a single boundary charge \[\mathcal{Q}[\eta] =\frac{k\ell}{\pi}\,\eta\,M(t) \tag{209}\] which is finite as the radial coordinate approaches the asymptotic boundary, \(r\to\infty\). On-shell it is also conserved, \(\partial_{t}\mathcal{Q}[\eta]\approx 0\). We assumed here a slicing of the phase space where \(\eta\) is state-independent (see, e.g., [88, 64] for a discussion of different phase space slicings in Lorentzian 2d dilaton gravity). The asymptotic symmetry algebra trivially is abelian in the present case since we have only one boundary charge, namely the Casimir \(M\). \[\{\mathcal{Q}[\eta_{1}],\,\mathcal{Q}[\eta_{2}]\} \approx\delta_{\eta_{2}}\mathcal{Q}[\eta_{1}] \approx\frac{k\ell}{\pi}\,\eta_{1}\,\delta_{\eta_{2}}M =0 \tag{210}\] ### Carroll-Schwarzschild black hole, 2d perspective As reviewed in Section 2.1.2, spherical reduction of Einstein gravity leads to a specific 2d dilaton gravity model, the solutions of which reproduce the Schwarzschild black hole. There is an expansive history of spherically reduced gravity [89, 90, 91, 92] that predates the developments of 2d dilaton gravity. Here, we consider the Carrollian limit of the Schwarzschild black hole from a 2d perspective. See Section 6 for a 4d perspective. The spherically reduced Carroll-Schwarzschild (CS) model is given by the potentials (36), which for \(D=4\) are \[U_{\text{\tiny CS}}(X) =-\frac{1}{2X} V_{\text{\tiny CS}}(X) =\frac{\lambda^{2}}{4}. \tag{211}\] The functions \(w_{\text{\tiny CS}}\) and \(e^{Q_{\text{\tiny CS}}}\) are \[e^{Q_{\text{\tiny CS}}} =\frac{1}{2\sqrt{X}} w_{\text{\tiny CS}}(X) =\frac{\lambda^{2}}{2}\sqrt{X} \tag{212}\] where we chose the integration constant of the second integral in (142) accordingly. This model is described by a target space diagram given in Fig. 5. The solutions never take negative values of the dilaton. Moreover, the black hole sector of the model is given by \(M>0\) as all other solutions do not lead to states with finite entropy \(S\sim X_{\text{\tiny cat.}}\). Let us choose \(\lambda=2\) for convenience. This implies that the dilaton measures the surface radius as seen from the higher-dimensional setting, i.e., the spherical part of the 4d metric reads \(X\,\mathrm{d}\Omega_{S^{2}}^{2}\) (see also (31), (34)). Applying our analysis from Section 3 yields \[X_{\mathrm{\,H}} =-\sqrt{4X-4M\sqrt{X}} \omega =\frac{\sqrt{X}-M}{2X}\,\mathrm{d}t \tag{213}\] \[X_{\mathrm{\,P}} =0 \tau =-\frac{X_{\mathrm{\,H}}}{2\sqrt{X}}\,\mathrm{d}t\] (214) \[e =\mathrm{d}r \tag{215}\] and a 2d Carrollian curvature scalar \[R =4e^{\mu}v^{\nu}\partial_{[\mu}\hat{\omega}_{\nu]}=-\frac{2M}{X^{ \frac{3}{2}}} \hat{\omega} =\omega-U(X)X_{\mathrm{\,H}}\tau \tag{216}\] where \(\hat{\omega}\) is defined as the torsion-free part of the Carrollian connection [see (19)]. 
The Carrollian second-order variables read \[v =-\frac{1}{\sqrt{1-\frac{M}{\sqrt{X}}}}\partial_{t} h =\frac{\mathrm{d}X^{2}}{4X-4M\sqrt{X}} \tag{217}\] where the vector field is normalized asymptotically, \(\lim_{X\to\infty}v=-\partial_{t}\). For simplicity, in these solutions, the ambiguity in the torsion-free spin connection was fixed to \(\rho=0\). To bring this into a more familiar form, we can define the radial coordinate \[\mathsf{r}^{2}=X \tag{218}\] which together with the Schwarzschild mass \(m=\frac{M}{2}\) leads to \[v =-\frac{1}{\sqrt{1-\frac{2m}{r}}}\partial_{t} h =\frac{\mathrm{d}\mathsf{r}^{2}}{1-\frac{2m}{r}}. \tag{219}\] Figure 5: Target space picture of spherically reduced Carroll–Schwarzschild black hole. Extremal points are red circles and exist only for \(M>0\). The other symplectic leaves do not exhibit such points as for \(M=0\) the point would be at \(X=0\) and for \(M<0\) the leaves do not contain points with \(X_{\mathrm{\,H}}=0\) at all (they are not simply connected). The black hole sector is thus given by \(M>0\). The solutions (213)-(215) describe the lower half of the diagram. The Carrollian thermodynamic quantities for the Carroll-Schwarzschild black hole \[E=\frac{k}{\pi}\,m T=\frac{1}{8\pi m} S=4km^{2} \tag{220}\] satisfy the first law,13 Footnote 13: From a 4d perspective, the coupling constant \(k\) is given by \(\pi/G_{M}\) in units where \(\lambda=2\), see the Carrollian version of Eq. (37). \[\delta E=T\,\delta S. \tag{221}\] Generalizing Carroll-Schwarzschild to Carroll-Schwarzschild-Tangherlini is straightforward, and the main results are summarized in Table 1. In Section 6, we provide the 4d perspective on these solutions. ### Carroll CGHS The Callan-Giddings-Harvey-Strominger (CGHS) model [93] is a 2d toy model for black hole evaporation. It consists of a Lorentzian 2d dilaton gravity action with potentials \(U=0\), \(V=\Lambda=\text{const.}\) plus some minimally coupled scalar fields as carriers of the Hawking quanta. In our work, we always neglect interactions with matter, so when we refer to the CGHS model or its Carrollian counterpart, we solely mean the geometric part of the model without matter. Besides the JT model, the CGHS model is arguably the simplest 2d dilaton gravity model. A more precise version of this statement is that only the JT and the CGHS model permit a reinterpretation of the corresponding PSM as non-abelian BF theory. This is the main reason why the CGHS model was the first one to receive a holographic interpretation [94] after the JT model. The Carrollian limit of the CGHS model (CCGHS) has the same potentials \[U_{\text{CCGHS}}=0 V_{\text{CCGHS}}=\Lambda=\text{const.}>0\,. \tag{222}\] The solutions of the linear dilaton sector are \[X =\frac{\Lambda}{2}r^{2}+\frac{M}{\Lambda} \omega =\Lambda\,\text{d}t \tag{223}\] \[X_{\text{H}} =-\Lambda r \tau =-X_{\text{H}}\,\text{d}t\] (224) \[X_{\text{P}} =0 e =\text{d}r \tag{225}\] leading to a flat Carrollian spacetime, i.e., \(R=0\). Here, the radial coordinate was fixed such that \(r=0\) corresponds to \(X_{\text{H}}=0\). Investigating the spectrum of this model shows that Carroll extremal surfaces exist only for positive values of \(M\) (see Fig. 6). The various Carrollian thermodynamic quantities of these solutions are given in Table 1. As the temperature is fixed to a single specific value in terms of the model-dependent constant \(\Lambda\), the specific heat diverges. ### Carroll Witten black hole The Witten black hole [51, 52, 53] emerges from 2d string theory. 
From the worldsheet perspective, it is described by an \(\text{SL}(2,\mathbb{R})/U(1)\) gauged WZW-model. Interpreting the vanishing of the \(\beta\)-functions of this conformal field theory as target space equations of motion yields as target space action a Lorentzian 2d dilaton gravity model with potentials \(U=-\frac{1}{X}\) and \(V=\frac{\lambda^{2}}{2}\,X\), where \(\lambda^{2}\propto 1/\alpha^{\prime}\) with the inverse string tension \(\alpha^{\prime}\). As common in the literature, we use the phrase "Witten black hole" as a label for the conformal field theory, the target space theory, and the positive mass spectrum of solutions to the latter. The Euclidean continuation of the Witten black hole is the famous cigar geometry, \(\mathrm{d}s^{2}=\mathrm{d}r^{2}+\tanh^{2}r\;\mathrm{d}\tau^{2}\). For more details on the Witten black hole, see Section 2.1.2 in [55] and Refs. therein. The Carroll limit of the Witten black hole features the same potentials \[U_{\mathrm{\tiny CWBH}}=-\frac{1}{X}\qquad\qquad V_{\mathrm{\tiny CWBH}}=\frac {\lambda^{2}}{2}\,X>0 \tag{226}\] and is referred to as Carroll Witten black hole. Analogously to its Lorentzian avatar (see e.g. [95]), it emerges as \(D\to\infty\) limit of the CST black hole and is conformally related to the CCGHS model by a dilaton-dependent Weyl rescaling. In the linear dilaton sector, the CWBH solutions \[X =\frac{2M}{\lambda^{2}}\,\cosh^{2}\frac{\lambda r}{2} \omega =\left(\lambda^{2}-\frac{M}{X}\right)\,\mathrm{d}t \tag{227}\] \[X_{\mathrm{\tiny H}} =-\sqrt{\lambda^{2}X^{2}-2MX} \tau =-\frac{X_{\mathrm{\tiny H}}}{X}\,\mathrm{d}t\] (228) \[X_{\mathrm{\tiny P}} =0 \qquad\qquad\qquad\qquad\qquad e=\mathrm{d}r \tag{229}\] lead to the same thermodynamical behaviour as the CCGHS model. In fact, all thermodynamic formulas are equivalent for CCGHS and CWBH upon replacing \(\Lambda\to\frac{\lambda^{2}}{2}\). ## 6 Carroll-Schwarzschild black hole, 4d perspective In this Section, we elaborate on the 4d perspective of the Carroll--Schwarzschild black hole and the associated wormhole picture. While the following discussion easily generalizes to higher dimensions (Section 5), we focus, for clarity, on \(3+1\) dimensions. The Schwarzschild line element is given by \[\mathrm{d}s^{2}=-\left(1-\frac{\mathsf{r}_{s}}{r}\right)c^{2}\,\mathrm{d}t^{2 }+\frac{\mathrm{d}r^{2}}{1-\frac{\mathsf{r}_{s}}{r}}+r^{2}\,\mathrm{d}\Omega_ {S^{2}}^{2} \tag{230}\] where \(\mathsf{r}_{s}=\frac{2mG}{c^{2}}\) and \(\mathrm{d}\Omega_{S^{2}}^{2}\) is the metric on the round 2-sphere. Figure 6: Target space picture of Carroll CGHS with \(X_{\mathrm{\tiny P}}=0\), restricted to the region \(X\geq 0\). Extremal points are red circles and exist only for \(M>0\). The lower half is described by the solutions (223)-(225). In order to take the magnetic Carroll limit, we again rescale Newton's constant as \(G_{M}=Gc^{-4}\) and we keep \(G_{M}\) and \(r_{\rm s}\) fixed as we expand around \(c=0\). The general \(c=0\) expansion of any Lorentzian metric takes the form [65] \[{\rm d}s^{2}=h_{MN}\,{\rm d}x^{M}\,{\rm d}x^{N}+c^{2}\left(-\tau_{M}\tau_{N}+ \Phi_{MN}\right){\rm d}x^{M}\,{\rm d}x^{N}+{\cal O}(c^{4}) \tag{231}\] where the Carroll metric \(h_{MN}\) has signature \((0,+,+,+)\) and \(v^{M}v^{N}\Phi_{MN}=0\). We define the Carroll vector field \(v^{M}\) to obey the usual conditions \(v^{M}\tau_{M}=-1\) and \(v^{M}h_{MN}=0\). (See Appendix A for details on global and local Carroll symmetries.) 
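The expansion just described can be carried out explicitly for (230). The sketch below (our own notation, assuming the \(\Phi_{MN}\) contribution vanishes for this configuration) isolates the \(c^{0}\) and \(c^{2}\) pieces and reproduces the magnetic limit displayed next.

```python
# Carroll (c -> 0) expansion of the Schwarzschild line element (230) at fixed G_M = G c^{-4}
# and fixed r_s; a sketch in our own notation, assuming Phi_MN = 0 for this configuration.
import sympy as sp

c, r, rs = sp.symbols('c r r_s', positive=True)

g_tt = -(1 - rs/r)*c**2          # coefficient of dt^2 in (230), with r_s held fixed
g_rr = 1/(1 - rs/r)              # coefficient of dr^2
g_sp = r**2                      # prefactor of dOmega^2

# O(c^0): the degenerate Carroll metric h has only spatial components
print(sp.limit(g_tt, c, 0), g_rr, g_sp)        # 0,  1/(1 - r_s/r),  r**2

# O(c^2): -tau_t**2 is the c^2 dt^2 coefficient, so tau = sqrt(1 - r_s/r) dt and,
# using v^M tau_M = -1 with v proportional to d/dt, v = -(1 - r_s/r)**(-1/2) d/dt
tau_t = sp.sqrt(-sp.diff(g_tt, c, 2)/2)
print(sp.simplify(-1/tau_t))                   # -1/sqrt(1 - r_s/r)
```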
One can also find \(v^{M}\) from the leading-order term in the \(c=0\) expansion of the inverse metric. The magnetic limit of the Schwarzschild black hole [65, 96] \[v=-\frac{1}{\sqrt{1-\frac{r_{\rm s}}{r}}}\,\partial_{t} h=\frac{{\rm d}\mathsf{r}^{2}}{1-\frac{r_{\rm s}}{r}}+\mathsf{r}^{2}\ {\rm d}\Omega_{S^{2}}^{2} \tag{232}\] is the lifted version of (219). This configuration is a solution of magnetic Carroll gravity [96, 97]. The extension with a non-vanishing cosmological constant was described in [98]. It is instructive to rewrite this configuration in terms of isotropic coordinates obtained by the (double cover) coordinate transformation \(\mathsf{r}\mapsto\rho=\rho(\mathsf{r})\) given by \[\mathsf{r}=\frac{\mathsf{r}_{\rm s}}{4}\left(\rho+\frac{1}{\rho}+2\right) \tag{233}\] resulting in the Carrollian wormhole geometry \[v=-\frac{\rho+1}{\rho-1}\,\partial_{t} h=\mathsf{r}_{\rm s}^{2}\left(\frac{(\rho+1)^{2}}{4\rho^{2}}\right)^{2} \left({\rm d}\rho^{2}+\rho^{2}\ {\rm d}\Omega_{S^{2}}^{2}\right)\,. \tag{234}\] The spatial Carroll metric \(h\) is \(\rho\to 1/\rho\) symmetric, and \(v\) changes sign under this map, corresponding to the well-known fact that the Killing time runs opposite in the universe on the other side of the wormhole (see Fig. 7). Let us scan for Carroll extremal surfaces, cf. Section 4. By definition, they satisfy \(e_{a}^{M}\partial_{M}X=0\), which is the condition that the transverse sphere area is stationary under linear deviations of the surface, regardless of the direction. The dilaton \(X\) is the surface area of the 2-spheres that foliate our spherically symmetric spacetime. For (232) it is given by \(X=\mathsf{r}^{2}\) [with \(\lambda=2\) in (34)]. This means that, using the conventions of Section 2.2.3, only the radial part of the inverse vielbein \(e_{1}^{M}\partial_{M}=:e^{M}\partial_{M}\) leads to a nontrivial condition \[e^{M}\partial_{M}X=\sqrt{1-\frac{\mathsf{r}_{s}}{\mathsf{r}}}\;\partial_{\mathsf{r}}X\stackrel{{!}}{{=}}0\,. \tag{235}\] The Carroll extremal surface is at \(\mathsf{r}=\mathsf{r}_{s}\) or, equivalently, at \(\rho=1\) (see Fig. 7). From the 4d perspective, it is natural to assign the dilaton the length dimension \([X]=2\), which implies \([\mathsf{r}]=1\) and \([k]=-2\) (cf. the discussion in Section 3.4). The relativistic entropy, temperature, and energy of the Schwarzschild black hole are given by \[S_{\mbox{\tiny rel}}=\frac{\pi c^{3}\,r_{\rm s}^{2}}{\hbar G} T_{\mbox{\tiny rel}}=\frac{\hbar c}{4\pi\,\mathsf{r}_{s}} E_{\mbox{\tiny rel}}=\frac{c^{4}}{2G}\,\mathsf{r}_{s} \tag{236}\] where we restored all conversion factors except for Boltzmann's constant, which we fix to one. Expanding in powers of \(c\) while keeping \(G_{M}=c^{-4}G\) and \(\mathsf{r}_{s}\) fixed leads to the leading-order Carrollian quantities \[S=\frac{\pi\,r_{\rm s}^{2}}{\hbar cG_{M}} T=\frac{\hbar c}{4\pi\,\mathsf{r}_{s}} E=\frac{1}{2G_{M}}\,\mathsf{r}_{s}\,. \tag{237}\] The results (237) coincide with the general results in 2d derived in Section 3, using the 2d-4d dictionary (note that our choices imply \(e^{Q}=\hbar c/(2\sqrt{X})\)) \[k=\frac{\pi}{\hbar cG_{M}}\qquad\qquad X=\mathfrak{r}^{2}\qquad\qquad X_{\mbox{\tiny{min}}}=\mathfrak{r}_{s}^{2}\qquad\qquad w(X)=\hbar c\,\sqrt{X}\,. \tag{238}\] These dimensions work as required since \(G_{M}\) has dimensions of metre/Joule, \(\hbar c\) metre times Joule, \(k\) has dimension of one over metre squared, and \(X\) is measured in square metres. So entropy is dimensionless, while temperature and energy are measured in Joule.
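These relations are straightforward to verify symbolically; the sketch below (our own symbol names) checks that (236) with \(G=G_{M}c^{4}\) reproduces (237), and that \(E=2TS\) and \(\delta E=T\,\delta S\) hold, as stated in (245) and (246) below.

```python
# Check that (236) with G = G_M c^4 reproduces the Carrollian quantities (237),
# and that E = 2 T S and the first law hold. A sketch with our own symbol names.
import sympy as sp

c, G_M, hbar, rs = sp.symbols('c G_M hbar r_s', positive=True)
G = G_M*c**4

S_rel = sp.pi*c**3*rs**2/(hbar*G)
T_rel = hbar*c/(4*sp.pi*rs)
E_rel = c**4*rs/(2*G)

S = sp.pi*rs**2/(hbar*c*G_M)
T = hbar*c/(4*sp.pi*rs)
E = rs/(2*G_M)

# Here the rescaling is in fact exact: no subleading terms in c survive
print(sp.simplify(S_rel - S), sp.simplify(T_rel - T), sp.simplify(E_rel - E))  # 0 0 0
print(sp.simplify(E - 2*T*S))                          # expect 0, cf. (245)
print(sp.simplify(sp.diff(E, rs) - T*sp.diff(S, rs)))  # expect 0, cf. (246)
```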
We show next that the expressions (237) can be computed using 4d geometric arguments that are similar to what one does in general relativity. As shown above, the Carroll entropy is proportional to the area of the wormhole's throat. At this locus we have \(h|_{\rho=1}=\mathfrak{r}_{\rm s}^{2}\,\mathrm{d}\Omega_{S^{2}}^{2}\). As discussed above and in Section 3, we ensured that \(S\) is dimensionless. We next turn to temperature. The 4d Carroll boost and rotation connections \(\omega^{a}\) and \(\omega^{ab}=-\omega^{ba}\) are solutions to the Cartan zero torsion-equations \[\mathrm{d}\tau+\omega^{a}\wedge e^{a}=\mathrm{d}e^{a}+\omega^{ab}\wedge e^{b}= 0\,. \tag{239}\] For the Carroll wormhole solution (232), we choose the vielbeins \[\tau=f(\mathfrak{r})\ \mathrm{d}t\qquad\qquad e^{1}=f^{-1}(\mathfrak{r})\ \mathrm{d}\mathfrak{r}\qquad\qquad e^{l}=\mathfrak{r}\,\bar{e}^{l} \tag{240}\] where \(l=2,3\) correspond to the 2-sphere tangent space directions, \(\bar{e}^{l}\) are round unit 2-sphere vielbeins, and we defined \(f(\mathfrak{r})=(1-\mathfrak{r}_{\rm s}/\mathfrak{r})^{1/2}\). The most general solution to these equations is \[\omega^{1}=f^{\prime}\tau+\rho\,e^{1}+\rho^{l}e^{l}\qquad\qquad\omega^{l}= \rho^{l}e^{1}+\rho^{lm}e^{m}\qquad\qquad\omega^{1l}=f\bar{e}^{l} \tag{241}\] and \(\omega^{lm}=\bar{\omega}^{lm}\) is the connection on the unit 2-sphere. We next consider the pullback of (239) onto the 2d submanifold obtained by fixing a point on the 2-sphere. In order to avoid clutter, we denote the pullbacks of the vielbeins by the same symbols. If the 2-sphere has coordinates \(\theta,\phi\) we are considering the manifold \(\theta=\theta_{0}\) and \(\phi=\phi_{0}\) where \(\theta_{0}\) and \(\phi_{0}\) are constants. The equations (239) on this submanifold become \[\mathrm{d}\tau+\omega^{1}\wedge e^{1}=\mathrm{d}e^{1}=0 \tag{242}\] with \(\tau=f(\mathfrak{r})\,\mathrm{d}t\), \(e^{1}=f^{-1}\,\mathrm{d}\mathfrak{r}\) and \(\omega^{1}=f^{\prime}\tau+\rho\,e^{1}\). To define the Carroll temperature \(T\), we first Wick rotate by the prescription \[t=it_{\mathrm{W}}\qquad\qquad\tau=i\tau_{\mathrm{W}}\qquad\qquad\omega^{1}=i \omega_{\mathrm{W}}^{1}\qquad\qquad v=i\upsilon_{\mathrm{W}} \tag{243}\] Figure 7: Sketch of spatial Carroll wormhole geometry (234). It corresponds to the spatial wormhole geometry of the maximally extended Schwarzschild black hole that cuts through the bifurcation sphere. In red, where \(\rho=1\) (\(\mathfrak{r}=\mathfrak{r}_{s}\)), we have encircled the Carroll extremal surface at the wormhole’s throat. where it is understood that the conversion factor has already been absorbed such that the length dimensions are \([\tau_{\rm w}]=[t_{\rm w}]=1\) and \([\omega_{\rm w}]=0\) (again, see Section 3.4). The Wick rotation does not change the signature of the geometry and the holonomy of the manifold (unlike in general relativity) but it allows us to consider (242) for a periodic \(t_{\rm W}\). Now, \(t_{\rm W}\) has the dimension of a length such that it can be compactified with period \(\hbar c/T\). In the Wick rotated setting, (242) becomes \({\rm d}\tau_{\rm W}+\omega_{\rm W}^{1}\wedge e^{1}=0\), where \(\tau_{\rm W}=f(\mathfrak{r})\,{\rm d}t_{\rm W}\) has dimensions of length and \(\omega_{\rm W}^{1}=f^{\prime}\tau_{\rm W}+\bar{\rho}e^{1}\) is dimensionless. 
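That \(\omega^{1}=f^{\prime}\tau+\rho\,e^{1}\) solves the pulled-back equation (242) for any function \(\rho\) can be confirmed by a short exterior-algebra computation; in the sketch below (our own conventions) 1-forms are stored as their \((\mathrm{d}t,\mathrm{d}\mathsf{r})\) components and only the \(\mathrm{d}t\wedge\mathrm{d}\mathsf{r}\) coefficient is tracked.

```python
# Check of (242): d tau + omega^1 wedge e^1 = 0 for tau = f dt, e^1 = f^{-1} dr,
# omega^1 = f' tau + rho e^1, with f = (1 - r_s/r)^{1/2}.  A sketch in our own conventions.
import sympy as sp

t, r, rs = sp.symbols('t r r_s', positive=True)
rho = sp.Function('rho')(t, r)          # undetermined function in omega^1
f = sp.sqrt(1 - rs/r)

tau = (f, 0)                            # tau  = f dt
e1  = (0, 1/f)                          # e^1  = f^{-1} dr
om1 = (sp.diff(f, r)*tau[0] + rho*e1[0],
       sp.diff(f, r)*tau[1] + rho*e1[1])   # omega^1 = f' tau + rho e^1

d_tau = sp.diff(tau[1], t) - sp.diff(tau[0], r)      # dt^dr coefficient of d tau
wedge = om1[0]*e1[1] - om1[1]*e1[0]                  # dt^dr coefficient of omega^1 ^ e^1
print(sp.simplify(d_tau + wedge))                    # expect 0 for any rho
```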
The 2d geometry described by \(t_{\rm W}\sim t_{\rm W}+\hbar c/T\) and \(\mathfrak{r}_{\rm s}<\mathfrak{r}<\mathfrak{r}_{\rm c}\), where \(\mathfrak{r}_{\rm c}\) is an arbitrary cutoff radius, is a cigar-like geometry that we will denote by \(\Sigma\). The boundary of \(\Sigma\) is given by the circle at \(\mathfrak{r}=\mathfrak{r}_{\rm c}\). The Carroll boost connection \(\omega_{\rm W}^{1}\) contains an undetermined function \(\bar{\rho}\). We assume that \(\bar{\rho}\) is globally well-defined on the cigar geometry so that it is periodic in \(t_{\rm W}\). Then, a direct calculation tells us \[\int_{\Sigma}{\rm d}\omega_{\rm W}^{1}-\int_{\partial\Sigma}\omega_{\rm W}^{1}=\frac{1}{2\mathfrak{r}_{\rm s}}\frac{\hbar c}{T} \tag{244}\] where \(\partial\Sigma\) is the circle at \(\mathfrak{r}=\mathfrak{r}_{\rm c}\). The bulk orientation is chosen such that \({\rm d}t_{\rm W}\wedge{\rm d}\mathfrak{r}=:{\rm d}t_{\rm W}\,{\rm d}\mathfrak{r}\) and the boundary orientation is induced similarly to Section 2.3.5. The left-hand side is \(2\pi\) times the Euler character \(\chi\) of \(\Sigma\) which is topologically a disk so that \(\chi=1\). This procedure recovers the result for temperature announced in (237). In [96] the energy \(E\) of the Carroll solution discussed here was computed. The result is the same as for the Schwarzschild black hole, namely (in magnetic Carroll units) \[E=\frac{\mathfrak{r}_{\rm s}}{2G_{M}}=2TS \tag{245}\] in agreement with (237). Of course, the first law \[\delta E=T\,\delta S \tag{246}\] is obeyed. All the thermodynamical relations above follow immediately from the general analysis of Section 3, using the 2d-4d dictionary (238). ## 7 Charged and rotating Carroll black holes All examples so far can be understood entirely in terms of 2d Carroll dilaton gravity or one of its dimensional uplifts to higher dimensions. In this Section, we go beyond this case by considering charged or rotating Carroll black holes (mostly from a dimensionally reduced, 2d, perspective), where a generalization to 2d Carroll-Maxwell dilaton gravity is required. ### General remarks on charged Carroll black holes in 2d In the PSM formulation, adding a Maxwell field amounts to adding another target space coordinate \(Y\) and adding to the Poisson tensor an extra row and column of zero entries. The potential in (3) can depend on this additional target space coordinate as well, and the Lagrange 2-form acquires an additional term \(Y\,\,{\rm d}A\), where \(A=A_{\mu}\,\,{\rm d}x^{\mu}\) is the Maxwell gauge field 1-form. \[{\cal L}=Y\,\,{\rm d}A+X\,\,{\rm d}\omega+X_{\rm H}\,\big{(}\,{\rm d}\tau+\omega\wedge e\big{)}+X_{\rm P}\,\,{\rm d}e+{\cal V}(X,\,X_{\rm H},\,Y)\,\tau\wedge e \tag{247}\] The additional \(U(1)\) gauge symmetry generated by some transformation parameter \(\Lambda\) acts trivially on all fields except on the Maxwell gauge field, \(\delta_{\Lambda}A={\rm d}\Lambda\). The equations of motion (8) are essentially unchanged, with the replacement \({\cal V}(X,\,X_{\rm H})\to{\cal V}(X,\,X_{\rm H},\,Y)\). There are two additional equations of motion from varying with respect to \(Y\) and \(A\): \[\delta Y: \mathrm{d}A =-\frac{\partial{\cal V}(X,\,X_{\rm H},\,Y)}{\partial Y}\,\tau\wedge e \tag{248}\] \[\delta A: \mathrm{d}Y =0 \tag{249}\] The quantity \(Y\) on-shell is a second conserved Casimir, \[Y=q=\mathrm{const}. \tag{250}\] and physically corresponds to a conserved \(U(1)\) charge (though in some applications, it might have a different interpretation, e.g., as angular momentum in higher dimensions).
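Consistency of this extension can be checked at the level of the Jacobi identity: appending a row and column of zeros for \(Y\), while letting the potential depend on \(Y\), does not spoil it. The sketch below assumes the Carrollian entries \(P^{XX_{\text{\tiny P}}}=X_{\text{\tiny H}}\) and \(P^{X_{\text{\tiny H}}X_{\text{\tiny P}}}=\mathcal{V}\), cf. (303)-(305) in Appendix B, and uses our own function names.

```python
# Jacobi identity of the Poisson tensor extended by an extra zero row/column for Y,
# with the potential V allowed to depend on Y. Entries taken from (303)-(305).
import sympy as sp

X, XH, XP, Y = sp.symbols('X X_H X_P Y')
V = sp.Function('V')(X, XH, Y)
coords = [X, XH, XP, Y]

P = sp.zeros(4, 4)
P[0, 2], P[2, 0] = XH, -XH          # P^{X X_P} = X_H
P[1, 2], P[2, 1] = V, -V            # P^{X_H X_P} = V(X, X_H, Y)

def jacobi(P, coords, i, j, k):
    return sum(P[i, l]*sp.diff(P[j, k], coords[l])
               + P[j, l]*sp.diff(P[k, i], coords[l])
               + P[k, l]*sp.diff(P[i, j], coords[l]) for l in range(len(coords)))

print([sp.simplify(jacobi(P, coords, i, j, k))
       for i in range(4) for j in range(i+1, 4) for k in range(j+1, 4)])  # expect all 0
```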
If the Lagrange 2-form (247) emerges from some Carrollian limit, it could happen that the first and last terms are multiplied by some powers of the speed of light \(c\). In that case, we can always first rescale \(Y\) with an appropriate factor of \(c\) to eliminate \(c\) from the last term and then rescale \(A\) with an appropriate factor of \(c\) to eliminate \(c\) from the first term. Thus, without loss of generality, we assume there are no explicit factors of \(c\) in the Lagrange 2-form (247). Solving the equations of motion can be done exactly as in Section 2. The solutions for the spatial metric and the temporal vector field will, in general, depend not only on the mass parameter \(M\) but also on the \(U(1)\) charge \(q\). As a consequence of this dependence, there can be BPS-like bounds and extremality conditions. (Since we already use the word "extremal" to denote Carroll extremal surfaces, we call the confluent case "degenerate" instead.) Relatedly, new constant dilaton vacua can emerge and typically have an interpretation as "near horizon extremal geometries". (To clarify also here the vocabulary, we refer to such geometries as "near-Carroll-extremal-surface degenerate geometries" in a Carrollian context.) Other than these marginal changes, our general discussion of Section 2 applies. A prototypical form of the potential is given by \[{\cal V}(X,\,X_{\rm H},\,Y)=\hat{\cal V}(X,\,X_{\rm H})-\frac{Y^{2}}{4F(X)}\,. \tag{251}\] In this case, the charge \(q\) on-shell is related to the electric field, \(E=*\,\mathrm{d}A\), and the dilaton, \[q=Y=2F(X)\,*\mathrm{d}A=2F(X)\,E \tag{252}\] where we used the Carroll-Hodge-\(*\) relation \(*(\tau\wedge e):=1\). For a definition of this operator, we refer to [99]. Charge conservation implies \[\mathrm{d}\big{(}F(X)\,*\mathrm{d}A\big{)}=0\,. \tag{253}\] Integrating out the scalar field \(Y\) by its own equation of motion yields a (non-minimally coupled) Maxwell term in the Lagrange 2-form (with the usual expression for the field strength, \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\)). \[{\cal L}=X\,\,\mathrm{d}\omega+X_{\rm H}\,\big{(}\,\mathrm{d}\tau+\omega\wedge e \big{)}+X_{\rm P}\,\,\mathrm{d}e+\hat{\cal V}(X,\,X_{\rm H})\,\tau\wedge e+F(X )\underbrace{(*\,\mathrm{d}A)\,\mathrm{d}A}_{\sim F_{\mu\nu}\widehat{F}^{\mu \nu}\,\mathrm{vol}} \tag{254}\] Consistently, varying (254) with respect to the Maxwell connection \(A\) yields the equation of motion (253). For models that come from a dimensional reduction of higher-dimensional Einstein-Maxwell theories the coupling function \(F(X)\) typically is linear in the dilaton since the Maxwell term gets the same volume factor as the curvature term \(X\)\(\mathrm{d}\omega\). Finally, we note that often it is sufficient to consider the charge \(q\) as a parameter in the action rather than as a constant of motion. In that case, one can use the potential (251) with \(Y\) replaced by its on-shell value \(q\). ### Carroll-Reissner-Nordstrom There are different paths to obtaining CRN black holes. We found it simplest to first reduce spherically Schwarzschild to 2d, then take the Carroll limit, and finally, add a Maxwell field. This means we take the Schwarzschild results for the potential \(\dot{\mathcal{V}}\) and set \(F(X)=X\), \[\mathcal{V}_{\text{\tiny CRN}}(X,\,X_{\text{\tiny H}},\,Y)=\frac{\lambda^{2}}{4 }+\frac{X_{\text{\tiny H}}^{2}}{4X}-\frac{Y^{2}}{4X}\,. \tag{255}\] Without loss of generality, we set \(\lambda=2\). 
In the coordinates introduced in Section 2, the CRN solution is given by \[\text{d}s^{2}=\text{d}r^{2}=\frac{\text{d}X^{2}}{X_{\text{\tiny H}}^{2}}\qquad \qquad v=\frac{2\sqrt{X}}{X_{\text{\tiny H}}}\,\partial_{t} \tag{256}\] and \[X_{\text{\tiny H}}=\pm\sqrt{4X+4\,q_{e}^{2}-8m\sqrt{X}}\,, \tag{257}\] where \(q_{e}=\frac{q}{2}\) is the electric charge and \(m=\frac{M}{2}\) is the mass. We again chose the integration constant in (142) such that \(e^{-Q}=2\sqrt{X}\) to achieve an asymptotic normalization of the vector field, \(\lim_{X\to\infty}v=-\partial_{t}\). The solution for the gauge field follows from (248), \[\text{d}A=\frac{q_{e}}{2X}\,\tau\wedge e \tag{258}\] which in Coulomb gauge leads to the usual Coulomb-potential \[A=\frac{q_{e}}{\sqrt{X}}\,\,\text{d}t\,. \tag{259}\] The Carroll limit on the gravity side is a magnetic limit, while the Carroll limit on the Maxwell side is an electric limit, so this is an example of electric Carroll Maxwell theory coupled to magnetic Carroll gravity. It would be pleasing to see that first taking such a limit in higher dimensions and then performing a spherical reduction leads to the same 2d solutions. In [74] the combined magnetic gravity and magnetic Maxwell limits were studied for a 4d RN black hole that carries both electric and magnetic charge. It would be interesting to explore what kind of 2d model this corresponds to after spherical reduction. Carroll extremal surfaces in the CRN geometry arise at two loci, \[\sqrt{X}_{\pm}=m\pm\sqrt{m^{2}-q_{e}^{2}} \tag{260}\] provided the mass is positive and the charge obeys the BPS-bound \[|q_{e}|\leq m\,. \tag{261}\] When saturated, \(q_{e}^{2}=m^{2}\), the two loci coalesce to a single degenerate Carroll extremal surface with vanishing Carroll temperature, similar to extremal Reissner-Nordstrom black holes. While the CS model does not have any constant dilaton vacuum solution, the CRN model has such a solution for the value of the dilaton \[X=q_{e}^{2}\,. \tag{262}\] Since this is the same value the dilaton takes at the degenerate extremal Carroll surface in the confluent case, one can interpret this constant dilaton vacuum analogously to the Robinson-Bertotti solution, i.e., as near-Carroll-extremal-surface degenerate geometry. Generalizations of CRN to arbitrary dimension is straightforward; one just has to replace the first two terms in the potential (255) by the corresponding CST potentials corresponding to the desired dimension. Generalizations to completely different types of charged Carroll black holes are possible as well and can be done on a case-by-case basis. Examples that come to mind are 2d type 0A string theory with equal number \(q_{e}\) of electric and magnetic D0 branes [100, 101] and the dimensionally reduced Chern-Simons term [102, 103]. In the next Subsection, we address a different pertinent example, namely Carroll BTZ. ### Carroll BTZ It is not obvious how to take a Carrollian limit of the BTZ black hole [104, 105]. We take the following route. First, we Kaluza-Klein reduce along the azimuthal angle \(\varphi\) to obtain the 2d Achucarro-Ortiz model [106] and only then we take the Carroll limit. 
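Before carrying out these two steps, the CRN formulas above admit a quick symbolic check; the sketch below (our notation) recovers the extremal loci (260) from (257) and the constant dilaton value (262) from the vanishing of the potential (255) at \(X_{\text{\tiny H}}=0\), the latter being our assumed criterion for constant dilaton vacua.

```python
# Check of (260) and (262) for Carroll-Reissner-Nordstrom; a sketch in our own notation.
import sympy as sp

u, m, qe, X, XH = sp.symbols('u m q_e X X_H', positive=True)   # u = sqrt(X)

# Carroll extremal surfaces: X_H = 0 in (257), i.e. 4X + 4 q_e^2 - 8 m sqrt(X) = 0
print(sp.solve(4*u**2 + 4*qe**2 - 8*m*u, u))   # expect sqrt(X) = m -/+ sqrt(m**2 - q_e**2), cf. (260)

# Constant dilaton vacuum: assume it sits where V_CRN(X, 0, q) = 0 with lambda = 2, q = 2 q_e
V = 1 + XH**2/(4*X) - (2*qe)**2/(4*X)          # eq. (255)
print(sp.solve(V.subs(XH, 0), X))              # expect [q_e**2], cf. (262)
```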
The first step leads to a charged 2d dilaton gravity model, where the Maxwell field is the one appearing in the Kaluza-Klein ansatz \((\alpha,\beta,\gamma\in\{0,1\})\) \[\mathrm{d}s^{2}=g_{\alpha\beta}(x^{\gamma})\ \mathrm{d}x^{\alpha}\,\mathrm{d}x^{\beta}+X^{2}(x^{\gamma})\,\big{(}\,\mathrm{d}\varphi+A_{\alpha}(x^{\gamma})\ \mathrm{d}x^{\alpha}\big{)}^{2} \tag{263}\] and the associated \(U(1)\) charge is the BTZ angular momentum \(J\). The dimensionally reduced 2d model has the Achucarro-Ortiz potential (see Section 6.3 in [76]) \[V_{\mathrm{A}\mathrm{O}}(X,\,Y)=\frac{X}{\ell^{2}}-\frac{Y^{2}}{X^{3}} \tag{264}\] where \(\ell\) is the 3d AdS radius, which we set to one, \(\ell=1\). On-shell \(Y=J\). The second step consists of importing the potential (264) into generic 2d Carroll dilaton gravity (3). This yields Carroll black hole solutions we refer to as CBTZ. They are given by \[\mathrm{d}s^{2}=\mathrm{d}r^{2}=\frac{\mathrm{d}X^{2}}{X_{\mathrm{H}}^{2}}\qquad\qquad v=\frac{1}{X_{\mathrm{H}}}\,\partial_{t} \tag{265}\] with \[X_{\mathrm{H}}=\pm\sqrt{X^{2}+\frac{J^{2}}{X^{2}}-2M}\,. \tag{266}\] The solution for the gauge field follows from (248), \[\mathrm{d}A=\frac{2J}{X^{3}}\,\tau\wedge e \tag{267}\] which, in Coulomb gauge, leads to \[A=\frac{J}{X^{2}}\ \mathrm{d}t\,. \tag{268}\] As expected, there are two loci with Carroll extremal surfaces, \[X_{\pm}^{2}=M\pm\sqrt{M^{2}-J^{2}} \tag{269}\] provided the mass is positive and the angular momentum obeys the BPS bound \[|J|\leq M\,. \tag{270}\] When saturated, \(J^{2}=M^{2}\), the Carroll extremal surface degenerates and has vanishing Carroll temperature, similar to extremal BTZ. In summary, the Carroll BTZ black hole (265)-(268) is the Carroll limit of the Achucarro-Ortiz model, which in turn is a Kaluza-Klein reduction of the BTZ black hole. The Carroll BTZ black hole is a positive mass solution of 2d Carroll-Maxwell dilaton gravity (247) with the potential function (264), subject to the BPS bound (270). ## 8 Summary and Outlook We have focused on a wide class of Carroll geometries which, despite the absence of a lightcone structure, possess black hole-like behaviour. We identified Carroll black holes as configurations exhibiting a Carroll extremal surface together with thermal properties such as finite entropy. The former is the analogue of a Lorentzian extremal surface. A crucial ingredient was incorporating the notion of Carroll thermal manifolds, introduced by relaxing the standard definition of a Carroll manifold so as to allow a vanishing "clock one-form" on isolated surfaces. Our strategy consisted of thoroughly analysing various formulations of 2d magnetic Carroll dilaton gravity models, which are generic enough to accommodate the dimensional reduction of spherically symmetric configurations of higher-dimensional magnetic Carroll gravity. We have also shown that the processes of spherical reduction and taking the magnetic Carroll limit commute. We discussed examples in the context of magnetic Carroll gravity in diverse dimensions, including the Carroll versions of Schwarzschild, Reissner-Nordstrom, BTZ, as well as black hole solutions of generic Carroll dilaton gravity, including Carroll JT and Carroll Witten black holes in two spacetime dimensions. Some examples of rotating Carroll black holes were also briefly analyzed.
There are various intriguing points for further exploration: **Mathematics of Carroll black holes**: We have uncovered a couple of unusual features that could benefit from further scrutiny, specifically, the issues addressed in Subsections 2.3.3-2.3.5. Physical intuition drove us to relax the original definition of Carrollian manifolds by allowing Carrollian structure singularities, which is necessary to accommodate Carroll extremal surfaces, key protagonists in our definition of Carroll black holes. Therefore, it seems worthwhile to relax the standard mathematical notion of Carrollian manifolds and to further investigate the role of loci where the Carrollian vector field is singular, but the geometry is regular otherwise. Besides Carroll extremal surfaces, this may also include loci where the Carroll vector field tends to zero, which happens, for instance, in the limit of approaching spatial infinity from null infinity in asymptotically flat spacetimes. Having a precise (and physically relevant) definition of Carrollian singularities could open the door to further developments, such as Carrollian singularity theorems. Additionally, it is possible that such an endeavour could provide sharper or alternative definitions of Carroll extremal surfaces and Carroll black holes. **Rotating Carroll black holes**: In Section 7.3, we have presented a first example of a rotating Carroll black hole, namely Carroll BTZ. We used an intrinsic 2d approach where rotation was turned into a \(U(1)\) charge after a Kaluza-Klein reduction and before taking the Carroll limit. It is natural to inquire about higher-dimensional descriptions of rotating Carroll black holes, (non-)commutativity of dimensional reduction and Carroll limit, and further questions along the lines of Fig. 1. Although (magnetic) Carroll gravity generically admits configurations with non-vanishing angular momentum [96, 98], finding higher-dimensional rotating Carroll black holes is an open task. The main obstruction comes from the Hamiltonian constraint in magnetic Carroll gravity, which requires a spatial metric with a vanishing Ricci scalar (or constant Ricci scalar, in the presence of a cosmological constant). For example, the Carrollian limit of the Kerr solution in Boyer-Lindquist or Kerr-Schild coordinates exhibits a non-vanishing spatial Ricci scalar, thus failing to satisfy the Hamiltonian constraint. It could be advantageous to seek an appropriate coordinate system that addresses this issue. **Supersymmetric Carroll black holes**: A seemingly straightforward generalization of our work is to define and investigate Carroll supergravity (see, e.g., [107]) and supersymmetric Carroll black holes, where possibly BPS-like bounds found in Section 7 play a decisive role. Technically, we expect the simplest models to be of supersymmetric BF-type, emerging as Carrollian contractions of Lorentzian models like the super-JT model, see e.g. [108] and Refs. therein. **Galilean black holes**: There is a first order action describing 2d Galilei dilaton gravity \[\mathcal{L}=X\,\mathrm{d}\omega+X_{H}\,\mathrm{d}\tau+X_{P}\big{(}\,\mathrm{d }e-\omega\wedge\tau\big{)}+\mathcal{V}_{\mbox{\tiny{Gal}}}(X,X_{P})\tau\wedge e\, \tag{271}\] which was previously considered in [43]. Given a potential analogous to the Carroll case considered in this work \[\mathcal{V}_{\mbox{\tiny{Gal}}}=-\frac{U(X)}{2}X_{\mbox{\tiny{P}}}^{2}+V(X) \tag{272}\] the model is in principle solvable along the same lines. 
Moreover, the PSM picture allows us to identify Galilean extremal surfaces as loci where \(X_{\mbox{\tiny{P}}}=0\) (\(X_{\mbox{\tiny{P}}}\) and \(X_{\mbox{\tiny{H}}}\) switch roles in this case). However, these solutions cannot be assigned thermodynamic properties in the same way as in the Carroll case. One way to see this is by taking the simple example \(\mathcal{V}_{\mbox{\tiny{Gal}}}=X\) and partially fixing the diffeomorphism freedom such that the clock one-form is just \(\tau=\mathrm{d}t\). The equations of motion then imply that \(\partial_{r}X=0=\partial_{r}X_{\mbox{\tiny{P}}}\), meaning that if there is a nontrivial configuration with \(X_{\mbox{\tiny{P}}}\neq 0\), the Galilei extremal surface can only lie in the future or in the past instead of at a certain radius. This makes it impossible to compactify time such that the 2d spacetime is topologically a disk and has the extremal surface in its center at the same time. This is not to say that no notion of Galilean black holes exists; it is just not as straightforward as "switching time and space". It could be interesting to see whether there is an alternative sensible way to define these objects. **Fracton gravity**: Following the relation between Carroll symmetry and particles with conserved charge and dipole moment ("fractons") [10, 11, 12, 13], we write down fracton BF gravity. The symmetries are spanned by \(\langle H,P,Q,D\rangle\) (energy, momentum, charge, dipole moment) with the only nontrivial commutator \[[D,P]=Q\,. \tag{273}\] The action of fracton BF gravity is given by \(I[X_{I},A^{I}]=\frac{k}{2\pi}\int_{\mathcal{M}}\mathcal{L}\) where \[\mathcal{L}=X_{H}\,\mathrm{d}A^{H}+X_{P}\,\mathrm{d}A^{P}+X_{Q}(\mathrm{d}A^{Q}+A^{D}\wedge A^{P})+X_{D}\,\mathrm{d}A^{D}\,. \tag{274}\] The Lagrange-2-form (274) corresponds to (3) upon identifying \((X_{H},\,X_{P},\,X_{Q},\,X_{D})_{\text{frac}}\sim(-,\,X_{\text{\tiny P}},\,X_{\text{\tiny H}},\,X)_{\text{car}}\) and \((A^{H},A^{P},A^{Q},A^{D})_{\text{frac}}\sim(-,e,\tau,\omega)_{\text{car}}\), and adding the first term \(X_{H}\,\mathrm{d}A^{H}\) that has no Carroll counterpart (in particular, \(A^{H}\) is part of the geometry and not a Maxwell gauge field). While the potential in (274) is trivial, effective field theory arguments suggest it is natural to extend it to fracton dilaton gravity by adding \(\mathcal{V}(X_{D},\,X_{Q})\,A^{Q}\wedge A^{P}\), since such a term is allowed by consistent deformations of the BF theory (274), see Section 7.2 of [43]. If we insist on a metric BF theory, which would be closer to Carroll/dipole Chern-Simons gravity [16, 17], we can add two nontrivial central extensions, in which case the dipole algebra admits an invariant metric [43]. This can also be generalized [43] to nontrivial cosmological constant [12, 13] or to more general gravitational models. It could be illuminating to investigate these models and their boundary conditions/actions in more detail. **Quantum Carroll extremal surfaces**: In the Lorentzian case, the concept of extremal surfaces was generalized to quantum extremal surfaces by Engelhardt and Wall [54], which, for instance, feature prominently in the island proposal [109, 110, 111, 112, 113]. Instead of extremizing the classical area functional, a functional that consists of the sum of area and von Neumann entropy (associated with the matter fields outside the black hole) is extremized.
In quantum theories of Carroll gravity it is therefore plausible to similarly extend our notion of Carroll extremal surfaces (see Sections 4.2 and 4.3) to quantum Carroll extremal surfaces. It could be rewarding to verify whether such quantum Carroll extremal surfaces obey similar properties and theorems as in the Lorentzian case [54]. **Intrinsically higher-dimensional Gauss-Bonnet**: Our main definition of Carroll temperature in Section 3.2 employs the 2d Carroll Gauss-Bonnet formula. It could be beneficial to obtain a similar result using higher-dimensional techniques, e.g., using higher-dimensional Carroll Gauss-Bonnet terms. **Cosmology**: Finally, it might be gratifying to check whether the tools we have developed in this work can be used to understand putative cosmological horizons of Carrollian cosmological geometries [19, 114, 115, 116, 74]. ## Acknowledgements We are grateful for discussions with the participants of the 1st Carroll workshop at TU Wien in February 2022 where the definition of Carroll extremal surfaces was presented for the first time. Moreover, we are grateful for discussions with participants of the 2nd Carroll workshop at UMons in September 2022, and of the workshop "Beyond Lorentzian Geometry II" at ICMS in February 2023, where some additional results from this paper were presented. In particular, we thank Jan de Boer, Laura Donnay, Adrien Fiorucci, Niels Obers, Romain Ruzziconi, Jakob Salzer, and Stefan Vandoren. DG additionally thanks Arjun Bagchi for a long-time collaboration on Carrollian physics and Dima Vassilevich for an even longer-time collaboration on 2d black holes. Funding informationFE and DG were supported by the Austrian Science Fund (FWF), projects P 32581, P 33789, P 36619, and W 1252. The final part of this research was conducted while DG was visiting the Okinawa Institute of Science and Technology (OIST) through the Theoretical Sciences Visiting Program (TSVP). JH was supported by the Royal Society University Research Fellowship Renewal "Non-Lorentzian String Theory" (grant number URF\(\backslash\)R\(\backslash\)221038). SP, and in part JH, were supported by the Leverhulme Trust Research Project Grant (RPG-2019-218) "What is Non-Relativistic Quantum Gravity and is it Holographic?". This research is partially supported by Fondecyt grants No 1211226, 1220910 (AP, RT), and 1230853 (AP). RT thanks the support of Vicerrectoria de Investigacion y Doctorados de la Universidad San Sebastian, Chile - fund "USS-FIN-23-PASI-10". ## Appendix A Carroll symmetries ### Global Carroll symmetries This Appendix provides a self-contained review of Carroll symmetries in any spacetime dimension \(D=1+d\). For \(d=1\), all indices can be dropped in all formulas below (rotations do not exist in this case). Carroll symmetries emerge as the \(c\to 0\) limit of Poincare symmetries (see [117] for some historical context). Temporal translations \(H=\partial_{t}\), spatial translations \(P_{i}=\partial_{i}\), and rotations \(J_{ij}=x_{i}\partial_{j}-x_{j}\partial_{i}\) are unaffected by this limit. Thus, the only generators that change are the boosts \[B_{i}=c^{2}\,t\,\partial_{i}-x_{i}\,\partial_{t}\qquad\stackrel{{ c\to 0}}{{\to}}\qquad B_{i}=-x_{i}\,\partial_{t}\,. 
\tag{275}\] Therefore, the only commutators that change as compared to the ones in the Poincare algebra involve Carroll boosts \(B_{i}\): \[[B_{i},\,H]=0\qquad[B_{i},\,B_{j}]=0\qquad[B_{i},\,P_{j}]=\delta_{ij}\,H\qquad[ B_{k},\,J_{ij}]=\delta_{ki}B_{j}-\delta_{kj}B_{i} \tag{276}\] The first commutator reveals that the Hamiltonian \(H\) is a central element of the Carroll algebra, in stark contrast to the Poincare algebra, where the Hamiltonian does not commute with Lorentzian boosts. The second commutator implies there is no Carrollian analogue of Thomas precession -- two Carroll boosts always commute, regardless of the directions into which the boosts are taken. The third commutator together with the fact that \(H\) commutes with the remaining Carroll generators show that \(H\) can be thought of as a nontrivial central extension. The Carroll energy/mass is, therefore, an important invariant [1] quite different from the Poincare energy. The last commutator merely shows that boosts transform as spatial co-vectors. Finite boosts (generated by some spatial co-vector \(b_{i}\)) leave invariant space but transform time. \[t^{\prime}=t-b_{i}x^{i}\qquad\qquad x^{i\,\prime}=x^{i} \tag{277}\] Thus, in Carrollian spacetimes, there is an absolute notion of space, which is the counterpart of the non-relativistic statement that in Galilean spacetimes, there is an absolute notion of time. In the \(c\to 0\) limit the Minkowski metric degenerates and acquires signature \((0,+,\ldots,+)\), i.e., it becomes a purely spatial metric \(h_{\mu\nu}\) with trace \(d\). Its inverse (multiplied by \(-c^{2}\)) degenerates into a bi-vector \(v^{\mu}v^{\nu}\) that is timelike and projects to zero with respect to the metric, \(h_{\mu\nu}v^{\nu}=0\). In cartesian coordinates, the Carrollian metric and vector are given by \[\mathrm{d}s^{2}=h_{\mu\nu}\,\,\mathrm{d}x^{\mu}\,\mathrm{d}x^{\nu}=\delta_{ij }\,\,\mathrm{d}x^{i}\,\mathrm{d}x^{j}\qquad\qquad v=v^{\mu}\,\partial_{\mu}= \partial_{t}\,. \tag{278}\] The Carroll vector fields \(\xi\in\{H,P_{i},B_{i},J_{ij}\}\) preserve this Carrollian structure, \[\mathcal{L}_{\xi}h_{\mu\nu}=0=\mathcal{L}_{\xi}v^{\mu}\,. \tag{279}\] Additionally, all "supertranslations" \(\xi=f(x^{i})\,\partial_{t}\) preserve this Carrollian structure. Thus, as opposed to Minkowski spacetimes, there are infinitely many Killing vectors. If we insist on the preservation of an invariant connection, we are led back to the original finite-dimensional Carroll symmetries [118]. A quick way to see this is to look at the infinitesimal action of a diffeomorphism generated by \(\xi^{\alpha}(x)\) on a generic connection, \[\delta_{\xi}\Gamma^{\lambda}{}_{\mu\nu}=\mathcal{L}_{\xi}\Gamma^{\lambda}{}_ {\mu\nu}+\partial_{\mu}\partial_{\nu}\xi^{\lambda}. \tag{280}\] The connection can therefore only be invariant if the inhomogeneous term vanishes which restricts the diffeomorphism parameter to be linear in the coordinates. In this case the supertranslations above reduce to \(f(x^{i})=b_{i}x^{i}+c\), reproducing (277) together with translations. Carroll gravity can be obtained when the Carroll algebra is gauged. For details on how to gauge the Carroll algebra, see [41, 119] and the next Section. ### Local Carroll symmetries Here we present essential aspects of local Carroll symmetries and how to relate first- and second-order formulations specialized to 1+1 dimensions. In some cases, we use the 2d Carroll dilaton gravity equations of motion for the scalar fields from the main text, see eqs. (8). 
Whenever we do so, we indicate this by the weakly-equal sign \(\approx\). Defining the full covariant derivative \(\mathcal{D}_{\mu}\) \[\mathcal{D}_{\mu}\tau_{\nu}=\partial_{\mu}\tau_{\nu}-\boldsymbol{\Gamma}^{ \lambda}{}_{\mu\nu}\,\tau_{\lambda}+\omega_{\mu}e_{\nu}\qquad\qquad\mathcal{D }_{\mu}e_{\nu}=\partial_{\mu}e_{\nu}-\boldsymbol{\Gamma}^{\lambda}{}_{\mu\nu} \,e_{\lambda} \tag{281}\] and imposing the Carroll vielbein postulates \[\mathcal{D}_{\mu}\tau_{\nu}=0=\mathcal{D}_{\mu}e_{\nu} \tag{282}\] yields on-shell vanishing torsion of the affine connection \[\boldsymbol{\Gamma}^{\rho}{}_{[\mu\nu]}=0 \tag{283}\] provided \(\partial_{\mu}\mathcal{V}=0\). If this is not the case, then only the spatial component of (283) vanishes (\(\rho=1\)), while the temporal component (\(\rho=0\)) is determined by \(\partial_{\mu}\mathcal{V}\neq 0\). If \(\omega\) is replaced by \(\hat{\omega}\) the connection \(\boldsymbol{\Gamma}^{\lambda}{}_{\mu\nu}\) reduces to \(\Gamma^{\lambda}{}_{\mu\nu}\) (see Section 2.1.3). The defining properties of the inverse vielbein are \[v^{\mu}\tau_{\mu}=-1\qquad\qquad v^{\mu}e_{\mu}=0\qquad\qquad e^{\mu}\tau_{ \mu}=0\qquad\qquad e^{\mu}e_{\mu}=1\,. \tag{284}\] Under boosts they transform (off-shell) as \[\delta_{\lambda}v^{\mu}=0\qquad\qquad\delta_{\lambda}e^{\mu}=-v^{\mu}\,\lambda \tag{285}\] and under diffeos they transform (on-shell) with the usual Lie-derivative, \[\delta_{\xi}v^{\mu}\approx\xi^{\nu}\partial_{\nu}v^{\mu}-v^{\mu}\partial_{\nu }\xi^{\nu}\qquad\qquad\delta_{\xi}e^{\mu}\approx\xi^{\nu}\partial_{\nu}e^{ \mu}-e^{\mu}\partial_{\nu}\xi^{\nu}\,. \tag{286}\] They are compatible with the inverse vielbein postulates. \[\mathcal{D}_{\mu}v^{\nu}=\partial_{\mu}v^{\nu}+\boldsymbol{\Gamma}^{\nu}{}_{ \mu\lambda}\,v^{\lambda}=0\qquad\qquad\mathcal{D}_{\mu}e^{\nu}=\partial_{\mu }e^{\nu}+\boldsymbol{\Gamma}^{\nu}{}_{\mu\lambda}e^{\lambda}+v^{\nu}\omega_{ \mu}=0 \tag{287}\] Defining the usual covariant derivative \(\boldsymbol{\nabla}_{\mu}\) in terms of the affine connection \(\boldsymbol{\Gamma}^{\lambda}{}_{\mu\nu}\) yields the compatibility conditions \[\boldsymbol{\nabla}_{\mu}v^{\nu}=0\qquad\qquad\boldsymbol{\nabla}_{\mu}g_{ \nu\lambda}=0 \tag{288}\] where we defined the spatial metric as bilinear in the spatial vielbein \[g_{\mu\nu}=e_{\mu}e_{\nu}\,. \tag{289}\] A metric with upper indices is similarly defined. \[g^{\mu\nu}=e^{\mu}e^{\nu} \tag{290}\] Since the metric with lower indices is Carroll boost invariant, \[\delta_{\lambda}g_{\mu\nu}=0 \tag{291}\] together with the Carroll boost invariant vector field \(v^{\mu}\) it defines a meaningful (i.e., boost invariant) notion of Carrollian geometry. In the context of 2d Carroll dilaton gravity, one should also consider the dilaton as part of the geometry, which is possible since the dilaton is also Carroll boost invariant. Defining the usual Riemann tensor \[\big{[}\boldsymbol{\nabla}_{\mu},\,\boldsymbol{\nabla}_{\nu}\big{]}\,k^{ \lambda}=\mathbf{R}^{\lambda}{}_{\rho\mu\nu}\,k^{\rho}-2\boldsymbol{\Gamma}^{ \rho}{}_{[\mu\nu]}\,\boldsymbol{\nabla}_{\rho}k^{\lambda} \tag{292}\] relates it through the vielbein postulates to the Carrollian first-order variables. \[\mathbf{R}^{\lambda}{}_{\rho\mu\nu}=-v^{\lambda}e_{\rho}\big{(}\partial_{\mu} \omega_{\nu}-\partial_{\nu}\omega_{\mu}\big{)} \tag{293}\] Note that there is no bilinear term in the connection since we are in 2d. 
Similarly to the behaviour of the connection, using \(\hat{\omega}\) instead of \(\omega\) in this expression reduces \(\mathbf{R}^{\lambda}{}_{\rho\mu\nu}\) to the Carrollian curvature tensor \(R^{\lambda}{}_{\rho\mu\nu}\) as used in the main part. The only non-vanishing components are \[\mathbf{R}^{t}{}_{rtr}=-\mathbf{R}^{t}{}_{rrt}=-\partial_{X}\mathcal{V}(X,\,X_{\text{\tiny H}}) \tag{294}\] According to the general analysis of [41], the Carrollian affine connection in our case is given by \[\boldsymbol{\Gamma}^{\lambda}{}_{\mu\nu}=-v^{\lambda}\big{(}\partial_{\mu}\tau_{\nu}+\omega_{\mu}e_{\nu}\big{)}+e^{\lambda}\partial_{\mu}e_{\nu}\,. \tag{295}\] This result is compatible with torsion \[T^{\lambda}{}_{\mu\nu}=\boldsymbol{\Gamma}^{\lambda}{}_{[\mu\nu]}\qquad\Rightarrow\qquad T^{\lambda}{}_{\mu\nu}\,e_{\lambda}\approx 0\qquad T^{\lambda}{}_{\mu\nu}\,\tau_{\lambda}\approx-\partial_{\text{\tiny H}}\mathcal{V}(X,\,X_{\text{\tiny H}})\,\tau_{[\mu}e_{\nu]} \tag{296}\] which vanishes on-shell if and only if the potential is \(X_{\text{\tiny H}}\)-independent, \(\partial_{\text{\tiny H}}\mathcal{V}(X,\,X_{\text{\tiny H}})=0\), see (8b). The Riemann tensor can be computed as well, matching the result above. \[\mathbf{R}^{\lambda}{}_{\rho\mu\nu}=\partial_{\mu}\boldsymbol{\Gamma}^{\lambda}{}_{\nu\rho}+\boldsymbol{\Gamma}^{\lambda}{}_{\mu\sigma}\boldsymbol{\Gamma}^{\sigma}{}_{\nu\rho}-\big{(}\mu\leftrightarrow\nu\big{)}=-v^{\lambda}e_{\rho}\big{(}\partial_{\mu}\omega_{\nu}-\partial_{\nu}\omega_{\mu}\big{)} \tag{297}\] ## Appendix B Lorentzian and Carrollian PSMs The purpose of this appendix is to show that there is a target space diffeomorphism that maps Lorentzian PSMs to Carrollian PSMs. For a more detailed explanation of the connection between PSMs and Lorentzian 2d dilaton gravity we refer to [64]. The application of target space diffeomorphisms in the Lorentzian case was elaborated on in [120, 121]. The general form of a PSM is \[I_{\text{\tiny PSM}}[A_{I},X^{I}]=\frac{k}{2\pi}\,\int_{\mathcal{M}}\Big{(}X^{I}\,\text{d}A_{I}+\frac{1}{2}\,P^{IJ}(X^{K})\,A_{I}\wedge A_{J}\Big{)}. \tag{298}\] It describes Lorentzian 2d dilaton gravity if a 3d target space coordinatized by \(X,X^{+},X^{-}\) and a Poisson tensor of the form \[P^{IJ}=\begin{pmatrix}0&-X^{+}&X^{-}\\ X^{+}&0&\hat{\mathcal{V}}(X,\,X^{+}X^{-})\\ -X^{-}&-\hat{\mathcal{V}}(X,\,X^{+}X^{-})&0\end{pmatrix} \tag{299}\] are chosen. The connection to gravitational variables is achieved by a background target space metric \[\eta^{IJ}=\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix} \tag{300}\] that allows constructing the 2d worldsheet metric from the PSM connection, \[g_{\mu\nu}=\eta^{IJ}A_{I\,\mu}\,A_{J\,\nu}=e^{+}_{\mu}e^{-}_{\nu}+e^{-}_{\mu}e^{+}_{\nu}. \tag{301}\] The Lorentzian Poisson tensor (299) can now be mapped to the Carrollian Poisson tensor (27) by the target space diffeomorphism14 Footnote 14: We assume here \(X^{\pm}>0\). Similar considerations work for other signs. \[X_{\text{\tiny H}}=\sqrt{2X^{+}X^{-}}\qquad\qquad X_{\text{\tiny P}}=\frac{X_{\text{\tiny H}}}{2}\ln\frac{X^{-}}{X^{+}}\,.
\tag{302}\] Explicit calculation of the transformed Poisson tensor components yields \[P^{X\text{\tiny H}} =P^{X+}\frac{\partial X_{\text{\tiny H}}}{\partial X^{+}}+P^{X-} \frac{\partial X_{\text{\tiny H}}}{\partial X^{-}}=0 \tag{303}\] \[P^{X\text{\tiny P}} =P^{X+}\frac{\partial X_{\text{\tiny P}}}{\partial X^{+}}+P^{X-} \frac{\partial X_{\text{\tiny P}}}{\partial X^{-}}=X_{\text{\tiny H}}\] (304) \[P^{\text{\tiny HP}} =P^{+-}\bigg{(}\frac{\partial X_{\text{\tiny H}}}{\partial X^{+} }\frac{\partial X_{\text{\tiny P}}}{\partial X^{-}}-\frac{\partial X_{\text{ \tiny H}}}{\partial X^{-}}\frac{\partial X_{\text{\tiny P}}}{\partial X^{+}} \bigg{)}=\hat{\mathcal{V}}(X,\tfrac{1}{2}X_{\text{\tiny H}}^{2})\,. \tag{305}\] With the identification \(\hat{\mathcal{V}}(X,\tfrac{1}{2}X_{\text{\tiny H}}^{2})=\mathcal{V}(X,X_{\text {\tiny H}})\) this produces indeed the Carrollian Poisson tensor (27). Besides changing the Poisson tensor, we also need to change the map from PSM to worldsheet geometry variables, which in the Lorentzian case is given by the background target space metric (300). Taking the Carrollian limit thereof yields \[\eta^{IJ}_{\text{\tiny C}}=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&1\end{pmatrix} \tag{306}\] so that the worldsheet metric degenerates, as required. \[h_{\mu\nu}=\eta^{IJ}_{\text{\tiny C}}A_{I\,\mu}\,A_{J\,\nu}=e_{\mu}e_{\nu} \tag{307}\]
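The chain-rule computation (303)-(305) is mechanical and can be reproduced symbolically; a sketch with our own symbol names:

```python
# Symbolic check of (303)-(305): the target space diffeomorphism (302) maps the
# Lorentzian Poisson tensor (299) to the Carrollian one.
import sympy as sp

X, Xp, Xm = sp.symbols('X X_plus X_minus', positive=True)
V = sp.Function('V')(X, Xp*Xm)                  # \hat{V}(X, X^+ X^-)

XH = sp.sqrt(2*Xp*Xm)                           # eq. (302)
XP = XH/2*sp.log(Xm/Xp)

# Components of (299): P^{X +} = -X^+, P^{X -} = X^-, P^{+ -} = V
PXp, PXm, Ppm = -Xp, Xm, V

P_X_XH = PXp*sp.diff(XH, Xp) + PXm*sp.diff(XH, Xm)                       # (303)
P_X_XP = PXp*sp.diff(XP, Xp) + PXm*sp.diff(XP, Xm)                       # (304)
P_HP   = Ppm*(sp.diff(XH, Xp)*sp.diff(XP, Xm)
              - sp.diff(XH, Xm)*sp.diff(XP, Xp))                         # (305)

print(sp.simplify(P_X_XH))          # expect 0
print(sp.simplify(P_X_XP - XH))     # expect 0, i.e. P^{X X_P} = X_H
print(sp.simplify(P_HP - V))        # expect 0, i.e. P^{X_H X_P} = V
```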
2308.01692
Functional shift-induced degenerate transcritical Neimark-Sacker bifurcation in a discrete hypercycle
In this article we investigate the impact of functional shifts in a time-discrete cross-catalytic system. We use the hypercycle model, considering that one of the species shifts from a cooperator to a degrader. At the bifurcation caused by this functional shift, an invariant curve collapses to a point $P$ while, simultaneously, two fixed points collide with $P$ in a transcritical manner. All points of a line containing $P$ become fixed points at the bifurcation and only at the bifurcation. Hofbauer and Iooss presented and proved a result that provides sufficient conditions for a Neimark-Sacker bifurcation (the authors called it "Hopf") to occur in a special degenerate situation. They use it to prove the existence of an invariant curve for the model when a parameter related to the time discreteness of the system goes to infinity, so that the system becomes a continuous-time one. Here we study the bifurcation that governs the functional shift and demonstrate the existence of an invariant curve when the cooperation parameter approaches zero and thus approaches the switch to a degrading species. This invariant curve lives in a different domain and exists for a different range of parameter values than the one considered by these authors. In order to apply the mentioned result we uncouple the Neimark-Sacker and the transcritical bifurcations. This is accomplished by a preliminary singular change of coordinates that puts the involved fixed points at a fixed position, so that they stay at a fixed distance from one another. Finally, going back to the original variables, we can describe mathematically the details of this bifurcation.
E. Fontich, A. Guillamon, J. Perona, J. Sardanyés
2023-08-03T11:21:45Z
http://arxiv.org/abs/2308.01692v1
Functional shift-induced degenerate transcritical Neimark-Sacker bifurcation in a discrete hypercycle ###### Abstract. In this article we investigate the impact of functional shifts in a time-discrete cross-catalytic system. We use the hypercycle model, considering that one of the species shifts from a cooperator to a degrader. At the bifurcation caused by this functional shift, an invariant curve collapses to a point \(P\) while, simultaneously, two fixed points collide with \(P\) in a transcritical manner. All points of a line containing \(P\) become fixed points at the bifurcation and only at the bifurcation. Hofbauer and Iooss [29] presented and proved a result that provides sufficient conditions for a Neimark-Sacker bifurcation (the authors called it "Hopf") to occur in a special degenerate situation. They use it to prove the existence of an invariant curve for the model when a parameter related to the time discreteness of the system goes to infinity, so that the system becomes a continuous-time one. Here we study the bifurcation that governs the functional shift and demonstrate the existence of an invariant curve when the cooperation parameter approaches zero and thus approaches the switch to a degrading species. This invariant curve lives in a different domain and exists for a different range of parameter values than the one considered by these authors. In order to apply the mentioned result we uncouple the Neimark-Sacker and the transcritical bifurcations. This is accomplished by a preliminary singular change of coordinates that puts the involved fixed points at a fixed position, so that they stay at a fixed distance from one another. Finally, going back to the original variables, we can describe mathematically the details of this bifurcation. ## Introduction Hypercycles are catalytic sets of macromolecules, where each replicator catalyzes the replication of the next species of the set. This concept was first introduced by Manfred Eigen and Peter Schuster in 1977 [2] and has played a pivotal role in the study of prebiotic evolution and the overcoming of the so-called information crisis [3, 4, 5]. Research in hypercycles primarily investigates cooperative interactions among replicators [1]. Hypercycle theory has also been applied to a variety of other biological systems. In this article, we study the dynamics of the system as a function of the cooperation parameter that drives the functional shift. We first introduce the model and, for the sake of completeness, we compute the fixed points and their stability as a function of the parameters. Next, we recall the Neimark-Sacker bifurcation for discrete-time dynamical systems and a version of it due to Hofbauer and Iooss [29] that proves the existence of a family of attracting invariant curves in a family of maps that can be expressed as a step of Euler's integration method for a differential equation; the corresponding vector field has a fixed point with a pair of purely imaginary eigenvalues while the other eigenvalues have negative real part and, moreover, the real part of the coefficient of the resonant term of lowest degree is negative. Finally, we apply the theorem by Hofbauer and Iooss to our discrete-time hypercycle with four species, \(n=4\), for \(k_{1}\to 0^{+}\). To do so, we make a singular change of coordinates to make our system less degenerate. We also carry out a translation to place the fixed point at the origin and we rewrite the system in the form stated in the hypothesis of the theorem.
Then, we prove both hypotheses of the theorem to conclude that the four-species system has an attracting invariant curve that appears when \(k_{1}=0\) through a degenerate Neimark-Sacker bifurcation. ## 1. Hofbauer's discrete-time hypercycle model In this section we present the discrete-time model for the hypercycle, introduced by Hofbauer in [28], and we relate it to a continuous-time model. We also compute the basic elements of the dynamics such as the fixed points of the system and their stability. This dynamical system consists of a set of \(n\) species \(s_{i}\), \(1\leqslant i\leqslant n\), such that the species \(s_{i-1}\) catalyzes only the next one \(s_{i}\), in a cyclic manner, with a strength \(k_{i}\). Let \(x_{i}\) be the concentration of the \(i\)-th species. For convenience of notation, we write \(x_{0}:=x_{n}\) and \(x_{n+1}:=x_{1}\) and similarly for \(k_{0}\), \(k_{n+1}\). The model assumes that if the total population is normalized to \(1\), it remains constant. This is accomplished by the introduction of a flux \(\phi(x)\), which also introduces competition between all the hypercycle species. This fact implies that the system will be defined on the \(n\)-simplex \[S_{n}=\{x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}|\ \sum_{i=1}^{n}x_{i}=1,\,x_{i} \geqslant 0,\,1\leqslant i\leqslant n\}. \tag{1}\] We introduce the hyperplane \[\Delta_{n}=\{x\in\mathbb{R}^{n}|\ \sum_{i=1}^{n}x_{i}=1\},\] and the set \(\widetilde{\Delta}_{n}=\{x\in\Delta_{n}|\ x_{i}\neq 0,\,1\leqslant i\leqslant n\}\). The system is determined by the map \(F=(F_{1},\ldots,F_{n}):S_{n}\to S_{n}\), where \[F_{i}(x)=\frac{C+k_{i}x_{i-1}}{C+\phi(x)}x_{i},\qquad 1\leqslant i\leqslant n, \tag{2}\] \(C>0\) is a constant of proportionality and \[\phi(x)=\sum_{i=1}^{n}k_{i}x_{i}x_{i-1}. \tag{3}\] In [28] the map (2) is related to the corresponding continuous-time system \[\dot{x_{i}}=x_{i}(k_{i}x_{i-1}-\phi(x)),\qquad 1\leqslant i\leqslant n, \tag{4}\] which also satisfies that if the initial total population is \(1\), then it remains constant. In particular we can write \[\frac{F_{i}(x)-x_{i}}{C^{-1}}=C\,\left(\frac{C+k_{i}x_{i-1}}{C+\phi(x)}x_{i}-x _{i}\right)=x_{i}\left(k_{i}x_{i-1}-\phi(x)\right)\frac{C}{C+\phi(x)}. \tag{5}\] This expression allows us to compare the map \(F\) with the continuous time model, since \(C^{-1}\) can be interpreted as the time interval between two generations and \(F(x)\) can be seen as the Euler step of length \(C^{-1}\) of the continuous-time system (4) since \[\lim_{C^{-1}\to 0}\frac{x_{i}(t+C^{-1})-x_{i}(t)}{C^{-1}}=x_{i}(t)(k_{i}\,x_{i-1}(t)- \phi(x(t))),\] identifying \(F_{i}(x)(t)\) with \(x_{i}(t+C^{-1})\) provided that \(x_{i}=x_{i}(t)\). Then, for large values of \(C\), the discrete-time model (2) approximates the differential equation (4) and therefore we expect that, in such case, both models have similar properties. If we keep the constants \(k_{i}\) positive and bounded away from zero, and we let one of them, say \(k_{\ell}\), go to zero and then become negative, the model can be interpreted biologically as there has been a _functional shift_ meaning that the role of the species \(s_{\ell-1}\) changes from cooperation (\(k_{\ell}>0\)) to degradation (\(k_{\ell}<0\)). Note that, due to the cyclic character of our model, we can assume, without loss of generality, that the parameter that tends to zero is \(k_{1}\) while all \(k_{i}\) with \(i>1\) are bounded away from zero. In Ref. 
[15] this bifurcation was studied and it was found numerically that for four species there is an attracting invariant curve that tends to the point \(Q=(0,0,0,1)\) when \(k_{1}\to 0^{+}\). Also, when \(k_{1}>0\) there is a unique fixed point \(P\) in the interior of the simplex \(S_{n}\cap\widetilde{\Delta}_{n}\), already described in [28]. The point \(P\) collides with \(Q\) when \(k_{1}=0\), and goes out of \(S_{n}\) in a transcritical-like bifurcation. Moreover, in [15] it is shown that for any number of species, when \(k_{1}\leq 0\), the basin of attraction of \(Q\) contains \(S_{n}\cap\widetilde{\Delta}_{n}\). This fact implies that, in this case, the system has no invariant curves in \(S_{n}\). As we stated above, the main goal of this contribution is to prove the existence of an invariant curve when \(k_{1}>0\) generated through a Neimark-Sacker bifurcation that occurs at the same time as both a transcritical bifurcation and the appearance of a new line of fixed points. We remark that [28] proves the existence of an invariant curve of amplitude \(O(1/\sqrt{C})\) when \(C\to\infty\). Here, instead, we are looking for an invariant curve in a different region of the space of parameters, and focus our attention on the above-mentioned functional shift. Since we assume \(C>0\), we can rewrite \(F_{i}(x)\) as \(\frac{1+\widetilde{k}_{i}x_{i-1}}{1+\widetilde{\phi}(x)}x_{i}\) with \(\widetilde{\phi}(x)=\sum_{i=1}^{n}\widetilde{k}_{i}x_{i}x_{i-1}\) and \(\widetilde{k}_{i}=k_{i}/C\). In this way we can get rid of \(C\). We write \(k_{i}\) again instead of \(\widetilde{k}_{i}\). Notice that letting \(C\) go to \(\infty\) in (2) results in letting the new parameters \(k_{i}\) tend to zero. Here, we will only let \(k_{1}\) go to zero, keeping the other parameters fixed. Concretely, we will take \(k_{j}>0\), for \(2\leq j\leq n\), arbitrary and \(k_{1}\) variable such that \(k_{1}>k_{1}^{*}\), with \(k_{1}^{*}=-(\sum_{j=2}^{n}\frac{1}{k_{j}})^{-1}<0\). ### Fixed points and stability As a first step towards understanding the dynamics, and for the sake of completeness, we give a brief description of the fixed points of system (2) and their stability. The unique fixed point \(P\) in the interior of the simplex \(S_{n}\cap\widetilde{\Delta}_{n}\) was studied in [28]. In [15] the fixed points in the boundary of the simplex were also studied. Since the fixed points must satisfy \(F_{i}(x)=x_{i}\) for all \(i\), for the points in \(\widetilde{\Delta}_{n}\), from (2), we get the condition \(k_{i}x_{i-1}=\phi(x)\), \(1\leq i\leq n\), or equivalently, \[k_{2}x_{1}=k_{3}x_{2}=\cdots=k_{n}x_{n-1}=k_{1}x_{n}=\phi(x).\] For \(k_{1}\leq 0\), there are no fixed points in \(S_{n}\cap\widetilde{\Delta}_{n}\). If \(k_{1}>0\), the last set of equations gives \(x_{i}=\frac{k_{1}}{k_{i+1}}x_{n}\), \(1\leq i\leq n-1\). Using \(\sum_{i=1}^{n}x_{i}=1\), we get that the fixed point is \[P:=(p_{1},\ldots,p_{n}),\qquad\text{with}\quad\,p_{i}=\frac{1}{k_{i+1}M_{1}},\quad\,1\leq i\leq n,\] where \(M_{1}=\sum_{j=1}^{n}\frac{1}{k_{j}}\). When \(k_{1}=0\), \(P\) coincides with \(Q=(0,0,\ldots,0,1)\) and when \(k_{1}^{*}<k_{1}<0\), \(M_{1}<0\) and therefore \(p_{j}<0\) for \(1\leqslant j\leqslant n-1\). Moreover, in Proposition 1 of [15] it was proved that \(x\in\Delta_{n}\setminus\widetilde{\Delta}_{n}\) is a fixed point if and only if \(k_{i}x_{i}x_{i-1}=0\)\(\forall i\).
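For concreteness, the map (2) in the form with \(C\) absorbed and the interior fixed point \(P\) can be checked numerically; the sketch below uses illustrative parameter values of our own choosing.

```python
# Numerical sketch of the discrete hypercycle map (with C absorbed) and of the interior
# fixed point P; the parameter values are illustrative, not taken from the paper.
import numpy as np

def hypercycle_map(x, k):
    """One iteration of F_i(x) = (1 + k_i x_{i-1}) x_i / (1 + phi(x))."""
    xprev = np.roll(x, 1)                  # x_{i-1}, with the convention x_0 = x_n
    phi = np.sum(k * x * xprev)
    return (1 + k * xprev) * x / (1 + phi)

k = np.array([0.05, 1.0, 1.0, 1.0])        # n = 4, with k_1 small and positive
M1 = np.sum(1 / k)
P = 1 / (np.roll(k, -1) * M1)              # p_i = 1 / (k_{i+1} M_1)

print(np.allclose(hypercycle_map(P, k), P))     # True: P is a fixed point
x = np.array([0.3, 0.2, 0.1, 0.4])
for _ in range(1000):
    x = hypercycle_map(x, k)
print(abs(x.sum() - 1.0) < 1e-12)               # True: the simplex is preserved (up to rounding)
```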
In the four species case, for \(k_{1}\neq 0\), the boundary of the simplex contains the segments of fixed points \(\{\,(\alpha,0,1-\alpha,0)\,|\,\alpha\in[0,1]\,\}\) and \(\{\,(0,\alpha,0,1-\alpha)\,|\,\alpha\in[0,1]\,\}\). If \(k_{1}=0\) we have the additional segment of fixed points \(\{\,(\alpha,0,0,1-\alpha)\,|\,\alpha\in[0,1]\,\}\). In particular, the vertices \(q^{(m)}:=(\delta_{m,1},\ldots,\delta_{m,4})\), \(1\leqslant m\leqslant 4\), of the simplex \(S_{4}\) are always fixed points. Here \(\delta_{k,l}\) is the Kronecker delta. When \(k_{1}\to 0^{+}\) the inner fixed point \(P\) tends to the fixed point \(Q=q^{(4)}=(0,0,0,1)\). Again, in the four species case, we have that the eigenvalues of the inner fixed point \(P\) are \[\lambda_{j}=1+\frac{1}{M_{1}+1}e^{i\theta_{j}},\qquad\theta_{j}=\frac{2\pi j}{4},\qquad j=1,2,3,\] together with \(\lambda_{0}=1+\frac{1}{M_{1}+1}\), which has the eigenvector \((1,1,1,1)\) orthogonal to \(S_{4}\). Therefore, concerning the dynamics in \(S_{4}\) the relevant eigenvalues are \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) (see [15, 28]). We also have that \[|\lambda_{1}|^{2}=|\lambda_{3}|^{2}=1+\Big{(}\frac{1}{M_{1}+1}\Big{)}^{2}>1,\qquad|\lambda_{2}|^{2}=\Big{(}1-\frac{1}{M_{1}+1}\Big{)}^{2}<1.\] Therefore, \(P\) is unstable. The eigenvalues of \(q^{(m)}\) are given in [15]. They are \(1\) (double) and \(1+k_{m+1}\). In particular, \(Q=q^{(4)}\) has one eigenvalue that goes from greater than \(1\) to less than \(1\) when \(k_{1}\) goes from positive to negative. ## 2. A non-generic Neimark-Sacker bifurcation theorem In this section, we recall a result by Hofbauer and Iooss in [29] that studies a Neimark-Sacker bifurcation for difference equations. In that paper the authors call it _Hopf bifurcation_, but we prefer to refer to it as _Neimark-Sacker_ since nowadays this term is more commonly used for maps, see for instance [31]. The result deals with a discretization of a differential equation near a fixed point with two purely imaginary eigenvalues and the remaining ones with negative real part. The final goal is to prove that an invariant curve appears around the fixed point of the discrete-time system. First, we consider an autonomous differential equation \[\dot{x}=f(x) \tag{6}\] defined in an open set of \(\mathbb{R}^{n}\) and we assume that the origin is an equilibrium point, i.e. \(f(0)=0\). As in Euler's method of integration, we consider the following family of maps \[T_{\varepsilon}:x\mapsto x+\varepsilon f(x),\qquad\varepsilon>0\quad\text{ small.} \tag{7}\] Since \(f(0)=0\) we can write \[f(x)=Ax+O(\|x\|^{2}), \tag{8}\] where \(A=Df(0)\). We immediately have that the maps \(T_{\varepsilon}\) have the form \[T_{\varepsilon}(x)=(\text{Id}+\varepsilon A)x+\varepsilon O(\|x\|^{2}).\] It is clear that \(\lambda\) is an eigenvalue of \(A\) if and only if \(1+\varepsilon\lambda\) is an eigenvalue of \(DT_{\varepsilon}(0)\). If \(A\) has a pair of purely imaginary eigenvalues, then we have that the fixed point is unstable for the map (7) for every \(\varepsilon>0\), although \(x=0\) could be asymptotically stable for system (6). **Definition 1**.: _Assume that the origin is an equilibrium point of system (6) and it has a pair of purely imaginary eigenvalues \(\pm i\,\omega\).
Suppose \(f\) is sufficiently differentiable and that (6) can be transformed, around the origin, by a change of coordinates, into the form_ \[\begin{cases}\dot{z}=i\omega z+\sum_{j=1}^{2k}\alpha_{j}z|z|^{2j}+O(|z|+|v|)^{4k+2},\\ \dot{v}=Av+O(|z|^{2}+|v|^{2}),\end{cases} \tag{9}\] _where \(|z|^{2}=z\overline{z}\), and \(\alpha_{j}=a_{j}+ib_{j}\). We say that the origin is a weakly stable equilibrium point of order \(k\) if there exists \(k\geq 1\) such that \(a_{1}=\cdots=a_{k-1}=0\) and \(a_{k}<0\)._ In [29] the following theorem is proved. **Theorem 1**.: _Consider equation (6) in an open neighbourhood of \(x=0\) in \(\mathbb{R}^{n}\) with \(f\) of class \(C^{r}\), such that_ 1. \(f(0)=0\) _and_ \(Df(0)\) _has two purely imaginary eigenvalues_ \(\pm i\omega\)_, and the rest of the eigenvalues have negative real part, and_ 2. _the equilibrium point_ \(x=0\) _is a weakly stable equilibrium point of order_ \(k\)_, with_ \(4k+2\leq r\)_._ _Then, for any family of maps \(T_{\varepsilon}\) of class \(C^{r}\), of the form_ \[T_{\varepsilon}(x)=x+\varepsilon f(x)+O(\varepsilon^{2}\|x\|^{2}),\qquad\varepsilon>0, \tag{10}\] _there exists an \(\varepsilon\)-dependent family of invariant and attracting closed curves around the fixed point \(x=0\) of radius \(O(\varepsilon^{1/2k})\)._ ## 3. A degenerate transcritical Neimark-Sacker bifurcation In this section, our goal is to prove analytically the existence of an invariant curve by applying Theorem 1 to our discrete-time system. In other words, we will prove that an invariant curve is born at \(k_{1}=0\) and persists for \(k_{1}\) positive and sufficiently small. Since the bifurcation is very degenerate, we will uncouple the Neimark-Sacker and the transcritical bifurcations. For this purpose, we force the inner fixed point to be located at the "center" of the simplex for all values of \(k_{1}\). This is accomplished using barycentric coordinates. Let \[y_{i}=\frac{k_{i+1}x_{i}}{\sum_{j=1}^{4}k_{j+1}x_{j}},\qquad 1\leq i\leq 4.\] Indeed, this change allows us to separate the inner fixed point \(P\) from the vertex \(Q=(0,0,0,1)\), transforming \(P\) into the "center point" \(p\) of the simplex \(S_{4}\): \[p=\Big{(}\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4}\Big{)}.\] This transformation is singular at \(k_{1}=0\), but it facilitates the study of the system near the fixed point \(p\), since it keeps \(p\) far from the other fixed points. Now, we are going to compute the new system in barycentric coordinates.
First, we notice that \[x_{i}=\frac{y_{i}\sum_{j=1}^{4}k_{j+1}x_{j}}{k_{i+1}}.\] Since this change of coordinates sends \(\Delta_{n}\) to \(\Delta_{n}\) we can write \[\sum_{i=1}^{4}x_{i}=1\quad\Longleftrightarrow\quad\sum_{i=1}^{4}\frac{y_{i}\sum_{j=1}^{4}k_{j+1}x_{j}}{k_{i+1}}=1\quad\Longleftrightarrow\quad\sum_{j=1}^{4}k_{j+1}x_{j}=\frac{1}{\sum_{i=1}^{4}\frac{y_{i}}{k_{i+1}}},\] and we obtain \[x_{i}=\frac{y_{i}}{N(y)k_{i+1}},\qquad\text{where}\quad N(y)=\sum_{j=1}^{4}\frac{y_{j}}{k_{j+1}}.\] We can express our map \(F\) in (2) in the new variables \(y_{i}\) as \[F_{i}(y)=\frac{k_{i+1}F_{i}(x)}{\sum_{j=1}^{4}k_{j+1}F_{j}(x)}=\frac{k_{i+1}\Big{(}\frac{1+k_{i}x_{i-1}}{1+\phi(x)}x_{i}\Big{)}}{\sum_{j=1}^{4}\Big{(}\frac{1+k_{j}x_{j-1}}{1+\phi(x)}x_{j}\Big{)}k_{j+1}}=\frac{k_{i+1}\Big{(}1+k_{i}\frac{y_{i-1}}{k_{i}N(y)}\Big{)}\frac{y_{i}}{k_{i+1}N(y)}}{\sum_{j=1}^{4}\Big{(}1+k_{j}\frac{y_{j-1}}{k_{j}N(y)}\Big{)}\Big{(}\frac{y_{j}}{k_{j+1}N(y)}\Big{)}k_{j+1}}=\frac{1+\frac{y_{i-1}}{N(y)}}{\sum_{j=1}^{4}\Big{(}1+\frac{y_{j-1}}{N(y)}\Big{)}y_{j}}y_{i}.\] Next, we perform a translation to have the fixed point at the origin: \[z_{i}=y_{i}-\frac{1}{4},\qquad 1\leqslant i\leqslant 4.\] In these new coordinates, \(\sum_{j=1}^{4}z_{j}=0\) and the value of \(N(z)\) is given by \[N(z)=\sum_{j=1}^{4}\frac{(z_{j}+\frac{1}{4})}{k_{j+1}}=\sum_{j=1}^{4}\frac{z_{j}}{k_{j+1}}+\frac{1}{4}\sum_{j=1}^{4}\frac{1}{k_{j+1}}.\] Moreover, the components of \(F\) become: \[F_{i}(z)=F_{i}(y)-\frac{1}{4}=\frac{1+\frac{y_{i-1}}{N(y)}}{\sum_{j=1}^{4}\Big{(}1+\frac{y_{j-1}}{N(y)}\Big{)}y_{j}}y_{i}-\frac{1}{4}=\frac{N(y)+y_{i-1}}{\sum_{j=1}^{4}\Big{(}N(y)+y_{j-1}\Big{)}y_{j}}y_{i}-\frac{1}{4}\] \[=\frac{N(z)+z_{i-1}+\frac{1}{4}}{W(z)}(z_{i}+\frac{1}{4})-\frac{1}{4}=z_{i}+\Big{(}\frac{N(z)+z_{i-1}+\frac{1}{4}}{W(z)}-1\Big{)}(z_{i}+\frac{1}{4})=z_{i}+\frac{N(z)+z_{i-1}+\frac{1}{4}-W(z)}{W(z)}(z_{i}+\frac{1}{4}),\] where \(W(z):=\sum_{j=1}^{4}\Big{(}N(z)+z_{j-1}+\frac{1}{4}\Big{)}(z_{j}+\frac{1}{4})\). Note that \[W(z)=\sum_{j=1}^{4}N(z)(z_{j}+\frac{1}{4})+\sum_{j=1}^{4}(z_{j-1}+\frac{1}{4})(z_{j}+\frac{1}{4})=N(z)+\sum_{j=1}^{4}z_{j-1}z_{j}+\frac{1}{4}.\] Therefore, \[F_{i}(z)=z_{i}+\frac{z_{i-1}-\sum_{j=1}^{4}z_{j}z_{j-1}}{\frac{1}{4}\Big{(}1+\sum_{j=1}^{4}\frac{1}{k_{j+1}}\Big{)}+\sum_{j=1}^{4}\frac{z_{j}}{k_{j+1}}+\sum_{j=1}^{4}z_{j-1}z_{j}}\left(z_{i}+\frac{1}{4}\right).\] Now, keeping the lower-order terms, we can rewrite the components of the system as \[\begin{array}{rl}F_{i}(z)&=z_{i}+\delta(z_{i-1}-\sum_{j=1}^{4}z_{j}z_{j-1})\frac{1/4+z_{i}}{1/4+z_{4}}+O(\delta^{2})O(|z|^{2})\\ &=z_{i}+\delta(z_{i-1}-\sum_{j=1}^{4}z_{j}z_{j-1})(1+4\,z_{i})(1-4\,z_{4}+16\,z_{4}^{2}+O(z_{4}^{3}))+O(\delta^{2})O(|z|^{2}),\end{array} \tag{11}\] where \[\delta=\frac{k_{1}}{1+k_{1}(1+M_{2})},\qquad\text{with}\quad\ M_{2}=\frac{1}{k_{2}}+\frac{1}{k_{3}}+\frac{1}{k_{4}}.\] Note that \(O(\delta)=O(k_{1})\) and, more importantly, that \(F\) expands exactly as \(T_{\varepsilon}\) in (10) of Theorem 1. In order to apply this theorem, we first reduce the dimension of the map \(F\) by \(1\) using that \(\sum_{j=1}^{4}z_{j}=0\). We choose to eliminate \(z_{2}\); as a consequence, we also have that \[\sum_{j=1}^{4}z_{j}z_{j-1}=(z_{1}+z_{3})(z_{2}+z_{4})=-(z_{1}+z_{3})^{2}.\] Let us call \(G\) the new map, defined by \(G_{i}(z_{1},z_{3},z_{4})=F_{j}(z_{1},-z_{1}-z_{3}-z_{4},z_{3},z_{4})\), with \(G_{1}=F_{1}\), \(G_{2}=F_{3}\) and \(G_{3}=F_{4}\).
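Although elementary, this computation is easy to get wrong, so it is worth confirming numerically that the map in barycentric coordinates really is the conjugate of (2) and that \(P\) is sent to \(p\). The following Python sketch is only an illustrative check of ours (random data, made-up parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.array([0.3, 1.0, 2.0, 0.5])           # illustrative values of k_1, ..., k_4

def F(x):
    # map (2) with C rescaled to 1, indices cyclic (x_0 := x_4)
    phi = sum(k[i] * x[i] * x[i - 1] for i in range(4))
    return np.array([(1 + k[i] * x[i - 1]) * x[i] / (1 + phi) for i in range(4)])

def to_y(x):
    # barycentric coordinates y_i = k_{i+1} x_i / sum_j k_{j+1} x_j  (k_5 := k_1)
    w = np.array([k[(i + 1) % 4] * x[i] for i in range(4)])
    return w / w.sum()

def F_y(y):
    # the formula derived above: F_i(y) = (1 + y_{i-1}/N(y)) y_i / sum_j (1 + y_{j-1}/N(y)) y_j
    N = sum(y[j] / k[(j + 1) % 4] for j in range(4))
    num = np.array([(1 + y[i - 1] / N) * y[i] for i in range(4)])
    return num / num.sum()

x = rng.random(4); x /= x.sum()               # a random interior point of the simplex
print(np.allclose(to_y(F(x)), F_y(to_y(x))))  # conjugation: expected True

M1 = np.sum(1.0 / k)
P = np.array([1.0 / (k[(i + 1) % 4] * M1) for i in range(4)])
print(np.allclose(to_y(P), 0.25))             # P is sent to p = (1/4, 1/4, 1/4, 1/4)
```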
Then, we can express the system as a family of maps \(G(z)\) of the form \[G(z)=z+\delta g(z)+O(\delta^{2})O(|z|^{2}),\] where the components of \(g\) are obtained from (11) by substituting \(z_{2}=-z_{1}-z_{3}-z_{4}\): \[\begin{cases}g_{1}(z)=(1+4z_{1})(z_{4}+(z_{1}+z_{3})^{2})(1-4z_{4}+16z_{4}^{2} +O(z_{4}^{3})),\\ g_{2}(z)=(1+4z_{3})(-z_{1}-z_{3}-z_{4}+(z_{1}+z_{3})^{2})(1-4z_{4}+16z_{4}^{2} +O(z_{4}^{3})),\\ g_{3}(z)=z_{3}+(z_{1}+z_{3})^{2}+O(z_{4}^{3}).\end{cases} \tag{12}\] Expanding in powers of \(z\) we can write \[\begin{cases}g_{1}(z)=z_{4}+P_{12}(z)+P_{13}(z)+O(|z|^{4}),\\ g_{2}(z)=(-z_{1}-z_{3}-z_{4})+P_{22}(z)+P_{23}(z)+O(|z|^{4}),\\ g_{3}(z)=z_{3}+P_{32}(z)+P_{33}(z)+O(|z|^{4}),\end{cases} \tag{13}\] where \(P_{ij}\) indicates the term of degree \(j\) in the \(i\)-th component of the vector field, and \[\begin{array}{l}P_{12}(z)=-4z_{4}^{2}+4z_{4}z_{1}+(z_{1}+z_{3})^{2},\\ P_{13}(z)=4\,(z_{1}-z_{4})\,(z_{1}+z_{3}+2\,z_{4})\,(z_{1}+z_{3}-2\,z_{4}),\\ P_{22}(z)=4z_{4}(z_{1}+z_{3}+z_{4})-4z_{3}(z_{1}+z_{3}+z_{4})+(z_{1}+z_{3})^{ 2},\\ P_{23}(z)=4\,(z_{3}-z_{4})\,(z_{1}+z_{3}+2\,z_{4})^{2},\\ P_{32}(z)=(z_{1}+z_{3})^{2},\qquad P_{33}(z)=0.\end{array}\] Now, we have to check that \(g\) satisfies the two hypotheses of Theorem 1. Clearly, we have that both \(g(0)=0\) and the derivative of \(g\) at the origin, \[Dg(0)=\left(\begin{array}{ccc}0&0&1\\ -1&-1&-1\\ 0&1&0\end{array}\right)\!,\] has two purely imaginary eigenvalues and a third one whose real part is negative; more precisely, the eigenvalues are \(\lambda_{1,2}=\pm i\), and \(\lambda_{3}=-1\). Thus, the first hypothesis of the theorem follows. The corresponding eigenvectors are \(v_{1}=(1,-1,i)\), \(v_{2}=(1,-1,-i)\) and \(v_{3}=(1,\ \ 1,-1)\). For the second hypothesis of the theorem, we need to compute the normal form for the system \(\dot{z}=g(z)\). First, we diagonalize \(Dg(0)\) using the linear change \(z=C\,\zeta\), where \[C=\left(\begin{array}{ccc}1&1&1\\ -1&-1&1\\ i&-i&-1\end{array}\right)\quad\text{and}\quad\ C^{-1}=\frac{1}{4}\left( \begin{array}{ccc}1-i&-1-i&-2i\\ 1+i&-1+i&2i\\ 2&2&0\end{array}\right)\!.\] In the new set of variables \(\zeta=(\xi,\overline{\xi},\eta)\), system \(\dot{z}=g(z)\) is transformed into \[\dot{\zeta}=g^{(1)}(\zeta):=C^{-1}g(C\zeta).\] Observe that, by construction, the linear term of \(g^{(1)}\) becomes \(\left(\begin{array}{c}i\xi\\ -i\overline{\xi}\\ -\eta\end{array}\right)\). Next, we have to compute \(g^{(1)}\) for quadratic and cubic terms. 
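Before doing so, we note that the linear algebra just used can be verified directly; the short sketch below (a numerical check of ours, not part of the argument) confirms that the displayed matrix is indeed \(C^{-1}\) and that \(C\) diagonalizes \(Dg(0)\) with eigenvalues \(i\), \(-i\), \(-1\).

```python
import numpy as np

Dg0 = np.array([[0, 0, 1],
                [-1, -1, -1],
                [0, 1, 0]], dtype=complex)   # linear part of g at the origin
C = np.array([[1, 1, 1],
              [-1, -1, 1],
              [1j, -1j, -1]])                # columns: the eigenvectors v_1, v_2, v_3
C_inv = 0.25 * np.array([[1 - 1j, -1 - 1j, -2j],
                         [1 + 1j, -1 + 1j, 2j],
                         [2, 2, 0]])

print(np.allclose(C_inv @ C, np.eye(3)))                     # the displayed inverse is correct
print(np.allclose(C_inv @ Dg0 @ C, np.diag([1j, -1j, -1])))  # eigenvalues i, -i, -1
```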
We first compute the corresponding terms in \(g(C\zeta)\); writing each component \(i\) in the form \(g_{i2}(C\zeta)+g_{i3}(C\zeta)\), we have \[g_{12}(C\zeta) =4(-\eta^{2}+\xi^{2}(1+i)-2\xi\overline{\xi}+\xi\eta(-1+3i)+ \overline{\xi}\eta(-1-3i)+\overline{\xi}^{2}(1-i)),\] \[g_{13}(C\zeta) =16\,i\,(\xi\,i-\overline{\xi}\,i-2\,\eta-\xi-\overline{\xi})\,( \xi\,i-\overline{\xi}\,i-2\,\eta)\,(\xi-\overline{\xi}),\] \[g_{22}(C\zeta) =4(-\eta^{2}+\xi^{2}(-1+i)+2\xi\overline{\xi}+\xi\eta(1-i)+ \overline{\xi}\eta(1+i)-\overline{\xi}^{2}(1+i)),\] \[g_{23}(C\zeta) =16\,(\xi\,i-\overline{\xi}\,i-2\,\eta+\xi+\overline{\xi})\,( \xi-\overline{\xi})^{2},\] \[g_{32}(C\zeta) =4\eta^{2},\] \[g_{33}(C\zeta) =0.\] Once we have \(g(C\zeta)\), we then compute \(g^{(1)}(\zeta)=C^{-1}g(C\zeta)\): \[g^{(1)}(\zeta)=\frac{1}{4}\left(\begin{array}{c}(1-i)g_{1}(C\zeta)+(-1-i)g_ {2}(C\zeta)-2ig_{3}(C\zeta)\\ (1+i)g_{1}(C\zeta)+(-1+i)g_{2}(C\zeta)+2ig_{3}(C\zeta)\\ 2g_{1}(C\zeta)+2g_{2}(C\zeta)\end{array}\right)=:\left(\begin{array}{c}g_{ 1}^{(1)}(\zeta)\\ g_{2}^{(1)}(\zeta)\\ g_{3}^{(1)}(\zeta)\end{array}\right).\] We decompose the three components as \(g_{i}^{(1)}=g_{i1}^{(1)}+g_{i2}^{(1)}+g_{i3}^{(1)}\), for \(i=1,2,3\). Clearly \(g_{11}^{(1)}(\zeta)=i\xi\), \(g_{21}^{(1)}(\zeta)=-i\overline{\xi}\) and \(g_{31}^{(1)}(\zeta)=-\eta\), and \[g_{12}^{(1)}(\zeta)= 4\left(\xi^{2}-\xi\overline{\xi}+i\xi\eta-(1+i)\overline{\xi} \eta\right),\] \[g_{13}^{(1)}(\zeta)= 16\left(-i\xi^{3}+2i\xi^{2}\overline{\xi}+2\xi^{2}\eta-i\xi \overline{\xi}^{2}+(-3+i)\xi\overline{\xi}\eta+(1+i)\xi\eta^{2}+(1-i) \overline{\xi}^{2}\eta\right.\] \[\qquad\left.-(1+i)\overline{\xi}\eta^{2}\right),\] \[g_{22}^{(1)}(\zeta)= 4\left(-\xi\overline{\xi}+(-1+i)\xi\eta+\overline{\xi}^{2}-i \overline{\xi}\eta\right),\] \[g_{23}^{(1)}(\zeta)= 16\left(i\overline{\xi}^{3}+i\xi^{2}\overline{\xi}+(1+i)\xi^{2 }\eta-2i\xi\overline{\xi}^{2}-(3+i)\xi\overline{\xi}\eta+(-1+i)\xi\eta^{2}+2 \overline{\xi}^{2}\eta\right.\] \[\qquad\left.+(1-i)\overline{\xi}\eta^{2}\right),\] \[g_{32}^{(1)}(\zeta)= 4\left(i\xi^{2}+i\xi\eta-i\overline{\xi}^{2}-i\overline{\xi} \eta-\eta^{2}\right),\] \[g_{33}^{(1)}(\zeta)= 16\left(\xi^{3}-\xi^{2}\overline{\xi}+(1+i)\xi^{2}\eta-\xi \overline{\xi}^{2}-2\xi\overline{\xi}\eta+2i\xi\eta^{2}+\overline{\xi}^{3}+( 1-i)\overline{\xi}^{2}\eta-2i\overline{\xi}\eta^{2}\right).\] We now proceed to compute the normal form of \(g^{(1)}(\zeta)\) by means of a generic change of coordinates of quadratic order that kills all quadratic terms (which are non-resonant) of \(g^{(1)}(\zeta)\) and preserves the linear ones. Let \[h(\mathrm{x})=\mathrm{x}+\widetilde{h}(\mathrm{x}),\] where \[\widetilde{h}(\mathrm{x})=\left(\begin{array}{c}a_{200}x^{2}+a_{020}y^{2}+a _{002}z^{2}+a_{110}xy+a_{101}xz+a_{011}yz\\ b_{200}x^{2}+b_{020}y^{2}+b_{002}z^{2}+b_{110}xy+b_{101}xz+b_{011}yz\\ c_{200}x^{2}+c_{020}y^{2}+c_{002}z^{2}+c_{110}xy+c_{101}xz+c_{011}yz\end{array} \right),\] and consider the change of variables \(\zeta=h(\mathrm{x})\), with \(\mathrm{x}=(x,y,z)\). We have that \[\dot{\zeta}=Dh(\mathrm{x})\dot{\mathrm{x}},\] and so \[\dot{\mathrm{x}}=Dh(\mathrm{x})^{-1}g^{(1)}(h(\mathrm{x}))=:g^{(2)}(\mathrm{x}). \tag{14}\] **Remark 1**.: _To do the computations, we only keep track the terms up to degree \(3\) and we take advantage of the degree-structure presented in the previous steps to discard terms of degree \(4\) or higher in \(g^{(1)}(h(\mathrm{x}))\). 
Moreover, we can approximate \(Dh(\mathrm{x})^{-1}\) by_ \[Dh(\mathrm{x})^{-1}\approx\mathrm{Id}-D\widetilde{h}(\mathrm{x})+D\widetilde{h}(\mathrm{x})^{2}.\] _Note that, when we substitute this approximation in (14), the \(\mathrm{Id}\) applies to the expression of \(g^{(1)}(h(\mathrm{x}))\) up to degree \(3\), but \(-D\widetilde{h}(\mathrm{x})\) applies only up to quadratic terms and \(D\widetilde{h}(\mathrm{x})^{2}\) only to the linear terms._ Following the strategy commented in Remark 1, the quadratic terms of the new system (14) have the following components \[g^{(2)}_{1}(\mathrm{x})= (4-ia_{200})x^{2}+3ia_{020}\,y^{2}+(ia_{002}+2a_{002})z^{2}+(ia_{110}-4)xy\] \[+(4i+a_{101})xz+(-4-4i+2ia_{011}+a_{011})yz,\] \[g^{(2)}_{2}(\mathrm{x})= (-3ib_{200})x^{2}+(ib_{020}+4)y^{2}+(-ib_{002}+2b_{002})z^{2}+(-ib_{110}-4)xy\] \[+(-2ib_{101}-4+4i+b_{101})xz+(-4i+b_{011})yz,\] \[g^{(2)}_{3}(\mathrm{x})= (-2ic_{200}+4i-c_{200})x^{2}+(2ic_{020}-4i-c_{020})y^{2}+(c_{002}-4)z^{2}+(-c_{110})xy\] \[+(-ic_{101}+4i)xz+(ic_{011}-4i)yz.\] In order to kill every quadratic term, we must take \[a_{200}=-4i,\quad a_{020}=0,\quad a_{002}=0,\quad a_{110}=-4i,\quad a_{101}=-4i,\quad a_{011}=\frac{4}{5}(3-i),\] \[b_{200}=0,\quad b_{020}=4i,\quad b_{002}=0,\quad b_{110}=4i,\quad b_{101}=\frac{4}{5}(3+i),\quad b_{011}=4i,\] \[c_{200}=\frac{4}{5}(2+i),\quad c_{020}=\frac{4}{5}(2-i),\quad c_{002}=4,\quad c_{110}=0,\quad c_{101}=4,\quad c_{011}=4.\] Next, we substitute the above values of the coefficients of \(\tilde{h}(\mathrm{x})\) into the cubic terms of \(g^{(2)}\). It is worth mentioning that, in principle, these cubic terms can have non-zero coefficients for all the monomials. Thus, in order to have the cubic normal form, we should continue with a new change of variables that would kill all cubic terms but the resonant ones. However, by the normal form theory, we know that this new change would keep all resonant terms invariant. Since we are only interested in the sign of the real part of one specific resonant term, we do not need to perform the full change of variables. Therefore, if we call \((z,\bar{z},\nu)\) the new set of variables, we can assert that the system can be written as \[\begin{cases}\dot{z}=iz+\Big{(}-\frac{16}{5}-\frac{48}{5}i\Big{)}z^{2}\bar{z}+\ldots,\\ \dot{\bar{z}}=-i\bar{z}+\Big{(}-\frac{16}{5}+\frac{48}{5}i\Big{)}z\bar{z}^{2}+\ldots,\\ \dot{\nu}=-\nu+\frac{64}{5}z\bar{z}\nu+\ldots.\end{cases} \tag{15}\] Observe that (15) corresponds to the normal form (9) with \(n=3\), \(A=-1\) and, most importantly, \(\alpha_{1}=-\frac{16}{5}-\frac{48}{5}i\). Since \(\mathrm{Re}(\alpha_{1})\) is negative, from Definition 1 we can ensure that the origin is a weakly stable equilibrium point of order \(1\), and so we have checked the second hypothesis of Theorem 1 for our system. Therefore, we conclude that the four-species discrete-time hypercycle (2) presents a family of attracting invariant curves depending on the parameter \(\varepsilon=k_{1}\), when \(k_{1}>0\). Going back to the original variables we have that, for \(k_{1}>0\) small, the system has a closed invariant curve which arises from \(Q=q^{(4)}=(0,0,0,1)\), while, at the bifurcation value \(k_{1}=0\), a line of fixed points appears and this corner point \(Q=q^{(4)}\) collides with the inner fixed point \(P\) in a transcritical bifurcation. ## 4.
Conclusions The main goal of this work was to provide an analytical proof of the existence of an attracting invariant curve in the four-member discrete-time hypercycle when a cooperation coefficient approaches the functional shift, motivated by the numerical evidence described in [15]. In the discrete-time hypercycle model, this phenomenon is reflected in the fact that the parameter \(k_{1}\) goes from positive to negative: the invariant curve shrinks to a corner of the domain and disappears through a Neimark-Sacker bifurcation when \(k_{1}=0\). We have studied this degenerate bifurcation analytically. For this purpose, we have followed a result by Hofbauer and Iooss [29] that provides sufficient conditions for a Neimark-Sacker bifurcation. In fact, the theorem by Hofbauer and Iooss was introduced to prove the existence of another invariant curve in the same model. However, the application of this theorem is not straightforward for the case \(k_{1}=0\). The coincidence of the Neimark-Sacker bifurcation with a transcritical one forced us to decouple them. For this purpose, we performed a singular change of coordinates that ensured a constant distance between the fixed points that are relevant in each bifurcation. Subsequently, in order to prove the hypotheses of the theorem by Hofbauer and Iooss, we brought the system to its normal form by making a new change of variables that eliminates all quadratic terms and reveals the resonant cubic term. By undertaking this analytical exploration, we have been able to provide a complete understanding of how the invariant curve arises in the scenario of transition from cooperation to degradation. The presence of invariant attracting curves ensures the survival of all species; the study of the dynamics within these invariant curves is an interesting continuation of this problem that would shed light on how oscillations in the model are structured. ## Acknowledgments EF has been funded by the Spanish grant PID2021-125535NB-I00 (MICINN/FEDER,UE). AG has been funded by MCIN/AEI/10.13039/501100011033 and by ERDF "A way of making Europe" grants PID-2021-122954NB-I00 and PID2022-137708NB-I00, and the AGAUR project 2021SGR1039. JS has been supported by the Ramon y Cajal grant RYC-2017-22243 funded by MCIN/AEI/10.13039/501100011033 "FSE invests in your future", and by the 2020-2021 Biodiversa+ and Water JPI joint call under the BiodivRestore ERA-NET Cofund (GA N\({}^{\circ}\)101003777) project MPA4Sustainability with funding organizations: Innovation Fund Denmark (IFD), Agence Nationale de la Recherche (ANR), Fundacao para a Ciencia e a Tecnologia (FCT), Swedish Environmental Protection Agency (SEPA), and grant PCI2022-132926 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGeneration EU/PRTR. This work has also been funded through the Severo Ochoa and Maria de Maeztu Program for Centers and Units of Excellence in R&D (CEX2020-001084-M). We thank CERCA Programme/Generalitat de Catalunya for institutional support.
2305.09232
A Cuntz--Krieger Uniqueness theorem for C*-algebras of relative generalized Boolean dynamical systems
We prove a version of the Cuntz--Krieger Uniqueness Theorem for $C^*$-algebras of arbitrary relative generalized Boolean dynamical systems. We then describe properties of a $C^*$-algebra of a relative generalized Boolean dynamical system when the underlying Boolean dynamical system satisfies Condition (K). We also define a notion of minimality of a Boolean dynamical system and give necessary and sufficient conditions for minimality. Using these results, we characterize the generalized Boolean dynamical systems whose $C^*$-algebra is simple.
Toke Meier Carlsen, Eun Ji Kang
2023-05-16T07:21:04Z
http://arxiv.org/abs/2305.09232v1
A Cuntz-Krieger uniqueness theorem for \(C^{*}\)-algebras of relative generalized Boolean dynamical systems ###### Abstract. We prove a version of the Cuntz-Krieger Uniqueness Theorem for \(C^{*}\)-algebras of arbitrary relative generalized Boolean dynamical systems. We then describe properties of a \(C^{*}\)-algebra of a relative generalized Boolean dynamical system when the underlying Boolean dynamical system satisfies Condition (K). We also define a notion of minimality of a Boolean dynamical system and give necessary and sufficient conditions for minimality. Using these results, we characterize the generalized Boolean dynamical systems whose \(C^{*}\)-algebra is simple. Key words and phrases: Generalized Boolean Dynamical Systems, Partially defined topological graph, Cuntz-Krieger uniqueness theorem, Simple \(C^{*}\)-algebra 2000 Mathematics Subject Classification: 46L05, 46L55 ## 1. Introduction In [6], Cuntz and Krieger constructed a \(C^{*}\)-algebra \(\mathcal{O}_{A}\) generated by \(n\) partial isometries satisfying certain algebraic conditions arising from an \(n\times n\)-matrix \(A\) with entries in \(\{0,1\}\), and they proved the uniqueness theorem of \(\mathcal{O}_{A}\) [6, Theorem 2.13]. This result says that if the matrix \(A\) satisfies a fullness condition (I), then any two families of non-zero partial isometries satisfying the above-mentioned algebraic conditions generate isomorphic \(C^{*}\)-algebras. The theorem is now known as the _Cuntz-Krieger uniqueness theorem_. It is fundamental for the theory of Cuntz-Krieger algebras (as the algebras \(\mathcal{O}_{A}\) are now called) as it was used to prove a simplicity result for Cuntz-Krieger algebras [6, Theorem 2.14] and a description of the primitive ideal space of \(\mathcal{O}_{A}\) [14, Theorem 4.7]. When studying a new class of \(C^{*}\)-algebras that contains the class of Cuntz-Krieger algebras, it is therefore one of the main objectives to prove a result that extends the above-mentioned Cuntz-Krieger uniqueness theorem to every \(C^{*}\)-algebra in the new class. For example, graph algebras, topological graph algebras, higher rank graph algebras, labeled graph \(C^{*}\)-algebras and \(C^{*}\)-algebras of Boolean dynamical systems are generalizations of Cuntz-Krieger algebras, and generalizations of the Cuntz-Krieger uniqueness theorem have been proven for these classes of algebras ([13, Corollary 2.12], [16, Theorem 5.12], [23, Corollary 4.6], [4, Theorem 5.5], [9, Theorem 9.9]). Recalling specifically the case of \(C^{*}\)-algebras of Boolean dynamical systems, if a Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta)\) with \(\mathcal{B}\) and \(\mathcal{L}\) countable satisfies Condition (L), then any two Cuntz-Krieger representations consisting of nonzero partial isometries generate isomorphic \(C^{*}\)-algebras ([9, Theorem 9.9]).
A relative generalized Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) consists of a Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta)\) together with a family \((\mathcal{I}_{\alpha})_{\alpha\in\mathcal{L}}\) of ideals in \(\mathcal{B}\) such that \(\mathcal{R}_{\alpha}\subseteq\mathcal{I}_{\alpha}\) for each \(\alpha\in\mathcal{L}\), and an ideal \(\mathcal{J}\) of \(\mathcal{B}_{\mathrm{reg}}\) (see Definition 2.1 below).
In this paper, we give necessary and sufficient conditions for the simplicity of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) without any countability conditions, which generalizes both [9, Theorem 9.16] and [12, Theorem 3.6]. The directness of its proof is one of the advantages of our result. Another advantage is that we give a new characterization of the simplicity of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) in terms of maximal tails. This paper is organized as follows. Section 2 contains necessary background on relative generalized Boolean dynamical systems, partially defined topological graphs and their \(C^{*}\)-algebras.
In Section 3.1, we review the way to define a partially defined topological graph from a generalized Boolean dynamical system, and define an isomorphism between the \(C^{*}\)-algebra of the partially defined topological graph and the \(C^{*}\)-algebra associated to the generalized Boolean dynamical system (Proposition 3.3). Also, we prove that the Condition (L) of a generalized Boolean dynamical system is equivalent to the topological freeness of the associated partially defined topological graph (Proposition 3.5). We then apply these results to prove our Cuntz-Krieger uniqueness theorem. In Section 3.2, we recall that for a relative generalized Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\), there is a generalized Boolean dynamical system \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime}_{\alpha},\mathcal{I}^{ \prime}_{\alpha})\) such that \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) and \(C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}^{\prime}_ {\alpha})\) are isomorphic, and show that \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (L) if and only if \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime})\) satisfies Condition (L). Then we apply the Cuntz-Krieger uniqueness theorem of \(C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}^{\prime}_ {\alpha})\) to have our uniqueness theorem. In Section 4, we state equivalent conditions for a \(C^{*}\)-algebra of a relative generalized Boolean dynamical system that satisfies Condition (K). In Section 5, we define a minimality of a Boolean dynamical system and give a number of equivalent conditions to a Boolean dynamical system being minimal. We then characterize the generalized Boolean dynamical systems which have simple \(C^{*}\)-algebras. ## 2. Preliminaries We will in this section recall some notation and terminology from [7] and [8]. We let \(\mathbb{N}_{0}\) denote the set of nonnegative integers, \(\mathbb{N}\) denote the set of positive integers, and let \(\mathbb{T}=\{z\in\mathbb{C}:|z|=1\}\). ### Boolean algebras A _Boolean algebra_ is a relatively complemented distributive lattice \((\mathcal{B},\cap,\cup)\) with least element \(\emptyset\). (A Boolean algebra is often called a _generalized Boolean algebra_.) If \(\mathcal{B}\) is a Boolean algebra, one can define a binary operation \(\setminus:\mathcal{B}\times\mathcal{B}\rightarrow\mathcal{B}\) such that \(A\cap(B\setminus A)=\emptyset\), \(A\cup(B\setminus A)=A\cup B\) for \(A,B\in\mathcal{B}\). Given \(A,B\in\mathcal{B}\), \(A\cup B\) is called the _union_ of \(A\) and \(B\), \(A\cap B\) is called the _intersection_ of \(A\) and \(B\), and \(B\setminus A\) is called the _relative complement_ of \(A\) relative to \(B\). A Boolean algebra \(\mathcal{B}\) is called _unital_ if it has a greatest element \(1\), namely there exists \(1\in\mathcal{B}\) such that \(1\cup A=1\) and \(1\cap A=A\) for all \(A\in\mathcal{B}\). (Often, Boolean algebras are assumed to be unital, but, we in this paper do not assume that \(\mathcal{B}\) is unital.) A partial order \(\subseteq\) on \(\mathcal{B}\) is the relation \(A\subseteq B\iff A\cap B=A\) for \(A,B\in\mathcal{B}\). We say \(A\) is a _subset_ of \(B\) if \(A\subseteq B\). Note that \(A\cup B\) and \(A\cap B\) are the least upper-bound and the greatest lower-bound of \(A\) and \(B\) with respect to the partial order \(\subseteq\). 
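For concreteness, these operations can be illustrated with the Boolean algebra of all finite subsets of \(\mathbb{N}\), a standard example of a non-unital Boolean algebra; the short Python sketch below is our illustration and is not taken from the paper.

```python
# The Boolean algebra of finite subsets of the natural numbers: ∩, ∪, \ are the
# usual set operations and frozenset() plays the role of the least element Ø.
A = frozenset({1, 2, 3})
B = frozenset({2, 3, 4})

# The defining identities of the relative complement B \ A:
assert A & (B - A) == frozenset()        # A ∩ (B \ A) = Ø
assert A | (B - A) == A | B              # A ∪ (B \ A) = A ∪ B

# The partial order: A ⊆ B  ⟺  A ∩ B = A.
subseteq = lambda X, Y: X & Y == X
assert subseteq(A & B, A) and subseteq(A & B, B)    # A ∩ B is a lower bound of A and B
assert subseteq(A, A | B) and subseteq(B, A | B)    # A ∪ B is an upper bound of A and B
print("all identities hold")
```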
A non-empty subset \(\mathcal{I}\) of \(\mathcal{B}\) is called an _ideal_ if \(A\cup B\in\mathcal{I}\) whenever \(A,B\in\mathcal{I}\), and \(\mathcal{I}\) is lower closed, that is, if \(A\in\mathcal{I}\) and \(B\subseteq A\), then \(B\in\mathcal{I}\). For \(A\in\mathcal{B}\), we define \(\mathcal{I}_{A}:=\{B\in\mathcal{B}:B\subseteq A\}\), that is, the ideal generated by \(A\). Let \(\mathcal{I}\) be an ideal of \(\mathcal{B}\). For \(A,B\in\mathcal{B}\), we define an equivalence relation by \[A\sim B\iff A\cup A^{\prime}=B\cup B^{\prime}\text{ for some }A^{\prime},B^{\prime}\in\mathcal{I}.\] We denote by \([A]_{\mathcal{I}}\) the equivalence class of \(A\in\mathcal{B}\) under \(\sim\). If there is no confusion, we just write \([A]\) instead of \([A]_{\mathcal{I}}\). The set of all equivalence classes of \(\mathcal{B}\) is denoted by \(\mathcal{B}/\mathcal{I}\). Then, \(\mathcal{B}/\mathcal{I}\) is a Boolean algebra with operations defined by \[[A]\cap[B]=[A\cap B],\ [A]\cup[B]=[A\cup B]\text{ and }[A]\setminus[B]=[A\setminus B].\] A non-empty subset \(\eta\subseteq\mathcal{B}\) is called a _filter_ if \(\emptyset\notin\eta\), \(A\cap B\in\eta\) whenever \(A,B\in\eta\) and \(\eta\) is upper closed, that is, if \(A\in\eta\) and \(A\subseteq B\), then \(B\in\eta\). A filter is an _ultrafilter_ if it is a maximal element in the set of filters with respect to inclusion of filters. For a filter \(\xi\subseteq\mathcal{B}\), \(\xi\) is an ultrafilter if and only if it is prime, that is, if \(B,B^{\prime}\in\mathcal{B}\) with \(B\cup B^{\prime}\in\xi\), then either \(B\in\xi\) or \(B^{\prime}\in\xi\). We denote by \(\widehat{\mathcal{B}}\) the set of all ultrafilters of \(\mathcal{B}\). For \(A\in\mathcal{B}\), we let \(Z(A):=\{\xi\in\widehat{\mathcal{B}}\colon A\in\xi\}\) and we equip \(\widehat{\mathcal{B}}\) with the topology generated by \(\{Z(A):A\in\mathcal{B}\}\). Then \(\widehat{\mathcal{B}}\) is a totally disconnected locally compact Hausdorff space such that each \(Z(A)\) is compact and open. ### Relative generalized Boolean dynamical systems A map \(\phi:\mathcal{B}\to\mathcal{B}^{\prime}\) between two Boolean algebras \(\mathcal{B}\) and \(\mathcal{B}^{\prime}\) is called a _Boolean homomorphism_ if \[\phi(A\cap B)=\phi(A)\cap\phi(B),\ \phi(A\cup B)=\phi(A)\cup\phi(B)\text{ and }\phi(A\setminus B)=\phi(A)\setminus\phi(B)\] for all \(A,B\in\mathcal{B}\). A map \(\theta:\mathcal{B}\to\mathcal{B}\) is called an _action_ on \(\mathcal{B}\) if it is a Boolean homomorphism with \(\theta(\emptyset)=\emptyset\). Let \(\mathcal{L}\) be a set. We define \(\mathcal{L}^{0}:=\{\emptyset\}\), \(\mathcal{L}^{n}:=\{(\beta_{1},\ldots,\beta_{n}):\beta_{i}\in\mathcal{L}\}\) for \(n\in\mathbb{N}\), and \(\mathcal{L}^{*}:=\cup_{n\in\mathbb{N}_{0}}\mathcal{L}^{n}\). For \(\beta=(\beta_{1},\ldots,\beta_{n})\in\mathcal{L}^{n}\), we denote \(|\beta|:=n\) and write \(\beta_{1}\cdots\beta_{n}\) instead of \((\beta_{1},\ldots,\beta_{n})\). Also, for \(1\leq i\leq j\leq|\beta|\), we denote by \(\beta_{i,j}\) the sub-word \(\beta_{i}\cdots\beta_{j}\) of \(\beta\), where \(\beta_{i,i}=\beta_{i}\). For \(\beta=\beta_{1}\cdots\beta_{n}\), \(\gamma=\gamma_{1}\cdots\gamma_{m}\in\mathcal{L}^{*}\setminus\{\emptyset\}\), we denote by \(\beta\gamma\) the word \(\beta_{1}\cdots\beta_{n}\gamma_{1}\cdots\gamma_{m}\). If \(\beta=\emptyset\), then \(\beta\gamma:=\gamma\), and if \(\gamma=\emptyset\), then \(\beta\gamma:=\beta\).
For \(k\in\mathbb{N}\), we let \(\beta^{k}:=\beta\beta\cdots\beta\) where the concatenation on the right has \(k\) terms, and let \(\beta^{0}:=\emptyset\). By \(\mathcal{L}^{\infty}\) we mean the set of sequences with entries in \(\mathcal{L}\). If \(x=(x_{1},x_{2},\ldots)\in\mathcal{L}^{\infty}\) and \(n\in\mathbb{N}\), then we let \(x_{1,n}\) denote the word \(x_{1}x_{2}\cdots x_{n}\in\mathcal{L}^{n}\). We also let \(x_{1,0}=\emptyset\). We say that a triple \((\mathcal{B},\mathcal{L},\theta)\) is a _Boolean dynamical system_ if \(\mathcal{B}\) is a Boolean algebra, \(\mathcal{L}\) is a set, and \(\theta:=(\theta_{\alpha})_{\alpha\in\mathcal{L}}\) is a family of actions on \(\mathcal{B}\). If \((\mathcal{B},\mathcal{L},\theta)\) is a Boolean dynamical system and \(\beta=\beta_{1}\cdots\beta_{n}\in\mathcal{L}^{*}\setminus\{\emptyset\}\), then we let \(\theta_{\beta}:\mathcal{B}\to\mathcal{B}\) be the action defined by \(\theta_{\beta}:=\theta_{\beta_{n}}\circ\cdots\circ\theta_{\beta_{1}}\). We also let \(\theta_{\emptyset}:=\mathrm{Id}\). For \(B\in\mathcal{B}\), we define \(\Delta_{B}^{(\mathcal{B},\mathcal{L},\theta)}:=\{\alpha\in\mathcal{L}:\theta_{ \alpha}(B)\neq\emptyset\}.\) We will often just write \(\Delta_{B}\) instead of \(\Delta_{B}^{(\mathcal{B},\mathcal{L},\theta)}\). We say that \(A\in\mathcal{B}\) is _regular_ if for any \(\emptyset\neq B\in\mathcal{I}_{A}\), we have \(0<|\Delta_{B}|<\infty\). We denote by \(\mathcal{B}_{\mathrm{reg}}\) the set of all regular sets. Note that \(\emptyset\in\mathcal{B}_{reg}\) and \(\mathcal{B}_{\mathrm{reg}}\) is an ideal of \(\mathcal{B}\). **Definition 2.1**.: A _generalized Boolean dynamical system_ ([8, Definition 3.2]) is a quadruple \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) where \((\mathcal{B},\mathcal{L},\theta)\) is a Boolean dynamical system and \(\{\mathcal{I}_{\alpha}\}_{\alpha\in\mathcal{L}}\) is a family of ideals in \(\mathcal{B}\) such that \(\mathcal{R}_{\alpha}\subseteq\mathcal{I}_{\alpha}\) for each \(\alpha\in\mathcal{L}\), where \[\mathcal{R}_{\alpha}:=\{A\in\mathcal{B}:A\subseteq\theta_{\alpha}(B)\text{ for some }B\in\mathcal{B}\}.\] A _relative generalized Boolean dynamical system_ is a pentamerous \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) where \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) is a generalized Boolean dynamical system and \(\mathcal{J}\) is an ideal of \(\mathcal{B}_{\mathrm{reg}}\). A relative Boolean dynamical system_ is a quadruple \((\mathcal{B},\mathcal{L},\theta;\mathcal{J})\) where \((\mathcal{B},\mathcal{L},\theta)\) is a Boolean dynamical system and \(\mathcal{J}\) is an ideal of \(\mathcal{B}_{\mathrm{reg}}\). ### Saturated hereditary ideals and quotient Boolean dynamical systems Suppose \((\mathcal{B},\mathcal{L},\theta)\) is a Boolean dynamical system. An ideal \(\mathcal{H}\) of \(\mathcal{B}\) is _hereditary_ if \(\theta_{\alpha}(A)\in\mathcal{H}\) whenever \(A\in\mathcal{H}\) and \(\alpha\in\mathcal{L}\), and _saturated_ if \(A\in\mathcal{H}\) whenever \(A\in\mathcal{B}_{\mathrm{reg}}\) and \(\theta_{\alpha}(A)\in\mathcal{H}\) for every \(\alpha\in\Delta_{A}\). If \((\mathcal{B},\mathcal{L},\theta;\mathcal{J})\) is a relative Boolean dynamical system, then an ideal \(\mathcal{H}\) of \(\mathcal{B}\) is _\(\mathcal{J}\)-saturated_ if \(A\in\mathcal{H}\) whenever \(A\in\mathcal{J}\) and \(\theta_{\alpha}(A)\in\mathcal{H}\) for every \(\alpha\in\Delta_{A}\). 
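To make the preceding definitions concrete, consider the following toy Boolean dynamical system (our illustration, not an example from the paper): \(\mathcal{B}\) is the Boolean algebra of finite subsets of \(\mathbb{Z}\), \(\mathcal{L}=\{a,b\}\), and each letter acts by taking the image under an injective map on points; images under injective maps preserve \(\cap\), \(\cup\) and \(\setminus\), so each \(\theta_{\alpha}\) is an action. The sketch also composes actions along a word as \(\theta_{\beta}=\theta_{\beta_{n}}\circ\cdots\circ\theta_{\beta_{1}}\) and computes \(\Delta_{B}\).

```python
# A toy Boolean dynamical system (B, L, theta): B = finite subsets of Z,
# L = {'a', 'b'}, and each letter acts by an injective map, so that theta_alpha
# preserves ∩, ∪, \ and sends Ø to Ø.
theta = {
    'a': lambda A: frozenset(n + 1 for n in A),   # shift n ↦ n + 1
    'b': lambda A: frozenset(2 * n for n in A),   # doubling n ↦ 2n (also injective)
}

def theta_word(beta, A):
    # theta_beta = theta_{beta_n} ∘ ... ∘ theta_{beta_1}: apply beta_1 first.
    for letter in beta:
        A = theta[letter](A)
    return A

A = frozenset({0, 1})
B = frozenset({1, 2})
for f in theta.values():
    assert f(A & B) == f(A) & f(B)
    assert f(A | B) == f(A) | f(B)
    assert f(A - B) == f(A) - f(B)
    assert f(frozenset()) == frozenset()

print(theta_word('ab', A))    # theta_b(theta_a(A)) = {2, 4}
print(theta_word('ba', A))    # theta_a(theta_b(A)) = {1, 3}

# Delta_B = {alpha in L : theta_alpha(B) != Ø}; here every non-empty set has Delta_B = L.
Delta = lambda S: {alpha for alpha, f in theta.items() if f(S) != frozenset()}
print(Delta(A))               # {'a', 'b'}
```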
Suppose that \((\mathcal{B},\mathcal{L},\theta;\mathcal{J})\) is a relative Boolean dynamical system and \(\mathcal{H}\) is a hereditary \(\mathcal{J}\)-saturated ideal of \(\mathcal{B}\). If we define \(\theta_{\alpha}([A]_{\mathcal{H}})=[\theta_{\alpha}(A)]_{\mathcal{H}}\) for all \([A]_{\mathcal{H}}\in\mathcal{B}/\mathcal{H}\) and \(\alpha\in\mathcal{L}\), then \((\mathcal{B}/\mathcal{H},\mathcal{L},\theta)\) becomes a Boolean dynamical system. We let \[\mathcal{B}_{\mathcal{H}}:=\big{\{}A\in\mathcal{B}:[A]_{\mathcal{H}}\in( \mathcal{B}/\mathcal{H})_{\mathrm{reg}}\big{\}}\] (notice that there is a mistake in the definition of \(\mathcal{B}_{\mathcal{H}}\) given on Page 24 of [8]). Then \(\mathcal{B}_{\mathcal{H}}\) is an ideal of \(\mathcal{B}\) and \(\mathcal{H}\cup\mathcal{J}\subseteq\mathcal{B}_{\mathcal{H}}\). If \(\mathcal{S}\) is an ideal of \(\mathcal{B}_{\mathcal{H}}\) such that \(\mathcal{H}\cup\mathcal{J}\subseteq\mathcal{S}\) and we let \([\mathcal{S}]:=\{[A]_{\mathcal{H}}:A\in\mathcal{S}\}\), then \((\mathcal{B}/\mathcal{H},\mathcal{L},\theta;[\mathcal{S}])\) is a relative Boolean dynamical system. Moreover, if \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) is a generalized Boolean dynamical system and we for each \(\alpha\in\mathcal{B}\) let \([\mathcal{I}_{\alpha}]:=\{[A]_{\mathcal{H}}:A\in\mathcal{I}_{\alpha}\}\), then \((\mathcal{B}/\mathcal{H},\mathcal{L},\theta,[\mathcal{I}_{\alpha}])\) is a generalized Boolean dynamical system and \((\mathcal{B}/\mathcal{H},\mathcal{L},\theta,[\mathcal{I}_{\alpha}];[\mathcal{S}])\) is a relative generalized Boolean dynamical system. ### The \(C^{*}\)-algebra of a relative generalized Boolean dynamical system Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) be a relative generalized Boolean dynamical system. A \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\)_-representation_ ([8, Definition 3.3]) consists of a family of projections \(\{P_{A}:A\in\mathcal{B}\}\) and a family of partial isometries \(\{S_{\alpha,B}:\alpha\in\mathcal{L},\ B\in\mathcal{I}_{\alpha}\}\) in a \(C^{*}\)-algebra such that for \(A,A^{\prime}\in\mathcal{B}\), \(\alpha,\alpha^{\prime}\in\mathcal{L}\), \(B\in\mathcal{I}_{\alpha}\) and \(B^{\prime}\in\mathcal{I}_{\alpha^{\prime}}\), 1. \(P_{\emptyset}=0\), \(P_{A\cap A^{\prime}}=P_{A}P_{A^{\prime}}\), and \(P_{A\cup A^{\prime}}=P_{A}+P_{A^{\prime}}-P_{A\cap A^{\prime}}\); 2. \(P_{A}S_{\alpha,B}=S_{\alpha,B}P_{\theta_{\alpha}(A)}\); 3. \(S^{*}_{\alpha,B}S_{\alpha^{\prime},B^{\prime}}=\delta_{\alpha,\alpha^{\prime }}P_{B\cap B^{\prime}}\); 4. \(P_{A}=\sum_{\alpha\in\Delta_{A}}S_{\alpha,\theta_{\alpha}(A)}S^{*}_{\alpha, \theta_{\alpha}(A)}\) for all \(A\in\mathcal{J}\). The \(C^{*}\)_-algebra of \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\)_, which we denote by \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\), is defined to be the \(C^{*}\)-algebra generated by a universal \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\)-representation. A \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{B}_{reg})\)-representation is called a \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\)_-representation_. We write \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) for \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{B}_{reg})\) and call it the \(C^{*}\)_-algebra of \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\)_. 
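For intuition, here is one concrete (and far from universal) representation for the smallest non-trivial data we can write down; this toy example is ours and is not taken from the paper. Take \(\mathcal{B}=\{\emptyset,A\}\), \(\mathcal{L}=\{\alpha\}\), \(\theta_{\alpha}=\mathrm{id}\), \(\mathcal{I}_{\alpha}=\mathcal{B}\) and \(\mathcal{J}=\mathcal{B}_{\mathrm{reg}}=\mathcal{B}\). Then \(\Delta_{A}=\{\alpha\}\), and relations (i)-(iv) reduce to \(P_{A}\) being a projection and \(S_{\alpha,A}\) satisfying \(S_{\alpha,A}^{*}S_{\alpha,A}=S_{\alpha,A}S_{\alpha,A}^{*}=P_{A}\); in particular, \(P_{A}=\mathrm{Id}\) together with any unitary matrix gives a representation, as the following sketch checks numerically.

```python
import numpy as np

# Toy data: B = {Ø, A}, L = {alpha}, theta_alpha = id, I_alpha = B, J = B_reg = B.
# A representation is then a projection P_A and a partial isometry S = S_{alpha,A}
# with S*S = P_A (iii) and P_A = S S* (iv); any unitary with P_A = Id works.
t = 0.7
P_empty = np.zeros((2, 2))
P_A = np.eye(2)
S = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])       # a rotation, hence a unitary on C^2

print(np.allclose(P_A @ P_A, P_A))            # (i)   P_{A∩A} = P_A P_A
print(np.allclose(P_A @ S, S @ P_A))          # (ii)  P_A S = S P_{theta_alpha(A)}
print(np.allclose(S.conj().T @ S, P_A))       # (iii) S* S = P_{A∩A}
print(np.allclose(S @ S.conj().T, P_A))       # (iv)  P_A = sum over Delta_A of S S*
```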
Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) be a relative generalized Boolean dynamical system. By the universal property of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})=C^{*}(p_{A},s_{\alpha,B})\), there is a strongly continuous action \(\gamma:\mathbb{T}\to\mathrm{Aut}(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J}))\), which we call the _gauge action_, such that \[\gamma_{z}(p_{A})=p_{A}\ \text{ and }\ \gamma_{z}(s_{\alpha,B})=zs_{\alpha,B}\] for \(A\in\mathcal{B}\), \(\alpha\in\mathcal{L}\) and \(B\in\mathcal{I}_{\alpha}\). We say that an ideal \(I\) of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) is _gauge-invariant_ if \(\gamma_{z}(I)=I\) for every \(z\in\mathbb{T}\). For \(\alpha=\alpha_{1}\alpha_{2}\cdots\alpha_{n}\in\mathcal{L}^{*}\setminus\{\emptyset\}\), we define \[\mathcal{I}_{\alpha}:=\{A\in\mathcal{B}:A\subseteq\theta_{\alpha_{2}\cdots\alpha_{n}}(B)\text{ for some }B\in\mathcal{I}_{\alpha_{1}}\}.\] For \(\alpha=\emptyset\), we let \(\mathcal{I}_{\emptyset}:=\mathcal{B}\). If \(\{P_{A},\ S_{\alpha,B}:A\in\mathcal{B},\ \alpha\in\mathcal{L},\ B\in\mathcal{I}_{\alpha}\}\) is a \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\)-representation, then we define for \(\alpha=\alpha_{1}\alpha_{2}\cdots\alpha_{n}\in\mathcal{L}^{*}\setminus\{\emptyset\}\) and \(A\in\mathcal{I}_{\alpha}\), \[S_{\alpha,A}:=S_{\alpha_{1},B}S_{\alpha_{2},\theta_{\alpha_{2}}(B)}S_{\alpha_{3},\theta_{\alpha_{2}\alpha_{3}}(B)}\cdots S_{\alpha_{n},A},\] where \(B\in\mathcal{I}_{\alpha_{1}}\) is such that \(A\subseteq\theta_{\alpha_{2}\cdots\alpha_{n}}(B)\). For \(\alpha=\emptyset\), we also define \(S_{\emptyset,A}:=P_{A}\). It is known (see [8, Remark 3.11]) that \(C^{*}(P_{A},S_{\alpha,B})=\overline{\operatorname{span}}\{S_{\alpha,A}S_{\beta,A}^{*}:\alpha,\beta\in\mathcal{L}^{*}\ \text{and}\ A\in\mathcal{I}_{\alpha}\cap\mathcal{I}_{\beta}\}\).
Moreover, the map \((\mathcal{H},\mathcal{S})\mapsto I_{(\mathcal{H},\mathcal{S})}\) is a lattice isomorphism between the lattice of pairs \((\mathcal{H},\mathcal{S})\) where \(\mathcal{H}\) is a hereditary \(\mathcal{J}\)-saturated ideal of \(\mathcal{B}\) and \(\mathcal{S}\) is an ideal of \(\mathcal{B}_{\mathcal{H}}\) such that \(\mathcal{H}\cup\mathcal{J}\subseteq\mathcal{S}\), with order given by \((\mathcal{H}_{1},\mathcal{S}_{1})\subseteq(\mathcal{H}_{2},\mathcal{S}_{2}) \iff\mathcal{H}_{1}\subseteq\mathcal{H}_{2}\) and \(\mathcal{S}_{1}\subseteq\mathcal{S}_{2}\), and the lattice of gauge-invariant ideals of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\), and there is for each pair \((\mathcal{H},\mathcal{S})\) an isomorphism \(\phi:C^{*}(\mathcal{B}/\mathcal{H},\mathcal{L},\theta,\mathcal{I}_{\alpha}; \mathcal{[}\mathcal{S}\mathrm{]})\to C^{*}(\mathcal{B},\mathcal{L},\theta, \mathcal{I}_{\alpha};\mathcal{J})/I_{(\mathcal{H},\mathcal{S})}\) such that \(\phi(p_{[A]})=p_{A}+I_{(\mathcal{H},\mathcal{S})}\) for \(A\in\mathcal{B}\), and \(\phi(s_{\alpha,[B]})=s_{\alpha,B}+I_{(\mathcal{H},\mathcal{S})}\) for \(\alpha\in\mathcal{L}\) and \(B\in\mathcal{I}_{\alpha}\). ### Condition (L) Let \((\mathcal{B},\mathcal{L},\theta)\) be a Boolean dynamical system and let \(\beta=\beta_{1}\cdots\beta_{n}\in\mathcal{L}^{*}\setminus\{\emptyset\}\) and \(A\in\mathcal{B}\setminus\{\emptyset\}\). 1. A pair \((\beta,A)\) is called a _cycle_ ([9, Definition 9.5]) if \(B=\theta_{\beta}(B)\) for \(B\in\mathcal{I}_{A}\). 2. A cycle \((\beta,A)\) has an _exit_ ([7]) if there is a \(t\leq n\) and a \(B\in\mathcal{B}\) such that \(\emptyset\neq B\subseteq\theta_{\beta_{1,t}}(A)\) and \(\Delta_{B}\neq\{\beta_{t+1}\}\) (where \(\beta_{n+1}:=\beta_{1}\)). 3. A cycle \((\beta,A)\) has _no exits_ ([9, Definition 9.5]) if for \(t\in\{1,2,\ldots,n\}\) and \(\emptyset\neq B\in\mathcal{I}_{\theta_{\beta_{1,t}}(A)}\), we have \(B\in\mathcal{B}_{reg}\) with \(\Delta_{B}=\{\beta_{t+1}\}\) (where \(\beta_{n+1}:=\beta_{1}\)). 4. \((\mathcal{B},\mathcal{L},\theta)\) is said to satisfy _Condition (L)_ ([9, Definition 9.5]) if it has no cycle with no exits. The following lemma will be used to prove Proposition 3.5. **Lemma 2.2**.: _Let \((\mathcal{B},\mathcal{L},\theta)\) be a Boolean dynamical system. If \((\beta,A)\) is a cycle with no exits, where \(\beta=\beta_{1}\cdots\beta_{n}\in\mathcal{L}^{*}\setminus\{\emptyset\}\) and \(A\in\mathcal{B}\setminus\{\emptyset\}\), then \((\beta_{k+1,n}\beta_{1,k},\theta_{\beta_{1,k}}(A))\) is a cycle for any \(k\in\{1,\cdots,n\}\)._ Proof.: Let \(k\in\{1,\cdots,n\}\). We prove that \(B=\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\) for all \(B\subseteq\theta_{\beta_{1,k}}(A)\). Take \(B\subseteq\theta_{\beta_{1,k}}(A)\). Since \(B\subseteq\theta_{\beta_{1,k}}(A)\), we have \(\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\subseteq\theta_{\beta_{k+1,n}\beta_{1,k }}(\theta_{\beta_{1,k}}(A))\). Here, \(\theta_{\beta_{k+1,n}\beta_{1,k}}(\theta_{\beta_{1,k}}(A))=\theta_{\beta_{1,k }\beta_{k+1,n}\beta_{1,k}}(A)=\theta_{\beta_{1,k}}(\theta_{\beta}(A))=\theta_{ \beta_{1,k}}(A)\). So, we have \(\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\subseteq\theta_{\beta_{1,k}}(A)\). 
On the other hand, since \((\beta,A)\) is a cycle and \(\theta_{\beta_{k+1,n}}(B)\subseteq A\), we have \[\theta_{\beta}(\theta_{\beta_{k+1,n}}(B))=\theta_{\beta_{k+1,n}}(B).\] Here, \(\theta_{\beta}(\theta_{\beta_{k+1,n}}(B))=\theta_{\beta_{k+1,n}\beta}(B)=\theta_{\beta_{k+1,n}\beta_{1,k}\beta_{k+1,n}}(B)=\theta_{\beta_{k+1,n}}(\theta_{\beta_{k+1,n}\beta_{1,k}}(B))\). So, \[\theta_{\beta_{k+1,n}}(\theta_{\beta_{k+1,n}\beta_{1,k}}(B))=\theta_{\beta_{k+1,n}}(B). \tag{1}\] If \(B\setminus\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\neq\emptyset\), then \(B\setminus\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\in\mathcal{B}_{reg}\) and \(\Delta_{B\setminus\theta_{\beta_{k+1,n}\beta_{1,k}}(B)}=\{\beta_{k+1}\}\) since \((\beta,A)\) is a cycle with no exits. So, \(\emptyset\neq\theta_{\beta_{k+1}}(B\setminus\theta_{\beta_{k+1,n}\beta_{1,k}}(B))\subseteq\theta_{\beta_{1,k+1}}(A)\). Then again, since \((\beta,A)\) is a cycle with no exits, \(\theta_{\beta_{k+1}}(B\setminus\theta_{\beta_{k+1,n}\beta_{1,k}}(B))\in\mathcal{B}_{reg}\) and \(\Delta_{\theta_{\beta_{k+1}}(B\setminus\theta_{\beta_{k+1,n}\beta_{1,k}}(B))}=\{\beta_{k+2}\}\). Continuing this process, we have \(\theta_{\beta_{k+1,n}}(B\setminus\theta_{\beta_{k+1,n}\beta_{1,k}}(B))\neq\emptyset\). This contradicts (1). Thus, \(B\subseteq\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\). If \(\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\setminus B\neq\emptyset\), the same argument gives \(\theta_{\beta_{k+1,n}}(\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\setminus B)\neq\emptyset\), which contradicts (1). Thus, \(B=\theta_{\beta_{k+1,n}\beta_{1,k}}(B)\). ### Maximal tails A _maximal tail_ ([7, Definition 4.1]) of a Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta)\) is a non-empty subset \(\mathcal{T}\) of \(\mathcal{B}\) such that (T1) \(\emptyset\notin\mathcal{T}\); (T2) if \(A\in\mathcal{B}\) and \(\theta_{\alpha}(A)\in\mathcal{T}\) for some \(\alpha\in\mathcal{L}\), then \(A\in\mathcal{T}\); (T3) if \(A\cup B\in\mathcal{T}\), then \(A\in\mathcal{T}\) or \(B\in\mathcal{T}\); (T4) if \(A\in\mathcal{T}\), \(B\in\mathcal{B}\) and \(A\subseteq B\), then \(B\in\mathcal{T}\); (T5) if \(A\in\mathcal{T}\cap\mathcal{B}_{\text{reg}}\), then there is an \(\alpha\in\mathcal{L}\) such that \(\theta_{\alpha}(A)\in\mathcal{T}\); (T6) if \(A,B\in\mathcal{T}\) then there are \(\beta,\gamma\in\mathcal{L}^{*}\) such that \(\theta_{\beta}(A)\cap\theta_{\gamma}(B)\in\mathcal{T}\). **Remark 2.3**.: A notion of maximal tail was first introduced in [7, Definition 4.1]. Condition (T6) above is equivalent to condition (T5) in [7, Definition 4.1]. **Remark 2.4**.: If \(\mathcal{T}\) is a maximal tail, then \(\mathcal{H}_{\mathcal{T}}:=\mathcal{B}\setminus\mathcal{T}\) is a hereditary \(\mathcal{J}\)-saturated ideal of \(\mathcal{B}\) for any ideal \(\mathcal{J}\) of \(\mathcal{B}_{reg}\). An _ultrafilter cycle_ ([7, Definition 3.1]) of a Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta)\) is a pair \((\beta,\eta)\), where \(\beta\in\mathcal{L}^{*}\setminus\{\emptyset\}\) and \(\eta\in\widehat{\mathcal{B}}\), such that \(\theta_{\beta}(A)\in\eta\) for all \(A\in\eta\). A maximal tail is _cyclic_ ([7, Definition 4.6]) if there is an ultrafilter cycle \((\beta,\eta)\) such that \[\mathcal{T}=\{B\in\mathcal{B}:\theta_{\gamma}(B)\in\eta\text{ for some }\gamma\in\mathcal{L}^{*}\}\] and an \(A\in\eta\) such that if \(\gamma\in\mathcal{L}^{*}\setminus\{\emptyset\}\), \(B\in\mathcal{I}_{A}\) and \(\theta_{\gamma}(B)\in\eta\), then \(B\in\eta\) and \(\gamma=\beta^{k}\) for some \(k\in\mathbb{N}\).
In [7, Proposition 6.2], the following result is stated for Boolean dynamical systems that have compact range and closed domain (see [7, Subsection 2.2]). However, the proof of [7, Proposition 6.2] works without this assumption, and once we replace elements of the form \(s_{\mu}p_{[C]}\) by \(s_{\mu,[C]}\) in the proof of [7, Proposition 6.2], we obtain the following. For further reference, we record these results here and provide a proof of the parts that need to be modified. **Proposition 2.5**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) be a generalized Boolean dynamical system. Suppose \((\mathcal{B},\mathcal{L},\theta)\) has a cyclic maximal tail \(\mathcal{T}\). Then \(C^{*}(\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta,[\mathcal{I}_{\alpha}])\) contains an ideal that is not gauge-invariant, and there is a \(B\in\mathcal{T}\) such that \(p_{[B]}C^{*}(\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta,[\mathcal{I}_{\alpha}])p_{[B]}\) is isomorphic to \(M_{n}(C(\mathbb{T}))\) for some \(n\in\mathbb{N}\), where we let \([\mathcal{I}_{\alpha}]:=\{[A]_{\mathcal{B}\setminus\mathcal{T}}:A\in\mathcal{I}_{\alpha}\}\)._ Proof.: Choose a cyclic maximal tail \(\mathcal{T}\) in \((\mathcal{B},\mathcal{L},\theta)\). Then there is an ultrafilter cycle \((\alpha,\eta)\) such that \(\mathcal{T}=\{B\in\mathcal{B}:\theta_{\beta}(B)\in\eta\text{ for some }\beta\in\mathcal{L}^{*}\}\) and an \(A\in\eta\) such that if \(\beta\in\mathcal{L}^{*}\setminus\{\emptyset\}\), \(B\in\mathcal{I}_{A}\) and \(\theta_{\beta}(B)\in\eta\), then \(B\in\eta\) and \(\beta=\alpha^{k}\) for some \(k\in\mathbb{N}\). One can then see that \(\mathcal{B}\setminus\mathcal{T}\) is a hereditary saturated ideal of \(\mathcal{B}\) and that a minimal set \([A]\) admits a cycle \(\alpha\) with no exit in \((\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta)\). We also have by [7, Lemma 6.1] that \[[\theta_{\alpha_{1,i}}(A)]\cap[\theta_{\alpha_{1,j}}(A)]=\emptyset\text{ for all }1\leq i<j\leq n. \tag{2}\] Put \(B:=\cup_{k=1}^{n}\theta_{\alpha_{1,k}}(A)\) with \(n=|\alpha|\). Then, for \(s_{\mu,[C]}s_{\nu,[C]}^{*}\in C^{*}(\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta,[\mathcal{I}_{\alpha}])\) where \([C]\in[\mathcal{I}_{\mu}]\cap[\mathcal{I}_{\nu}]\), if \[p_{[B]}(s_{\mu,[C]}s_{\nu,[C]}^{*})p_{[B]}=s_{\mu,[\theta_{\mu}(B)]\cap[C]\cap[\theta_{\nu}(B)]}s_{\nu,[\theta_{\mu}(B)]\cap[C]\cap[\theta_{\nu}(B)]}^{*}\neq 0,\] then \([\theta_{\mu}(B)]\cap[\theta_{\nu}(B)]\neq\emptyset.\) Thus \([\theta_{\mu}(B)]\neq\emptyset\) and \([\theta_{\nu}(B)]\neq\emptyset\), and hence we see that the paths \(\mu\), \(\nu\) are of the form \[\mu=\alpha_{i,n}\alpha^{l}\alpha_{1,k},\ \nu=\alpha_{j,n}\alpha^{m}\alpha_{1,k^{\prime}}\] for some \(i,j,l,m\geq 0\) and \(1\leq k,k^{\prime}\leq n\) since \((\alpha,[A])\) is a cycle with no exit. Then \(\emptyset\neq[\theta_{\mu}(B)]\cap[\theta_{\nu}(B)]=[\theta_{\alpha_{1,i-1}\mu}(A)]\cap[\theta_{\alpha_{1,j-1}\nu}(A)]=[\theta_{\alpha_{1,k}}(A)]\cap[\theta_{\alpha_{1,k^{\prime}}}(A)]\). Thus we have \(k=k^{\prime}\).
It then follows that \[s_{\mu,[\theta_{\mu}(B)]\cap[C]\cap[\theta_{\nu}(B)]}s_{\nu,[\theta_{\mu}(B)]\cap[C]\cap[\theta_{\nu}(B)]}^{*}\] \[=s_{\alpha_{i,n}\alpha^{l}\alpha_{1,k},[\theta_{\alpha_{1,k}}(A)\cap C]}s_{\alpha_{j,n}\alpha^{m}\alpha_{1,k},[\theta_{\alpha_{1,k}}(A)\cap C]}^{*}\] \[=s_{\alpha_{i,n}\alpha^{l}\alpha_{1,k},[\theta_{\alpha_{1,k}}(A)\cap C]}(s_{\alpha_{k+1},[\theta_{\alpha_{1,k+1}}(A)\cap\theta_{\alpha_{k+1}}(C)]}s_{\alpha_{k+1},[\theta_{\alpha_{1,k+1}}(A)\cap\theta_{\alpha_{k+1}}(C)]}^{*})s_{\alpha_{j,n}\alpha^{m}\alpha_{1,k},[\theta_{\alpha_{1,k}}(A)\cap C]}^{*}\] \[\vdots\] \[=s_{\alpha_{i,n}\alpha^{l}\alpha_{1,n},[\theta_{\alpha_{1,n}}(A)\cap\theta_{\alpha_{k+1,n}}(C)]}s_{\alpha_{j,n}\alpha^{m}\alpha_{1,n},[\theta_{\alpha_{1,n}}(A)\cap\theta_{\alpha_{k+1,n}}(C)]}^{*}\] \[=s_{\alpha_{i,n}\alpha^{l+1},[A\cap\theta_{\alpha_{k+1,n}}(C)]}s_{\alpha_{j,n}\alpha^{m+1},[A\cap\theta_{\alpha_{k+1,n}}(C)]}^{*}\] \[=s_{\alpha_{i,n}\alpha^{l+1},[A]}s_{\alpha_{j,n}\alpha^{m+1},[A]}^{*}.\] This means that the hereditary subalgebra \(p_{[B]}C^{*}(\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta,[\mathcal{I}_{\alpha}])p_{[B]}\) is generated by the elements \(s_{\alpha_{i},[\theta_{\alpha_{1,i}}(A)]}\) for \(1\leq i\leq n\). Then the same arguments used in [7, Proposition 6.2] show that \[p_{[B]}C^{*}(\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta,[\mathcal{I}_{\alpha}])p_{[B]}\cong C(\mathbb{T})\otimes M_{n}.\] It then follows that \(C^{*}(\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta,[\mathcal{I}_{\alpha}])\) contains an ideal that is not gauge-invariant. ### Partially defined topological graphs For a locally compact space \(X\), we denote by \(\widetilde{X}\) the one-point compactification of \(X\). **Definition 2.6**.: ([18, Definition 8.2]) A _partially defined_ topological graph is a quadruple \(E=(E^{0},E^{1},d,r)\) where \(E^{0}\) and \(E^{1}\) are locally compact spaces, \(d:E^{1}\to E^{0}\) is a local homeomorphism, and \(r\) is a continuous map from an open subset \(\operatorname{dom}(r)\) of \(E^{1}\) to \(E^{0}\) satisfying that the map \(\tilde{r}:E^{1}\to\widetilde{E^{0}}\) defined by \[\tilde{r}(e)=\left\{\begin{array}{ll}r(e)&\text{if $e\in\operatorname{dom}(r)$,}\\ \infty&\text{if $e\notin\operatorname{dom}(r)$}\end{array}\right.\] is continuous. Let \(E\) be a partially defined topological graph. We recall the construction of the \(C^{*}\)-algebra \(\mathcal{O}(E)\). For \(p\in C(E^{1})\), we define a map \(\left\langle p,p\right\rangle:E^{0}\to[0,\infty]\) by \(\left\langle p,p\right\rangle(v):=\sum_{e\in d^{-1}(v)}|p(e)|^{2}\) for \(v\in E^{0}\). Then, the set \(C_{d}(E^{1}):=\{p\in C(E^{1}):\left\langle p,p\right\rangle\in C_{0}(E^{0})\}\) is a Hilbert \(C_{0}(E^{0})\)-module via \[\left\langle p,q\right\rangle(v)=\sum_{e\in d^{-1}(v)}\overline{p(e)}q(e),\] and \[(pa)(e):=p(e)a(d(e)),\] where \(p,q\in C_{d}(E^{1})\), \(a\in C_{0}(E^{0})\), \(v\in E^{0}\) and \(e\in E^{1}\). Define a left action \(\pi_{r}:C_{0}(E^{0})\to\mathcal{L}(C_{d}(E^{1}))\) by \[(\pi_{r}(a)p)(e)=\left\{\begin{array}{ll}a(r(e))p(e)&\text{if $e\in\operatorname{dom}(r)$,}\\ 0&\text{if $e\notin\operatorname{dom}(r)$}\end{array}\right.\] for \(a\in C_{0}(E^{0})\), \(p\in C_{d}(E^{1})\) and \(e\in E^{1}\). Then, we have a \(C^{*}\)-correspondence \(C_{d}(E^{1})\) over \(C_{0}(E^{0})\).
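Before turning to Toeplitz pairs, we note the smallest instance of this construction (recorded only for orientation; it is the topological-graph version of a single vertex with one loop). Take \(E^{0}=E^{1}=\{v\}\), \(d=r=\operatorname{id}\) and \(\operatorname{dom}(r)=E^{1}\). Then \(C_{0}(E^{0})=\mathbb{C}\) and \(C_{d}(E^{1})=\mathbb{C}\) with \(\left\langle p,q\right\rangle=\overline{p}q\) and \(\pi_{r}(a)p=ap\), so \(C_{d}(E^{1})\) is the standard correspondence \(\mathbb{C}\) over \(\mathbb{C}\). In this case \(E^{0}_{rg}=E^{0}\), a Cuntz-Krieger \(E\)-pair (defined below) is given by a projection \(P=T^{0}(1)\) and a partial isometry \(S=T^{1}(1)\) satisfying \(S^{*}S=SS^{*}=P\), and \(\mathcal{O}(E)\cong C(\mathbb{T})\), the algebra that also appears in Proposition 2.5.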
A _Toeplitz \(E\)-pair_ (cf. [16, Definition 2.2]) on a \(C^{*}\)-algebra \(\mathcal{A}\) is a pair of maps \(T=(T^{0},T^{1})\), where \(T^{0}:C_{0}(E^{0})\to\mathcal{A}\) is a \(*\)-homomorphism and \(T^{1}:C_{d}(E^{1})\to\mathcal{A}\) is a linear map, satisfying 1. \(T^{1}(p)^{*}T^{1}(q)=T^{0}(\left\langle p,q\right\rangle)\) for \(p,q\in C_{d}(E^{1})\), 2. \(T^{0}(a)T^{1}(p)=T^{1}(\pi_{r}(a)p)\) for \(a\in C_{0}(E^{0})\) and \(p\in C_{d}(E^{1})\). By \(C^{*}(T^{0},T^{1})\) we mean the \(C^{*}\)-subalgebra of \(\mathcal{A}\) generated by the Toeplitz \(E\)-pair \((T^{0},T^{1})\). For a Toeplitz \(E\)-pair \((T^{0},T^{1})\), we define a \(*\)-homomorphism \(\Phi:\mathcal{K}(C_{d}(E^{1}))\to\mathcal{A}\) by \(\Phi(\Theta_{p,q})=T^{1}(p)T^{1}(q)^{*}\) for \(p,q\in C_{d}(E^{1})\), where the operator \(\Theta_{p,q}\in\mathcal{K}(C_{d}(E^{1}))\) is defined by \(\Theta_{p,q}(r)=p\left\langle q,r\right\rangle\) for \(r\in C_{d}(E^{1})\). We define the following subsets of \(E^{0}\) (cf. [16, Definition 2.6]): \[E^{0}_{sce} :=\{v\in E^{0}:\exists V\text{ neighborhood of $v$ such that $r^{-1}(V)=\emptyset$}\},\] \[E^{0}_{fin} :=\{v\in E^{0}:\exists V\text{ neighborhood of $v$ such that $r^{-1}(V)$ is compact}\},\] \[E^{0}_{rg} :=E^{0}_{fin}\setminus\overline{E^{0}_{sce}},\] \[E^{0}_{sg} :=E^{0}\setminus E^{0}_{rg}.\] A Toeplitz \(E\)-pair \((T^{0},T^{1})\) is called a _Cuntz-Krieger E-pair_ (cf. [16, Definition 2.9]) if \(T^{0}(f)=\Phi(\pi_{r}(f))\) for all \(f\in C_{0}(E^{0}_{rg})\). We denote by \(\mathcal{O}(E)\) the \(C^{*}\)-algebra generated by the universal Cuntz-Krieger E-pair \((t^{0},t^{1})\). Note that \(\mathcal{O}(E)\) is generated by \(\{t^{0}(a):a\in C_{0}(E^{0})\}\) and \(\{t^{1}(p):p\in C_{d}(E^{1})\}\) and that by the universal property of \(\mathcal{O}(E)\), there exists an action \(\beta:\mathbb{T}\curvearrowright\mathcal{O}(E)\) defined by \(\beta_{z}(t^{0}(a))=t^{0}(a)\) and \(\beta_{z}(t^{1}(p))=zt^{1}(p)\) for \(a\in C_{0}(E^{0})\) and \(p\in C_{d}(E^{1})\) and \(z\in\mathbb{T}\). We set \(d^{0}=r^{0}=id_{E^{0}}\) and \(d^{1}=d,r^{1}=r\). For \(n\geq 2\), we define a space \(E^{n}\) of paths with length \(n\) by \[E^{n}:=\{(e_{1},\ldots,e_{n})\in\prod_{i=1}^{n}E^{1}:d(e_{i})=r(e_{i+1})\ (1\leq i<n)\}\] which we regard as a subspace of the product space \(\prod_{i=1}^{n}E^{1}\). For convenience, we will usually write \(e_{1}\cdots e_{n}\) for \((e_{1},\cdots,e_{n})\in E^{n}\). We define a domain map \(d^{n}:E^{n}\to E^{0}\) by \(d^{n}(e_{1}\cdots e_{n})=d(e_{n})\), an open subset \(\operatorname{dom}(r^{n}):=(\operatorname{dom}(r)\times E^{1}\times\cdots\times E^{1})\cap E^{n}\) of \(E^{n}\) and a range map \(r^{n}:\operatorname{dom}(r^{n})\to E^{0}\) by \(r^{n}(e_{1}\cdots e_{n})=r^{1}(e_{1})\). It is easy to see that \(d^{n}\) is a local homeomorphism, \(r^{n}\) is a continuous map such that \(\widetilde{r^{n}}:E^{n}\to\widetilde{E^{0}}\) defined by \[\widetilde{r^{n}}(e_{1}\cdots e_{n})=\left\{\begin{array}{ll}r^{n}(e_{1}\cdots e_{n})&\text{if $e_{1}\cdots e_{n}\in\operatorname{dom}(r^{n})$,}\\ \infty&\text{if $e_{1}\cdots e_{n}\notin\operatorname{dom}(r^{n})$}\end{array}\right.\] is continuous. Thus, \((E^{0},E^{n},d^{n},r^{n})\) is a partially defined topological graph. Then, we can define a \(C^{*}\)-correspondence \(C_{d^{n}}(E^{n})\) over \(C_{0}(E^{0})\) similarly as \(C_{d}(E^{1})\).
By the same argument used in [16, Proposition 1.27], we have that \(C_{d^{n+m}}(E^{n+m})\cong C_{d^{n}}(E^{n})\otimes C_{d^{m}}(E^{m})\) as \(C^{*}\)-correspondences over \(C_{0}(E^{0})\) for any \(n,m\geq 0\), and that \[C_{d^{n}}(E^{n})=\overline{span}\{\xi_{1}\otimes\xi_{2}\otimes\cdots\otimes\xi_{n}:\xi_{i}\in C_{d}(E^{1})\}\] for \(n\geq 1\). To ease notation, we write \(d,r\) for \(d^{n},r^{n}\). For \(n\geq 2\), we define a linear map \(T^{n}:C_{d}(E^{n})\to C^{*}(T)\) by \[T^{n}(\xi)=T^{1}(\xi_{1})T^{1}(\xi_{2})\cdots T^{1}(\xi_{n})\] for \(\xi=\xi_{1}\otimes\xi_{2}\otimes\cdots\otimes\xi_{n}\in C_{d}(E^{n})\), and a linear map \(\Phi^{n}:\mathcal{K}(C_{d}(E^{n}))\to C^{*}(T)\) by \(\Phi^{n}(\Theta_{\xi,\eta})=T^{n}(\xi)T^{n}(\eta)^{*}\), where \(\Theta_{\xi,\eta}\in\mathcal{K}(C_{d}(E^{n}))\). **Definition 2.7**.: (cf. [17, Definition 5.3]) Let \(E\) be a partially defined topological graph. A path \(e=e_{1}\cdots e_{n}\in E^{n}\) is called a _loop_ if \(r(e)=d(e)\). The vertex \(r(e)=d(e)\) is called the _base point_ of the loop \(e\). A loop \(e=e_{1}\cdots e_{n}\) is said to be _without entrances_ if \(r^{-1}(r(e_{k}))=\{e_{k}\}\) for \(k=1,\cdots,n\). **Definition 2.8**.: (cf. [17, Definition 5.4]) A partially defined topological graph \(E\) is _topologically free_ if the set of base points of loops without entrances has an empty interior. Using the data of \(d:E^{1}\to E^{0}\), \(r:\operatorname{dom}(r)\to E^{0}\) and the maps \(T^{n},\Phi^{n}\) for \(n\geq 1\), we obtain the following Cuntz-Krieger uniqueness theorem for \(C^{*}\)-correspondences arising from partially defined topological graphs in the same way as for topological graphs. We omit its proof. **Theorem 2.9**.: _(cf. [17, Theorem 6.4]) For a partially defined topological graph \(E\), the following are equivalent:_ 1. \(E\) _is topologically free;_ 2. _the natural surjection_ \(\rho:\mathcal{O}(E)\to C^{*}(T)\) _is an isomorphism for every injective Cuntz-Krieger_ \(E\)_-pair_ \(T\)_;_ 3. _any non-zero ideal_ \(I\) _of_ \(\mathcal{O}(E)\) _satisfies_ \(I\cap t^{0}(C_{0}(E^{0}))\neq 0\)_._ ## 3. The Cuntz-Krieger uniqueness theorem We will in this section generalize the Cuntz-Krieger uniqueness theorem [9, Theorem 9.9] to the \(C^{*}\)-algebra of an arbitrary generalized Boolean dynamical system. ### A Cuntz-Krieger uniqueness theorem for \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) We first generalize the Cuntz-Krieger uniqueness theorem [9, Theorem 9.9] to the \(C^{*}\)-algebra of an arbitrary generalized Boolean dynamical system. We consider a partially defined topological graph \(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\) from an arbitrary generalized Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\), and show that \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) and \(\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\) are isomorphic. We then apply the Cuntz-Krieger uniqueness theorem [17, Theorem 6.14] for \(\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\). Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) be a generalized Boolean dynamical system. We first recall some terminology to define a partially defined topological graph associated to \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\). Following [10], we let \(\mathcal{W}^{*}=\{\alpha\in\mathcal{L}^{*}:\mathcal{I}_{\alpha}\neq\{\emptyset\}\}\).
Put \(X_{\alpha}:=\widehat{\mathcal{I}_{\alpha}}\) for each \(\alpha\in\mathcal{W}^{*}\) and equip \(X_{\alpha}\) with the topology generated by \(\{Z(\alpha,A):A\in\mathcal{I}_{\alpha}\}\), where we let \[Z(\alpha,A):=\{\mathcal{F}\in X_{\alpha}:A\in\mathcal{F}\}\] for \(A\in\mathcal{I}_{\alpha}\). We also equip the set \(X_{\emptyset}\cup\{\emptyset\}\ (=\widehat{\mathcal{B}}\cup\{\emptyset\})\) with a suitable topology; if \(\mathcal{B}\) is unital, the topology is such that \(\{\emptyset\}\) is an isolated point. If \(\mathcal{B}\) is not unital, then \(\emptyset\) plays the role of the point at infinity in the one-point compactification of \(X_{\emptyset}\). Let \(\alpha,\beta\in\mathcal{W}^{*}\setminus\{\emptyset\}\) be such that \(\alpha\beta\in\mathcal{W}^{*}\). Define a continuous map \[f_{\alpha[\beta]}:X_{\alpha\beta}\to X_{\alpha}\text{ by }f_{\alpha[\beta]}(\mathcal{F})=\{A\in\mathcal{I}_{\alpha}:\theta_{\beta}(A)\in\mathcal{F}\}\] for \(\mathcal{F}\in X_{\alpha\beta}\), and a continuous map \[f_{\emptyset[\beta]}:X_{\beta}\to X_{\emptyset}\cup\{\emptyset\}\text{ by }f_{\emptyset[\beta]}(\mathcal{F})=\{A\in\mathcal{B}:\theta_{\beta}(A)\in\mathcal{F}\}\] for \(\mathcal{F}\in X_{\beta}\) ([10, Lemma 3.23]). Let \(\alpha,\beta\in\mathcal{W}^{*}\) be such that \(\alpha\beta\in\mathcal{W}^{*}\). We also define an open subspace \[X_{(\alpha)\beta}:=\{\mathcal{F}\in X_{\beta}:\mathcal{F}\cap\mathcal{I}_{\alpha\beta}\neq\emptyset\}\] of \(X_{\beta}\) ([10, Lemma 4.6(vii)]), a continuous map \[g_{(\alpha)\beta}:X_{(\alpha)\beta}\to X_{\alpha\beta}\text{ by }g_{(\alpha)\beta}(\mathcal{F}):=\mathcal{F}\cap\mathcal{I}_{\alpha\beta}\] for each \(\mathcal{F}\in X_{(\alpha)\beta}\) ([10, Lemma 4.6(vi)]), and a continuous map \[h_{[\alpha]\beta}:X_{\alpha\beta}\to X_{(\alpha)\beta}\text{ by }h_{[\alpha]\beta}(\mathcal{F}):=\{A\in\mathcal{I}_{\beta}:B\subseteq A\text{ for some }B\in\mathcal{F}\}\] for \(\mathcal{F}\in X_{\alpha\beta}\) ([10, Lemma 4.8(v)]). Note that \(X_{(\emptyset)\beta}=X_{\beta}\), \(g_{(\emptyset)\beta}\) and \(h_{[\emptyset]\beta}\) are the identity functions on \(X_{\beta}\), and that \(h_{[\alpha]\beta}:X_{\alpha\beta}\to X_{(\alpha)\beta}\) and \(g_{(\alpha)\beta}:X_{(\alpha)\beta}\to X_{\alpha\beta}\) are mutually inverse ([10, Lemma 4.8(iii)]). We now define a partially defined topological graph from \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\). Let \[E^{0}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}:=X_{\emptyset}\text{ and }E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}:=\left\{e^{\alpha}_{\eta}:\alpha\in\mathcal{L},\ \eta\in X_{\alpha}\right\}\] and equip \(E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\) with the topology generated by \(\bigcup_{\alpha\in\mathcal{L}}\{Z^{1}(\alpha,B):B\in\mathcal{I}_{\alpha}\}\), where \[Z^{1}(\alpha,B):=\{e^{\alpha}_{\eta}:\eta\in X_{\alpha},B\in\eta\}.\] Note that \(E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\) is homeomorphic to the disjoint union of the family \(\{X_{\alpha}\}_{\alpha\in\mathcal{L}}\).
Then, define a local homeomorphism \[d:E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\to E^{0}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\text{ by }d(e^{\alpha}_{\eta})=h_{[\alpha]\emptyset}(\eta).\] Put \[\text{dom}(r):=\{e^{\alpha}_{\eta}:\alpha\in\mathcal{L},\ \eta\cap\mathcal{R}_{\alpha}\neq\emptyset\}\subset E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\ \Big{(}=\bigcup_{\alpha\in\mathcal{L},A\in\mathcal{R}_{\alpha}}Z^{1}(\alpha,A)\Big{)},\] which is an open subset of \(E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\), and define a continuous map \[r:\text{dom}(r)\to E^{0}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\text{ by }r(e^{\alpha}_{\eta})=f_{\emptyset[\alpha]}(\eta).\] Then, the map \(\tilde{r}:E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\to E^{0}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\cup\{\emptyset\}\) defined by \[\tilde{r}(e)=\left\{\begin{array}{ll}r(e)&\text{if }e\in\text{dom}(r),\\ \emptyset&\text{if }e\notin\text{dom}(r)\end{array}\right.\] is continuous. Thus, \(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}:=(E^{0}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})},E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})},d,r)\) is a partially defined topological graph (see [10, Proposition 7.1]). To ease notation, we let \(E:=E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\), \(E^{0}:=E^{0}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\) and \(E^{1}:=E^{1}_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\). The following lemmas will be frequently used throughout the paper. **Lemma 3.1**.: _([11, Lemma 3.3]) Let \(\mu=e^{\alpha_{1}}_{\eta_{1}}\cdots e^{\alpha_{n}}_{\eta_{n}}\in E^{n}\), where \(n\geq 1\). Then, we have_ \[r(\mu)=f_{\emptyset[\alpha_{1}\cdots\alpha_{n}]}\big{(}g_{(\alpha_{1}\cdots\alpha_{n-1})\alpha_{n}}(\eta_{n})\big{)}.\] **Lemma 3.2**.: _Let \(\alpha\in\mathcal{L}\). For \(e^{\alpha}_{\eta},e^{\alpha}_{\xi}\in E^{1}\), we have \(d(e^{\alpha}_{\eta})=d(e^{\alpha}_{\xi})\) if and only if \(\eta=\xi.\)_ Proof.: (\(\Leftarrow\)) It is clear. (\(\Rightarrow\)) \(h_{[\alpha]\emptyset}(\eta)=h_{[\alpha]\emptyset}(\xi)\) implies that \(\eta=g_{(\alpha)\emptyset}(h_{[\alpha]\emptyset}(\eta))=g_{(\alpha)\emptyset}(h_{[\alpha]\emptyset}(\xi))=\xi.\) **Proposition 3.3**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) be a generalized Boolean dynamical system and let \(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}=(E^{0},E^{1},d,r)\) be the associated partially defined topological graph. Then_ 1. _there is an isomorphism_ \(\phi:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\to\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\) _that maps_ \(p_{A}\) _to_ \(t^{0}(1_{Z(A)})\) _for_ \(A\in\mathcal{B}\) _and_ \(s_{\alpha,B}\) _to_ \(t^{1}(1_{Z^{1}(\alpha,B)})\) _for_ \(\alpha\in\mathcal{L}\) _and_ \(B\in\mathcal{I}_{\alpha}\)_;_ 2. _if_ \(\psi\) _is a_ \(*\)_-homomorphism defined on_ \(\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\)_, then_ \(\psi\circ t^{0}\) _is injective if and only if_ \(\psi(\phi(p_{A}))\neq 0\) _for all_ \(A\in\mathcal{B}\setminus\{\emptyset\}\)_._ Proof.: (1): Let \((t^{0},t^{1})\) be the universal Cuntz-Krieger \(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\)-pair in a \(C^{*}\)-algebra \(\mathcal{X}\).
We claim that \[\{t^{0}(1_{Z(A)}),t^{1}(1_{Z^{1}(\alpha,B)}):A\in\mathcal{B},\alpha\in \mathcal{L}\text{ and }B\in\mathcal{I}_{\alpha}\}\] is a \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\)-representation in \(\mathcal{X}\). Let \(A,A^{\prime}\in\mathcal{B}\), \(\alpha,\alpha^{\prime}\in\mathcal{L}\), \(B\in\mathcal{I}_{\alpha}\) and \(B^{\prime}\in\mathcal{I}_{\alpha^{\prime}}\). Then, we check the following; 1. It is easy to check that \(t^{0}(1_{Z(A)})t^{0}(1_{Z(A^{\prime})})=t^{0}(1_{Z(A\cap A^{\prime})})\) and \(t^{0}(1_{Z(A\cup A^{\prime})})=t^{0}(1_{Z(A)})+t^{0}(1_{Z(A^{\prime})})-t^{0}(1_ {Z(A\cap A^{\prime})})\). 2. For \(e_{\eta}^{\beta}\in E^{1}\), we compute \[(\pi_{r}(1_{Z(A)})1_{Z^{1}(\alpha,B)})(e_{\eta}^{\beta})\] \[=\begin{cases}1_{Z(A)}(r(e_{\eta}^{\beta}))1_{Z^{1}(\alpha,B)}(e_{ \eta}^{\beta})&\text{if $e_{\eta}^{\beta}\in\operatorname{dom}(r)$,}\\ 0&\text{if $e_{\eta}^{\beta}\notin\operatorname{dom}(r)$}\end{cases}\] \[=\begin{cases}1_{Z(A)}(f_{\emptyset[\beta]}(\eta))&\text{if $e_{\eta}^{ \beta}\in\operatorname{dom}(r)$; $\beta=\alpha$ and $B\in\eta$,}\\ 0&\text{otherwise}\end{cases}\] \[=\begin{cases}1&\text{if $e_{\eta}^{\beta}\in\operatorname{dom}(r)$; $\beta=\alpha$ and $B\in\eta$ and $\theta_{\alpha}(A)\in\eta$,}\\ 0&\text{otherwise}.\end{cases}\] Since \(B,\theta_{\alpha}(A)\in\eta\iff B\cap\theta_{\alpha}(A)\in\eta\), we have \(\pi_{r}(1_{Z(A)})1_{Z^{1}(\alpha,B)}=1_{Z^{1}(\alpha,B\cap\theta_{\alpha}(A))}\). On the other hand, for \(e_{\eta}^{\beta}\in E^{1}\), \[(1_{Z^{1}(\alpha,B)}1_{Z(\theta_{\alpha}(A))})(e_{\eta}^{\beta}) =1_{Z^{1}(\alpha,B)}(e_{\eta}^{\beta})1_{Z(\theta_{\alpha}(A))}( d(e_{\eta}^{\beta}))\] \[=\begin{cases}1_{Z(\theta_{\alpha}(A))}(h_{[\beta]\emptyset}(\eta ))&\text{if $\beta=\alpha$ and $B\in\eta$,}\\ 0&\text{otherwise}\end{cases}\] \[=\begin{cases}1&\text{if $\beta=\alpha,B\in\eta$ and $\theta_{\alpha}(A)\in h_{[\alpha]\emptyset}(\eta)$,}\\ 0&\text{otherwise}\end{cases}\] \[=\begin{cases}1&\text{if $\beta=\alpha$ and $B\cap\theta_{\alpha}(A)\in\eta$,}\\ 0&\text{otherwise},\end{cases}\] where the last equality follows from the fact that \(\theta_{\alpha}(A)\in h_{[\alpha]\emptyset}(\eta)\iff\theta_{\alpha}(A)\in\eta\), and that \(B,\theta_{\alpha}(A)\in\eta\iff B\cap\theta_{\alpha}(A)\in\eta\). Thus, we have \(1_{Z^{1}(\alpha,B)}1_{Z(\theta_{\alpha}(A))}=1_{Z^{1}(\alpha,B\cap\theta_{ \alpha}(A))}\). It then follows that \[t^{0}(1_{Z(A)})t^{1}(1_{Z^{1}(\alpha,B)}) =t^{1}(\pi_{r}(1_{Z(A)})1_{Z^{1}(\alpha,B)})=t^{1}(1_{Z^{1}( \alpha,B\cap\theta_{\alpha}(A))})\] \[=t^{1}(1_{Z^{1}(\alpha,B)}1_{Z(\theta_{\alpha}(A))})=t^{1}(1_{Z^{1}( \alpha,B)})t^{0}(1_{Z(\theta_{\alpha}(A))}).\] 3. For \(\eta\in E^{0}\), we first see that \[\left\langle 1_{Z^{1}(\alpha,B)},1_{Z^{1}(\alpha^{\prime},B^{ \prime})}\right\rangle(\eta)\] \[=\sum_{e_{\chi}^{\beta}\in E^{1};d(e_{\chi}^{\beta})=\eta}1_{Z^{1 }(\alpha,B)}(e_{\chi}^{\beta})1_{Z^{1}(\alpha^{\prime},B^{\prime})}(e_{\chi}^ {\beta})\] \[=\begin{cases}1_{Z^{1}(\alpha,B)}(e_{\chi}^{\beta})1_{Z^{1}( \alpha^{\prime},B^{\prime})}(e_{\chi}^{\beta})&\text{if $\alpha=\alpha^{\prime}=\beta$, and $\chi=\eta\cap\mathcal{I}_{\alpha}$,}\\ 0&\text{otherwise}\end{cases}\] \[=\begin{cases}1&\text{if $\alpha=\alpha^{\prime}=\beta$, $B,B^{\prime}\in\chi$ and $\chi=\eta\cap\mathcal{I}_{\alpha}$,}\\ 0&\text{otherwise}\end{cases}\] where we use Lemma 3.2 for the second equality. 
Since \(B,B^{\prime}\in\chi\iff B\cap B^{\prime}\in\chi\iff B\cap B^{\prime}\in\eta\), we have \(\left\langle 1_{Z^{1}(\alpha,B)},1_{Z^{1}(\alpha^{\prime},B^{\prime})}\right\rangle= \delta_{\alpha,\alpha^{\prime}}1_{Z(B\cap B^{\prime})}\). Thus, it follows that \[t^{1}(1_{Z^{1}(\alpha,B)})^{*}t^{1}(1_{Z^{1}(\alpha^{\prime},B^{\prime})})=t^{ 0}\big{(}\left\langle 1_{Z^{1}(\alpha,B)},1_{Z^{1}(\alpha^{\prime},B^{\prime})} \right\rangle\big{)}=\delta_{\alpha,\alpha^{\prime}}t^{0}(1_{Z(B\cap B^{ \prime})}).\] * Lastly, for the last relation, we first prove that \[\pi_{r}(1_{Z(A)})=\sum_{\alpha\in\Delta_{A}}\Theta_{1_{Z^{1}(\alpha,\theta_{ \alpha}(A))},1_{Z^{1}(\alpha,\theta_{\alpha}(A))}}\] for \(A\in\mathcal{B}_{reg}\). For \(p\in C_{d}(E^{1})\) and \(e\in E^{1}\), we see that \[\Big{(}\sum_{\alpha\in\Delta_{A}}\Theta_{1_{Z^{1}(\alpha,\theta_{ \alpha}(A))},1_{Z^{1}(\alpha,\theta_{\alpha}(A))}}\Big{)}(p)(e)\] \[=\sum_{\alpha\in\Delta_{A}}\Big{(}1_{Z^{1}(\alpha,\theta_{\alpha} (A))}\left\langle 1_{Z^{1}(\alpha,\theta_{\alpha}(A))},p\right\rangle\Big{)}(e)\] \[=\begin{cases}1_{Z^{1}(\alpha,\theta_{\alpha}(A))}(e_{\eta}^{ \alpha})\left\langle 1_{Z^{1}(\alpha,\theta_{\alpha}(A))},p\right\rangle(d(e_{\eta}^{ \alpha}))&\text{if $e=e_{\eta}^{\alpha}$ for $\alpha\in\Delta_{A}$},\\ 0&\text{otherwise}\end{cases}\] \[=\begin{cases}\sum_{d(e^{\prime})=d(e_{\eta}^{\alpha})}1_{Z^{1}( \alpha,\theta_{\alpha}(A))}(e^{\prime})p(e^{\prime})&\text{if $e=e_{\eta}^{\alpha}$ for $\alpha\in\Delta_{A}$ and $\theta_{\alpha}(A)\in\eta$},\\ 0&\text{otherwise}\end{cases}\] \[=\begin{cases}p(e)&\text{if $e=e_{\eta}^{\alpha}$ for $\alpha\in\Delta_{A}$ and $\theta_{\alpha}(A)\in\eta$},\\ 0&\text{otherwise},\end{cases}\] where the last equality follows by Lemma 3.2. Also, for \(p\in C_{d}(E^{1})\) and \(e\in E^{1}\), we observe that \[(\pi_{r}(1_{Z(A)})p)(e)\] \[=\begin{cases}1_{Z(A)}(r(e))p(e)&\text{if $e\in\text{dom}(r)$},\\ 0&\text{otherwise}\end{cases}\] \[=\begin{cases}1_{Z(A)}(f_{\emptyset[\alpha]}(\eta))p(e_{\eta}^{ \alpha})&\text{if $e\in\text{dom}(r)$};e=e_{\eta}^{\alpha}$ for $\alpha\in\Delta_{A}$},\\ 0&\text{otherwise}\end{cases}\] \[=\begin{cases}p(e)&\text{if $e\in\text{dom}(r)$};e=e_{\eta}^{\alpha}$ for $\alpha\in\Delta_{A}$ and $\theta_{\alpha}(A)\in\eta$},\\ 0&\text{otherwise}.\end{cases}\] Thus, we have \(\pi_{r}(1_{Z(A)})=\sum_{\alpha\in\Delta_{A}}\Theta_{1_{Z^{1}(\alpha,\theta_{ \alpha}(A))},1_{Z^{1}(\alpha,\theta_{\alpha}(A))}}\) for \(A\in\mathcal{B}_{reg}\). Now, let \(A\in\mathcal{B}_{reg}\) and choose \(\xi\in Z(A)\). Then, \(\xi\in E_{rg}^{0}\) by [10, Lemma 7.9]. Thus, we have \(1_{Z(A)}\in C_{0}(E_{rg}^{0})\). It thus follows that \[t^{0}(1_{Z(A)}) =\Phi(\pi_{r}(1_{Z(A)}))\] \[=\Phi\Big{(}\sum_{\alpha\in\Delta_{A}}\Theta_{1_{Z^{1}(\alpha, \theta_{\alpha}(A))},1_{Z^{1}(\alpha,\theta_{\alpha}(A))}}\Big{)}\] \[=\sum_{\alpha\in\Delta_{A}}t^{1}(1_{Z^{1}(\alpha,\theta_{\alpha}( A))})t^{1}(1_{Z^{1}(\alpha,\theta_{\alpha}(A))})^{*}.\] Thus, there is a \(*\)-homomorphism \[\phi:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\to C^{*}(t^{0}, t^{1})\] given by \[\phi(p_{A})=t^{0}(1_{Z(A)})\text{ and }\phi(s_{\alpha,B})=t^{1}(1_{Z^{1}( \alpha,B)})\] for each \(A\in\mathcal{B},\alpha\in\mathcal{L}\) and \(B\in\mathcal{I}_{\alpha}\). Then for \(A\neq\emptyset\), we have \(Z(A)\neq\emptyset\), and hence, \(t^{0}(1_{Z(A)})\neq 0\) for \(A\neq\emptyset\) by [16, Proposition 3.6]. Hence, by the gauge-invariant uniqueness theorem ([8, Corollary 6.2]), we have \(\phi\) is injective. 
Since \(\mathcal{O}(E)\) is generated by \(\{t^{0}(a),\;t^{1}(p):a\in C_{0}(E^{0}),\;p\in C_{d}(E^{1})\}\), and since \(\{1_{Z(A)}:A\in\mathcal{B}\}\) generates \(C_{0}(E^{0})\) and \(\{1_{Z^{1}(\alpha,B)}:\alpha\in\mathcal{L},\;B\in\mathcal{I}_{\alpha}\}\) generates \(C_{d}(E^{1})\), we have that \(\phi\) is surjective. Hence, \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\cong\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\). (2): Let \(\psi\) be a \(*\)-homomorphism defined on \(\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\). Then the result easily follows since \(\psi(\phi(p_{A}))=\psi(t^{0}(1_{Z(A)}))\) for \(A\in\mathcal{B}\). Let \(\xi\in X_{\emptyset}\) be such that \(\xi\cap\mathcal{I}_{\alpha}\neq\emptyset\) for some \(\alpha=\alpha_{1}\alpha_{2}\cdots\alpha_{n}\in\mathcal{W}^{*}\). Define \[\xi_{n}:=\xi\cap\mathcal{I}_{\alpha_{n}},\] \[\xi_{i}:=f_{\emptyset[\alpha_{i+1}]}(\xi_{i+1})\cap\mathcal{I}_{\alpha_{i}}\] for \(i=1,\cdots,n-1\). Then we have a path \(e_{\xi_{1}}^{\alpha_{1}}\cdots e_{\xi_{n}}^{\alpha_{n}}\) in \(E\) by [11, Lemma 3.14]. We write \(e(\alpha,\xi)\) for this path. Note then that \[d(e(\alpha,\xi))=h_{[\alpha_{n}]\emptyset}(\xi_{n})=h_{[\alpha_{n}]\emptyset}(\xi\cap\mathcal{I}_{\alpha_{n}})=h_{[\alpha_{n}]\emptyset}(g_{(\alpha_{n})\emptyset}(\xi))=\xi\] and \[r(e(\alpha,\xi))=f_{\emptyset[\alpha_{1,n}]}(g_{(\alpha_{1,n-1})\alpha_{n}}(\xi_{n}))=f_{\emptyset[\alpha]}(\xi\cap\mathcal{I}_{\alpha_{n}}\cap\mathcal{I}_{\alpha}).\] **Lemma 3.4**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) be a generalized Boolean dynamical system and let \((\beta,A)\) be a cycle, where \(\beta=\beta_{1}\cdots\beta_{n}\in\mathcal{L}^{*}\). Then, for each \(\xi\in Z(A)\), the path \(e(\beta,\xi)=e_{\xi_{1}}^{\beta_{1}}\cdots e_{\xi_{n}}^{\beta_{n}}\) is a loop at \(\xi\)._ Proof.: We show that \(f_{\emptyset[\beta]}(\xi\cap\mathcal{I}_{\beta_{n}}\cap\mathcal{I}_{\beta})=\xi\). Choose \(B\in\xi\). Since \(A\in\xi\) and \((\beta,A)\) is a cycle, we have \(\theta_{\beta}(A\cap B)=A\cap B\in\xi\), and hence, \(\theta_{\beta}(B)\in\xi\). It is clear that \(\theta_{\beta}(B)\in\mathcal{I}_{\beta_{n}}\cap\mathcal{I}_{\beta}\). So, \(B\in f_{\emptyset[\beta]}(\xi\cap\mathcal{I}_{\beta_{n}}\cap\mathcal{I}_{\beta})\). Thus, \(\xi\subseteq f_{\emptyset[\beta]}(\xi\cap\mathcal{I}_{\beta_{n}}\cap\mathcal{I}_{\beta})\). Then the equality follows since both are ultrafilters. **Proposition 3.5**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) be a generalized Boolean dynamical system and let \(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}=(E^{0},E^{1},d,r)\) be the associated partially defined topological graph. Then \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (L) if and only if \(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\) is topologically free._ Proof.: \((\Rightarrow)\) Suppose that \(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\) is not topologically free. It then follows from the Baire category theorem that there is a positive integer \(n\) and \(A\in\mathcal{B}\) such that \(Z(A)\) is nonempty and each element of \(Z(A)\) is the base point of a simple loop of length \(n\) with no entrances. Let \(\eta\in Z(A)\). Then there is a simple loop \(\mu:=e_{\eta_{1}}^{\beta_{1}}\cdots e_{\eta_{n}}^{\beta_{n}}\) such that \(r(\mu)=d(\mu)=\eta\). Put \(\beta:=\beta_{1}\cdots\beta_{n}\). We claim that \((\beta,A\cap\theta_{\beta}(A))\) is a cycle with no exit.
Let \(B\subseteq A\cap\theta_{\beta}(A)\). If \(B\setminus\theta_{\beta}(B)\neq\emptyset\), choose \(\xi\in\widehat{\mathcal{B}}\) such that \(B\setminus\theta_{\beta}(B)\in\xi\). Then, \(B,A,\theta_{\beta}(A)\in\xi\) and \(\theta_{\beta}(B)\notin\xi\). Consider the path \(e(\beta,\xi)\). Then \(d(e(\beta,\xi))=\xi\in Z(A)\). Also, since \(\theta_{\beta}(A)\in\xi\cap\mathcal{I}_{\beta_{n}}\cap\mathcal{I}_{\beta}\), we have \(A\in r(e(\beta,\xi))(=f_{\emptyset[\beta]}(\xi\cap\mathcal{I}_{\beta_{n}}\cap \mathcal{I}_{\beta}))\), and hence, \(r(e(\beta,\xi))\in Z(A)\). Since each element in \(Z(A)\) is a base point of a simple loop of length \(n\) with no entrances, we must have that \(d(e(\beta,\xi))=r(e(\beta,\xi))(=\xi)\). Hence, \(B\in r(e(\beta,\xi))\). It means that \(\theta_{\beta}(B)\in\xi\), a contradiction. So, \(B\setminus\theta_{\beta}(B)=\emptyset\). Thus, \(B\subseteq\theta_{\beta}(B)\). If \(\theta_{\beta}(B)\setminus B\neq\emptyset\), choose \(\xi\in\widehat{\mathcal{B}}\) such that \(\theta_{\beta}(B)\setminus B\in\xi\). Then, \(\theta_{\beta}(B)\in\xi\) and \(B\notin\xi\). Consider again the path \(e(\beta,\xi)\). Since \(\theta_{\beta}(B)\in\xi\), we have \(r(e(\beta,\xi))\in Z(B)\subseteq Z(A)\). So, \(r(e(\beta,\xi))\) is the base point of a loop of length \(n\) with no entrances. It means that \(r(e(\beta,\xi))\) is the range of a unique loop of length \(n\). Since \(e(\beta,\xi)\) is a path of length \(n\) with range \(r(e(\beta,\xi))\) and domain \(\xi\), it follows that \(\xi=r(e(\beta,\xi))\), and hence, \(B\in\xi\). This is not the case. So, \(\theta_{\beta}(B)\setminus B=\emptyset\). Thus, \(B=\theta_{\beta}(B)\). So, \((\beta,A\cap\theta_{\beta}(A))\) is a cycle. Suppose \(k\in\{1,2,\cdots,n\}\), \(\emptyset\neq B\subseteq\theta_{\beta_{1,k}}(A\cap\theta_{\beta}(A))\) and \(\alpha\in\Delta_{B}\). Then \(\theta_{\alpha}(B)\neq\emptyset\), so there is a \(\zeta\in\widehat{\mathcal{B}}\) such that \(\theta_{\alpha}(B)\in\zeta\). Since \(\theta_{\alpha}(B)\subseteq\theta_{\beta_{1,k}\alpha}(A\cap\theta_{\beta}(A))\), we have \(\theta_{\beta_{1,k}\alpha}(A\cap\theta_{\beta}(A))\in\zeta\). So, \(\zeta\cap\mathcal{I}_{\beta_{1,k}\alpha}\neq\emptyset\), thus we have the path \(e(\beta_{1,k}\alpha,\zeta)\). Then, \(r(e(\beta_{1,k}\alpha,\zeta))\in Z(A\cap\theta_{\beta}(A))\subset Z(A)\). Hence, \(\chi:=r(e(\beta_{1,k}\alpha,\zeta))\) is a base point of a simple loop of length \(n\) with no entrances. On the other hand, since \((\beta,A\cap\theta_{\beta}(A))\) is a cycle, \(\chi\) admits a loop \(e(\beta,\chi)\) by Lemma 3.4. That means that \[e^{\beta_{1}}_{\chi_{1}}\cdots e^{\beta_{k}}_{\chi_{k}}e^{\beta_{k+1}}_{\chi_{ k+1}}\] is the unique path in \(E\) of length \(k+1\) with range \(\chi\). Since \(e(\beta_{1,k}\alpha,\zeta)\) is also a path in \(E\) of length \(k+1\) with range \(\chi\), it follows that \(\chi_{i}=\zeta_{i}\) for \(i=1,\cdots,k+1\) and \(\alpha=\beta_{k+1}\). This shows that the cycle \((\beta,A\cap\theta_{\beta}(A))\) has no exit. We thus have that \((\mathcal{B},\mathcal{L},\theta)\) does not satisfy Condition (L). \((\Leftarrow)\) Assume that \((\mathcal{B},\mathcal{L},\theta)\) does not satisfy Condition (L). There is then a cycle \((\beta,A)\) with no exit, where \(\beta=\beta_{1}\cdots\beta_{n}\). We claim that each element of \(Z(A)\) is the base point of a loop without entrances. Suppose \(\xi\in Z(A)\). Then by Lemma 3.4(i), we have a loop \(e(\beta,\xi)=e^{\beta_{1}}_{\xi_{1}}\cdots e^{\beta_{n}}_{\xi_{n}}\) at \(\xi\). 
If the loop \(e(\beta,\xi)\) has an entrance, then there exist \(k\in\{1,2,\cdots,n\}\) and \(e^{\alpha}_{\zeta}\in E^{1}\) (\(\alpha\in\mathcal{L},\zeta\in X_{\alpha}\)) such that \(e^{\alpha}_{\zeta}\neq e^{\beta_{k}}_{\xi_{k}}\) and \(r(e^{\alpha}_{\zeta})=r(e^{\beta_{k}}_{\xi_{k}})\). Here, we claim that if \(\alpha=\beta_{k}\), then \(\zeta=\xi_{k}\). Since \(r(e^{\beta_{k}}_{\zeta})=r(e^{\beta_{k}}_{\xi_{k}})\), we have \(r(e^{\beta_{1}}_{\xi_{1}}\cdots e^{\beta_{k-1}}_{\xi_{k-1}}e^{\beta_{k}}_{\zeta})=r(e^{\beta_{1}}_{\xi_{1}}\cdots e^{\beta_{k-1}}_{\xi_{k-1}}e^{\beta_{k}}_{\xi_{k}})\), which means that \[f_{\emptyset[\beta_{1,k}]}(g_{(\beta_{1,k-1})\beta_{k}}(\zeta))=f_{\emptyset[\beta_{1,k}]}(g_{(\beta_{1,k-1})\beta_{k}}(\xi_{k})). \tag{3}\] We first show that for every \(B\subseteq\theta_{\beta_{1}\cdots\beta_{k}}(A)\), if \(B\in\xi_{k}\), then \(B\in\zeta\). If \(B\in\xi_{k}=f_{\emptyset[\beta_{k+1}]}(\xi_{k+1})\cap\mathcal{I}_{\beta_{k}}\) for \(B\subseteq\theta_{\beta_{1}\cdots\beta_{k}}(A)\), then \[\theta_{\beta_{k+1}}(B)\in\xi_{k+1}=f_{\emptyset[\beta_{k+2}]}(\xi_{k+2})\cap\mathcal{I}_{\beta_{k+1}},\] and then, \[\theta_{\beta_{k+1}\beta_{k+2}}(B)\in\xi_{k+2}=f_{\emptyset[\beta_{k+3}]}(\xi_{k+3})\cap\mathcal{I}_{\beta_{k+2}}.\] Continuing this process, one has that \(\theta_{\beta_{k+1,n}}(B)\in\xi\). Since \(\xi=f_{\emptyset[\beta_{1,k}]}(g_{(\beta_{1,k-1})\beta_{k}}(\zeta))\) and \((\beta_{k+1,n}\beta_{1,k},\theta_{\beta_{1}\cdots\beta_{k}}(A))\) is a cycle, we have \[B=\theta_{\beta_{k+1,n}\beta_{1,k}}(B)=\theta_{\beta_{1,k}}(\theta_{\beta_{k+1,n}}(B))\in\zeta.\] Now, if \(\zeta\neq\xi_{k}\), then there is \(B\in\mathcal{I}_{\beta_{k}}\) such that \(B\in\xi_{k}\) and \(B\notin\zeta\). So, we have \(B\cap\theta_{\beta_{1,k}}(A)\in\xi_{k}\). Since \(B\cap\theta_{\beta_{1,k}}(A)\subset\theta_{\beta_{1,k}}(A)\), we have \(B\cap\theta_{\beta_{1,k}}(A)\in\zeta\). It then follows that \(B\in\zeta\), a contradiction. Thus, \(\zeta=\xi_{k}\) if \(\alpha=\beta_{k}\). Hence, if \(e^{\alpha}_{\zeta}\neq e^{\beta_{k}}_{\xi_{k}}\), we have \(\alpha\neq\beta_{k}\). Since \(\theta_{\beta_{1,k}}(A)\in\xi_{k}\) and \[f_{\emptyset[\beta_{1,k-1}\alpha]}(g_{(\beta_{1,k-1})\alpha}(\zeta))=f_{\emptyset[\beta_{1,k}]}(g_{(\beta_{1,k-1})\beta_{k}}(\xi_{k})),\] we have \(A\in f_{\emptyset[\beta_{1,k-1}\alpha]}(g_{(\beta_{1,k-1})\alpha}(\zeta))\). It means that \(\theta_{\beta_{1,k-1}\alpha}(A)\in\zeta\cap\mathcal{I}_{\beta_{1,k-1}\alpha}\). So, \(\theta_{\beta_{1,k-1}\alpha}(A)\neq\emptyset\). Thus, \(\alpha\in\Delta_{\theta_{\beta_{1,k-1}}(A)}\). This contradicts the fact that \((\beta,A)\) is a cycle with no exits. So, the loop \(e(\beta,\xi)\) has no entrances. We thus have that each element of \(Z(A)\) is the base point of a loop without entrances, and hence that \(E\) is not topologically free. We are ready to state and prove our Cuntz-Krieger uniqueness theorem for the \(C^{*}\)-algebra of a generalized Boolean dynamical system. **Theorem 3.6**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) be a generalized Boolean dynamical system. Then the following are equivalent._ 1. \((\mathcal{B},\mathcal{L},\theta)\) _satisfies Condition (L)._ 2. _If_ \(C\) _is a_ \(C^{*}\)_-algebra and_ \(\rho:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\to C\) _is a_ \(*\)_-homomorphism, then_ \(\rho\) _is injective if and only if_ \(\rho(p_{A})\neq 0\) _for each_ \(A\in\mathcal{B}\setminus\{\emptyset\}\)_._ 3.
_If_ \(C\) _is a_ \(C^{*}\)_-algebra and_ \(\rho:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\to C\) _is a_ \(*\)_-homomorphism, then_ \(\rho\) _is injective if and only if_ \(\rho(s_{\alpha,A}s_{\alpha,A}^{*})\neq 0\) _for all_ \(\alpha\in\mathcal{L}^{*}\) _and all_ \(A\in\mathcal{I}_{\alpha}\setminus\{\emptyset\}\)_._ 4. _Every non-zero ideal of_ \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) _contains_ \(p_{A}\) _for some_ \(A\in\mathcal{B}\setminus\{\emptyset\}\)_._ Proof.: (1) \(\Longleftrightarrow\) (2): Let \(\phi:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\to\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\) be the isomorphism from Proposition 3.3. Then \(\psi\mapsto\psi\circ\phi\) is a bijection between the class of \(*\)-homomorphisms defined on \(\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\) and the class of \(*\)-homomorphisms defined on \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\), and by Proposition 3.3(2), \(\psi\circ t^{0}\) is injective if and only if \(\psi(\phi(p_{A}))\neq 0\) for all \(A\in\mathcal{B}\setminus\{\emptyset\}\). The map \(\psi\mapsto(\psi\circ t^{0},\psi\circ t^{1})\) is a bijection between the class of \(*\)-homomorphisms defined on \(\mathcal{O}(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})})\) such that \(\psi\circ t^{0}\) is injective and the class of injective Cuntz-Krieger \(E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})}\)-pairs. The result therefore follows from Proposition 3.5 and [17, Theorem 6.14]. (2) \(\Longrightarrow\) (3): The "only if" part is clear. To prove the "if" part, assume \(\rho(s_{\alpha,A}s_{\alpha,A}^{*})\neq 0\) for all \(\alpha\in\mathcal{L}^{*}\) and all \(\emptyset\neq A\in\mathcal{I}_{\alpha}\). Taking \(\alpha=\emptyset\), we have \(\rho(p_{A})=\rho(s_{\emptyset,A}s_{\emptyset,A}^{*})\neq 0\) for all \(\emptyset\neq A\in\mathcal{I}_{\emptyset}(=\mathcal{B})\). Thus, by (2), \(\rho\) is injective. (3) \(\Longrightarrow\) (2): The "only if" part is trivial. To prove the "if" part, suppose \(\rho(p_{A})\neq 0\) for each \(A\in\mathcal{B}\setminus\{\emptyset\}\). We show that \(\rho(s_{\alpha,A}s_{\alpha,A}^{*})\neq 0\) for all \(\alpha\in\mathcal{L}^{*}\) and all \(A\in\mathcal{I}_{\alpha}\setminus\{\emptyset\}\). Assume to the contrary that \(\rho(s_{\alpha,A}s_{\alpha,A}^{*})=0\) for some \(\alpha\in\mathcal{L}^{*}\) and some \(\emptyset\neq A\in\mathcal{I}_{\alpha}\). Then \[\rho(p_{A})=\rho(s_{\alpha,A}^{*}s_{\alpha,A}s_{\alpha,A}^{*}s_{\alpha,A})=\rho(s_{\alpha,A}^{*})\rho(s_{\alpha,A}s_{\alpha,A}^{*})\rho(s_{\alpha,A})=0,\] a contradiction. So, it follows by (3) that \(\rho\) is injective. (2) \(\Longrightarrow\) (4): Let \(I\) be a non-zero ideal of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\). Then the quotient map from \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) to \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})/I\) is a non-injective \(*\)-homomorphism. It therefore follows from (2) that \(p_{A}\in I\) for some \(A\in\mathcal{B}\setminus\{\emptyset\}\). (4) \(\Longrightarrow\) (2): Let \(\rho:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\to C\) be a \(*\)-homomorphism. It is obvious that if \(\rho\) is injective, then \(\rho(p_{A})\neq 0\) for each \(A\in\mathcal{B}\setminus\{\emptyset\}\).
Conversely, if \(\rho(p_{A})\neq 0\) for each \(A\in\mathcal{B}\setminus\{\emptyset\}\), then it follows from (4) that \(\ker\rho=\{0\}\) and thus that \(\rho\) is injective. As a corollary, we get the following strengthening of [9, Theorem 9.9] and [7, Theorem 2.5]. **Corollary 3.7**.: _Let \((\mathcal{B},\mathcal{L},\theta)\) be a Boolean dynamical system. Then the following three conditions are equivalent._ 1. \((\mathcal{B},\mathcal{L},\theta)\) _satisfies Condition (L)._ 2. _A_ \(*\)_-homomorphism_ \(\pi:C^{*}(\mathcal{B},\mathcal{L},\theta)\to B\) _is injective if and only if_ \(\pi(p_{A})\neq 0\) _for all_ \(\emptyset\neq A\in\mathcal{B}\)_._ 3. _A_ \(*\)_-homomorphism_ \(\pi:C^{*}(\mathcal{B},\mathcal{L},\theta)\to B\) _is injective if and only if_ \(\pi(s_{\alpha}p_{A}s_{\alpha}^{*})\neq 0\) _for every_ \(\alpha\in\mathcal{L}^{*}\) _and every_ \(\emptyset\neq A\in\mathcal{B}\) _with_ \(A\subseteq\mathcal{R}_{\alpha}\)_._ Proof.: It follows from Theorem 3.6 and [7, Example 4.1]. ### A Cuntz-Krieger uniqueness theorem for \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) We now prove a Cuntz-Krieger uniqueness theorem for the \(C^{*}\)-algebras of relative generalized Boolean dynamical systems. Given a relative generalized Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\), it is shown in [8] that there is a generalized Boolean dynamical system \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\) such that \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) is isomorphic to \(C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\). We recall the construction of \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\) and the isomorphism between \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) and \(C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\). Then by applying the Cuntz-Krieger uniqueness theorem (Theorem 3.6) to \(C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\), we will obtain our uniqueness theorem. Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) be a relative generalized Boolean dynamical system and let \[\mathcal{B}^{\prime}=\{(A,[B]_{\mathcal{J}}):A,B\in\mathcal{B}\text{ and }[A]_{\mathcal{B}_{reg}}=[B]_{\mathcal{B}_{reg}}\}.\] Define \[(A_{1},[B_{1}]_{\mathcal{J}})\cup(A_{2},[B_{2}]_{\mathcal{J}}):=(A_{1}\cup A_{2},[B_{1}\cup B_{2}]_{\mathcal{J}}),\] \[(A_{1},[B_{1}]_{\mathcal{J}})\cap(A_{2},[B_{2}]_{\mathcal{J}}):=(A_{1}\cap A_{2},[B_{1}\cap B_{2}]_{\mathcal{J}}),\] \[(A_{1},[B_{1}]_{\mathcal{J}})\setminus(A_{2},[B_{2}]_{\mathcal{J}}):=(A_{1}\setminus A_{2},[B_{1}\setminus B_{2}]_{\mathcal{J}}).\] Then \(\mathcal{B}^{\prime}\) is a Boolean algebra with the least element \(\emptyset:=(\emptyset,[\emptyset]_{\mathcal{J}})\). For \(\alpha\in\mathcal{L}\), if we define \(\theta^{\prime}_{\alpha}:\mathcal{B}^{\prime}\to\mathcal{B}^{\prime}\) by \[\theta^{\prime}_{\alpha}(A,[B]_{\mathcal{J}}):=(\theta_{\alpha}(A),[\theta_{\alpha}(A)]_{\mathcal{J}}),\] then \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime})\) is a Boolean dynamical system.
Note that the set of regular elements of \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime})\) is \[\mathcal{B}^{\prime}_{reg}=\{(A,\emptyset):A\in\mathcal{B}_{reg}\}.\] By [8, Proposition 6.4], we see that the map \(\phi:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\to C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime}),\) where \(\mathcal{I}_{\alpha}^{\prime}:=\{(A,[A]_{\mathcal{J}}):A\in\mathcal{I}_{\alpha}\}\) for \(\alpha\in\mathcal{L}\), given by \[\phi(p_{A})=p_{(A,[A]_{\mathcal{J}})}\text{ and }\phi(s_{\alpha,B})=s_{\alpha,(B,[B]_{\mathcal{J}})}\] for all \(A\in\mathcal{B}\), \(\alpha\in\mathcal{L}\) and \(B\in\mathcal{I}_{\alpha}\) is an isomorphism with the inverse map \(\rho:C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\to C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) given by \[\rho(p_{(A,[B]_{\mathcal{J}})})=p_{A}+p_{C}-\sum_{\alpha\in\Delta_{C}}s_{\alpha,\theta_{\alpha}(C)}s_{\alpha,\theta_{\alpha}(C)}^{*}-p_{D}+\sum_{\alpha\in\Delta_{D}}s_{\alpha,\theta_{\alpha}(D)}s_{\alpha,\theta_{\alpha}(D)}^{*},\] where \(C,D\in\mathcal{B}_{reg}\) are such that \(A\cup C=B\cup D\) and \(A\cap C=B\cap D=\emptyset\), and \[\rho(s_{\alpha,(A,[A]_{\mathcal{J}})})=s_{\alpha,A}\] for all \((A,[B]_{\mathcal{J}})\in\mathcal{B}^{\prime},\alpha\in\mathcal{L}\) and \((A,[A]_{\mathcal{J}})\in\mathcal{I}_{\alpha}^{\prime}\). **Lemma 3.8**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) be a relative generalized Boolean dynamical system. Then, \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (L) if and only if \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime})\) satisfies Condition (L)._ Proof.: \((\Rightarrow)\) Assume to the contrary that \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime})\) does not satisfy Condition (L). There is then a cycle \((\beta,(A,[B]_{\mathcal{J}}))\) with no exit, where \(\beta=\beta_{1}\cdots\beta_{n}\). Since \((\beta,(A,[B]_{\mathcal{J}}))\) has no exit, it follows that \((A,[B]_{\mathcal{J}})\in\mathcal{B}^{\prime}_{reg}\). So, \(A\in\mathcal{B}_{reg}\) and \((A,[B]_{\mathcal{J}})=(A,\emptyset)\). We claim that \((\beta,A)\) is a cycle with no exit in \((\mathcal{B},\mathcal{L},\theta)\). Choose \(A^{\prime}\subseteq A\). Then \((A^{\prime},\emptyset)\subseteq(A,\emptyset)\) so \((\theta_{\beta}(A^{\prime}),\emptyset)=\theta^{\prime}_{\beta}(A^{\prime},\emptyset)=(A^{\prime},\emptyset)\). Thus, \(\theta_{\beta}(A^{\prime})=A^{\prime}\), which means that \((\beta,A)\) is a cycle. If \((\beta,A)\) has an exit, there is a \(t\leq n\) and a \(C\in\mathcal{B}\) such that \(\emptyset\neq C\subseteq\theta_{\beta_{1,t}}(A)\) and \(\Delta_{C}\neq\{\beta_{t+1}\}\) (where \(\beta_{n+1}:=\beta_{1}\)). It is then easy to see that \(\emptyset\neq(C,\emptyset)\subseteq(\theta_{\beta_{1,t}}(A),\emptyset)=\theta^{\prime}_{\beta_{1,t}}(A,\emptyset)\) and \(\Delta_{(C,\emptyset)}\neq\{\beta_{t+1}\}\), which contradicts the fact that \((\beta,(A,[B]_{\mathcal{J}}))\) has no exit. Hence, \((\beta,A)\) is a cycle with no exit, a contradiction. Therefore, \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime})\) satisfies Condition (L). (\(\Leftarrow\)) Suppose that \((\mathcal{B},\mathcal{L},\theta)\) does not satisfy Condition (L). Choose a cycle \((\beta,A)\) with no exit, where \(\beta=\beta_{1}\cdots\beta_{n}\). Then, \(A\in\mathcal{B}_{reg}\) and \((A,\emptyset)\in\mathcal{B}^{\prime}_{reg}\).
We claim that \((\beta,(A,\emptyset))\) is a cycle with no exit. Let \((A^{\prime},\emptyset)\subseteq(A,\emptyset)\). Then, \(\theta^{\prime}_{\beta}(A^{\prime},\emptyset)=(\theta_{\beta}(A^{\prime}),\emptyset)=(A^{\prime},\emptyset)\). So, \((\beta,(A,\emptyset))\) is a cycle. If \((\beta,(A,\emptyset))\) has an exit, there is a \(t\leq n\) and a \((C,\emptyset)\in\mathcal{B}^{\prime}\) such that \(\emptyset\neq(C,\emptyset)\subseteq(\theta_{\beta_{1,t}}(A),\emptyset)\) and \(\Delta_{(C,\emptyset)}\neq\{\beta_{t+1}\}\) (where \(\beta_{n+1}:=\beta_{1}\)). Then, \(\emptyset\neq C\subseteq\theta_{\beta_{1,t}}(A)\) and \(\Delta_{C}\neq\{\beta_{t+1}\}\), which is not the case since the cycle \((\beta,A)\) has no exit. Thus, the cycle \((\beta,(A,\emptyset))\) has no exit, which is a contradiction. So, \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (L). **Theorem 3.9**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) be a relative generalized Boolean dynamical system. Then the following are equivalent._ 1. \((\mathcal{B},\mathcal{L},\theta)\) _satisfies Condition (L)._ 2. _If_ \(C\) _is a_ \(C^{*}\)_-algebra and_ \(\psi:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\to C\) _is a_ \(*\)_-homomorphism, then_ \(\psi\) _is injective if and only if the following properties hold:_ 1. \(\psi(p_{A})\neq 0\) _for all_ \(\emptyset\neq A\in\mathcal{B}\)_,_ 2. \(\psi(p_{B}-\sum_{\alpha\in\Delta_{B}}s_{\alpha,\theta_{\alpha}(B)}s_{\alpha,\theta_{\alpha}(B)}^{*})\neq 0\) _for all_ \(\emptyset\neq B\in\mathcal{B}_{reg}\setminus\mathcal{J}\)_._ Proof.: (1) \(\Longrightarrow\) (2): The "only if" statement is clear. We prove the "if" part. Let \(\psi:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\to C\) be a \(*\)-homomorphism such that \(\psi(p_{A})\neq 0\) for all \(A\in\mathcal{B}\setminus\{\emptyset\}\) and \(\psi(p_{B}-\sum_{\alpha\in\Delta_{B}}s_{\alpha,\theta_{\alpha}(B)}s_{\alpha,\theta_{\alpha}(B)}^{*})\neq 0\) for all \(B\in\mathcal{B}_{reg}\setminus\mathcal{J}\). Let \(\rho:C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\to C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) be the isomorphism given by \[\rho(p_{(A,[B]_{\mathcal{J}})})=p_{A}+p_{C}-\sum_{\alpha\in\Delta_{C}}s_{\alpha,\theta_{\alpha}(C)}s_{\alpha,\theta_{\alpha}(C)}^{*}-p_{D}+\sum_{\alpha\in\Delta_{D}}s_{\alpha,\theta_{\alpha}(D)}s_{\alpha,\theta_{\alpha}(D)}^{*},\] where \(C,D\in\mathcal{B}_{reg}\) are such that \(A\cup C=B\cup D\) and \(A\cap C=B\cap D=\emptyset\), and \[\rho(s_{\alpha,(A,[A]_{\mathcal{J}})})=s_{\alpha,A}\] for all \((A,[B]_{\mathcal{J}})\in\mathcal{B}^{\prime},\alpha\in\mathcal{L}\) and \((A,[A]_{\mathcal{J}})\in\mathcal{I}_{\alpha}^{\prime}\). Then, \(\psi\circ\rho:C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\to C\) is a \(*\)-homomorphism such that \[\psi\circ\rho(s_{\alpha,(A,[A]_{\mathcal{J}})}s_{\alpha,(A,[A]_{\mathcal{J}})}^{*})=\psi(s_{\alpha,A}s_{\alpha,A}^{*})\neq 0\] for all \(\alpha\in\mathcal{L}^{*}\) and all \(\emptyset\neq(A,[A]_{\mathcal{J}})\in\widetilde{\mathcal{I}}_{\alpha}\).
In fact, if \(\psi\circ\rho(s_{\alpha,(A,[A]_{\mathcal{J}})}s_{\alpha,(A,[A]_{\mathcal{J}})}^{*})=\psi(s_{\alpha,A}s_{\alpha,A}^{*})=0\) for some \(\alpha\in\mathcal{L}^{*}\) and some \(\emptyset\neq(A,[A]_{\mathcal{J}})\in\widetilde{\mathcal{I}}_{\alpha}\), then \[\psi(p_{A})=\psi(s_{\alpha,A}^{*}s_{\alpha,A}s_{\alpha,A}^{*}s_{\alpha,A})=\psi(s_{\alpha,A}^{*})\psi(s_{\alpha,A}s_{\alpha,A}^{*})\psi(s_{\alpha,A})=0\] for \(\emptyset\neq A\in\mathcal{I}_{\alpha}\), a contradiction. Since \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (L), \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime})\) satisfies Condition (L) by Lemma 3.8. Thus, \(\psi\circ\rho\) is injective by Theorem 3.6. Hence, \(\psi\) is injective. (2) \(\Longrightarrow\) (1): Let \(\phi:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\to C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\) be the isomorphism such that \(\phi(p_{A})=p_{(A,[A]_{\mathcal{J}})}\) and \(\phi(s_{\alpha,B})=s_{\alpha,(B,[B]_{\mathcal{J}})}\) for \(A\in\mathcal{B},\alpha\in\mathcal{L}\) and \(B\in\mathcal{I}_{\alpha}\). If \(C\) is a \(C^{*}\)-algebra and \(\rho:C^{*}(\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime},\mathcal{I}_{\alpha}^{\prime})\to C\) is a \(*\)-homomorphism such that \(\rho(p_{(A,[B]_{\mathcal{J}})})\neq 0\) for each \(\emptyset\neq(A,[B]_{\mathcal{J}})\in\widetilde{\mathcal{B}}\), then \(\rho\circ\phi:C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\to C\) is a \(*\)-homomorphism such that \[\rho\circ\phi(p_{A})=\rho(p_{(A,[A]_{\mathcal{J}})})\neq 0\] for all \(\emptyset\neq A\in\mathcal{B}\), and \[\rho\circ\phi(p_{B}-\sum_{\alpha\in\Delta_{B}}s_{\alpha,\theta_{\alpha}(B)}s_{\alpha,\theta_{\alpha}(B)}^{*})\] \[=\rho\Big{(}p_{(B,[B]_{\mathcal{J}})}-\sum_{\alpha\in\Delta_{B}}s_{\alpha,(\theta_{\alpha}(B),[\theta_{\alpha}(B)]_{\mathcal{J}})}s_{\alpha,(\theta_{\alpha}(B),[\theta_{\alpha}(B)]_{\mathcal{J}})}^{*}\Big{)}\] \[=\rho\Big{(}p_{(\emptyset,[B]_{\mathcal{J}})}+p_{(B,\emptyset)}-\sum_{\alpha\in\Delta_{(B,\emptyset)}}s_{\alpha,(\theta_{\alpha}(B),[\theta_{\alpha}(B)]_{\mathcal{J}})}s_{\alpha,(\theta_{\alpha}(B),[\theta_{\alpha}(B)]_{\mathcal{J}})}^{*}\Big{)}\] \[=\rho(p_{(\emptyset,[B]_{\mathcal{J}})})\] \[\neq 0\] for all \(\emptyset\neq B\in\mathcal{B}_{reg}\setminus\mathcal{J}\). Thus \(\rho\circ\phi\) is injective by our assumption. So, \(\rho\) is injective, and hence, \((\mathcal{B}^{\prime},\mathcal{L},\theta^{\prime})\) satisfies Condition (L) by Theorem 3.6. Therefore, \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (L) by Lemma 3.8. ## 4. Condition (K) Recall from [7, Definition 5.1] that a Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta)\) is said to satisfy Condition (K) if there is no pair \(((\beta,\eta),A)\) where \((\beta,\eta)\) is an ultrafilter cycle and \(A\in\eta\) such that if \(\gamma\in\mathcal{L}^{*}\setminus\{\emptyset\}\), \(B\in\mathcal{I}_{A}\) and \(\theta_{\gamma}(B)\in\eta\), then \(B\in\eta\) and \(\gamma=\beta^{k}\) for some \(k\in\mathbb{N}\). We will now generalize and strengthen the characterization given in [7, Theorem 6.3 and Theorem 8.1] of when a Boolean dynamical system satisfies Condition (K).
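As an elementary illustration of Condition (K) (recorded only for orientation), consider a single vertex: \(\mathcal{B}=\{\emptyset,\{v\}\}\) with \(\theta_{\alpha}=\operatorname{id}_{\mathcal{B}}\) for every \(\alpha\in\mathcal{L}\). If \(\mathcal{L}=\{\alpha\}\) (one loop), then the pair \(((\alpha,\eta),\{v\})\) with \(\eta=\{\{v\}\}\) has the property above, so Condition (K) fails, just as Condition (L) does. If instead \(\mathcal{L}=\{\alpha,\beta\}\) (two loops), then for any ultrafilter cycle \((\gamma,\eta)\) and any \(A\in\eta\) one can pick a letter \(\delta\in\{\alpha,\beta\}\) that is not a power of \(\gamma\), while \(\theta_{\delta}(A)=A\in\eta\); hence no pair as above exists and Condition (K) holds, even though the system has many cycles.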
Recall from [20, Remark 2.1] that a \(C^{*}\)-algebra \(C\) is said to have _the ideal property_ if whenever \(I\) and \(J\) are ideals in \(C\) such that \(I\) is not contained in \(J\), there is a projection in \(I\setminus J\); from [21, Definition 8.1] that a \(C^{*}\)-algebra \(C\) is said to have _the weak ideal property_ if whenever \(I\subsetneq J\) are ideals in \(\mathcal{K}\otimes C\), where \(\mathcal{K}\) denotes the \(C^{*}\)-algebra of compact operators on a separable infinite dimensional Hilbert space, then \(J/I\) contains a nonzero projection, and from [1] that a \(C^{*}\)-algebra \(C\) is said to have _topological dimension zero_ if the primitive ideal space of \(C\) endowed with the hull-kernel topology has a basis of compact open sets. For \(n\in\mathbb{N}\), we let \(M_{n}(C(\mathbb{T}))\) denote the \(C^{*}\)-algebra of \(n\times n\)-matrices of continuous functions from \(\mathbb{T}\) to \(\mathbb{C}\). **Theorem 4.1**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) be a relative generalized Boolean dynamical system. Then the following are equivalent._ 1. \((\mathcal{B},\mathcal{L},\theta)\) _satisfies Condition (K)._ 2. \((\mathcal{B},\mathcal{L},\theta)\) _has no cyclic maximal tails._ 3. _If_ \(\mathcal{H}\) _is a hereditary_ \(\mathcal{J}\)_-saturated ideal of_ \(\mathcal{B}\)_, then_ \((\mathcal{B}/\mathcal{H},\mathcal{L},\theta)\) _satisfies Condition (L)._ 4. _Every ideal of_ \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) _is gauge-invariant._ 5. \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) _has the ideal property._ 6. \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) _has the weak ideal property._ 7. _The topological dimension of_ \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) _is zero._ 8. \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) _has no quotient containing a hereditary_ \(C^{*}\)_-subalgebra that is isomorphic to_ \(M_{n}(C(\mathbb{T}))\) _for some_ \(n\in\mathbb{N}\)_._ Proof.: \((1)\Longrightarrow(2)\) follows from the definition of a cyclic maximal tail. \((2)\Longrightarrow(3)\) follows from [7, Proposition 4.8]. \((3)\Longrightarrow(1)\) follows from [7, Proposition 4.5] and Remark 2.4. \((3)\Longrightarrow(4)\): Suppose \(I\) is an ideal of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\). Let \[\mathcal{H}_{I}:=\{A\in\mathcal{B}:p_{A}\in I\}\] and \[\mathcal{S}_{I}:=\Big{\{}A\in\mathcal{B}_{\mathcal{H}_{I}}:p_{A}-\sum_{\alpha\in\Delta_{[A]_{\mathcal{H}_{I}}}}s_{\alpha,\theta_{\alpha}(A)}s_{\alpha,\theta_{\alpha}(A)}^{*}\in I\Big{\}}\] where \(\mathcal{B}_{\mathcal{H}_{I}}:=\big{\{}A\in\mathcal{B}:[A]_{\mathcal{H}_{I}}\in(\mathcal{B}/\mathcal{H}_{I})_{\mathrm{reg}}\big{\}}\). Then [8, Lemma 7.2] says that \(\mathcal{H}_{I}\) is a hereditary \(\mathcal{J}\)-saturated ideal of \(\mathcal{B}\) and \(\mathcal{S}_{I}\) is an ideal of \(\mathcal{B}_{\mathcal{H}_{I}}\) with \(\mathcal{H}_{I}\cup\mathcal{J}\subseteq\mathcal{S}_{I}\).
According to [8, Proposition 7.3], there is a surjective \(*\)-homomorphism \[\phi_{I}:C^{*}(\mathcal{B}/\mathcal{H}_{I},\mathcal{L},\theta,[\mathcal{I}_{ \alpha}];[\mathcal{S}_{I}])\to C^{*}(\mathcal{B},\mathcal{L},\theta, \mathcal{I}_{\alpha};\mathcal{J})/I\] such that \(\phi_{I}(p_{[A]})=p_{A}+I\) for \(A\in\mathcal{B}\) and \(\phi_{I}(s_{\alpha,[B]})=s_{\alpha,B}+I\) for \(\alpha\in\mathcal{L}\) and \(B\in\mathcal{I}_{\alpha}\), where \([\mathcal{I}_{\alpha}]=\{[A]:A\in\mathcal{I}_{\alpha}\}\) and \([\mathcal{S}_{I}]=\{[A]:A\in\mathcal{S}_{I}\}\), and \(I\) is gauge-invariant if (and only if) \(\phi_{I}\) is injective. Since \(\phi_{I}(p_{[A]})=p_{A}+I=0\) if and only if \(A\in\mathcal{H}_{I}\) and \[\phi_{I}\Big{(}p_{[A]}-\sum_{\alpha\in\Delta_{[A]_{\mathcal{H}_{I}}}}s_{ \alpha,\theta_{\alpha}([A])}s_{\alpha,\theta_{\alpha}([A])}^{*}\Big{)}=p_{A}- \sum_{\alpha\in\Delta_{[A]_{\mathcal{H}_{I}}}}s_{\alpha,\theta_{\alpha}(A)}s_ {\alpha,\theta_{\alpha}(A)}^{*}+I=0\] if and only if \(A\in\mathcal{S}_{I}\), it follows from Theorem 3.2 that if \((\mathcal{B}/\mathcal{H}_{I},\mathcal{L},\theta)\) satisfies Condition (L), then \(\phi_{I}\) is injective. Thus, \(I\) is gauge-invariant. \((4)\Longrightarrow(5)\): Suppose that every ideal of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) is gauge-invariant. Let \(I\) and \(J\) be ideals of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) such that \(I\not\subseteq J\). Since \(I\) and \(J\) are gauge-invariant, \(I=I_{(\mathcal{H}_{I},\mathcal{S}_{I})}\) and \(J=J_{(\mathcal{H}_{J},\mathcal{S}_{J})}\) for some hereditary \(\mathcal{J}\)-saturated ideals \(\mathcal{H}_{I},\mathcal{H}_{J}\) and ideals \(\mathcal{S}_{I},\mathcal{S}_{J}\) of \(\mathcal{B}\) by [8, Proposition 7.3]. If \(\mathcal{H}_{I}=\{A\in\mathcal{B}:p_{A}\in I\}\not\subseteq\{A\in\mathcal{B}:p_ {A}\in J\}=\mathcal{H}_{J}\), then \(I\setminus J\) contains a projection. If \(\mathcal{H}_{I}=\mathcal{H}_{J}\), then it follows that \[\mathcal{S}_{I} =\Big{\{}A\in\mathcal{B}_{\mathcal{H}_{I}}:p_{A}-\sum_{\alpha\in \Delta_{[A]}}s_{\alpha,\theta_{\alpha}(A)}s_{\alpha,\theta_{\alpha}(A)}^{*}\in I \Big{\}}\] \[\not\subseteq\Big{\{}A\in\mathcal{B}_{\mathcal{H}_{I}}:p_{A}-\sum_ {\alpha\in\Delta_{[A]}}s_{\alpha,\theta_{\alpha}(A)}s_{\alpha,\theta_{\alpha}(A)} ^{*}\in J\Big{\}}=\mathcal{S}_{J}.\] Hence, \(I\setminus J\) contains a projection. This shows that \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) has the ideal property. \((5)\Longrightarrow(6)\) follows from [21, Proposition 8.2]. \((6)\Longrightarrow(7)\) follows from [22, Theorem 2.8]. \((7)\Longrightarrow(8)\): Since the property of having topological dimension zero passes to quotients and hereditary subalgebras, a \(C^{*}\)-algebra with topological dimension zero can not have a quotient with a hereditary \(C^{*}\)-subalgebra that is isomorphic to \(M_{n}(C(\mathbb{T}))\) for some \(n\in\mathbb{N}\setminus\{0\}\). \((8)\Longrightarrow(1)\): We prove \(\neg(1)\implies\neg(8)\). Suppose that \((\mathcal{B},\mathcal{L},\theta)\) does not satisfy Condition (K). Then, by (2) and Proposition 2.5, there is a cyclic maximal tail \(\mathcal{T}\) in \((\mathcal{B},\mathcal{L},\theta)\) and a \(B\in\mathcal{T}\) such that \(p_{[B]}C^{*}(\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta,[ \mathcal{I}_{\alpha}])p_{[B]}\) is isomorphic to \(M_{n}(C(\mathbb{T}))\) for some \(n\in\mathbb{N}\). 
Since \(C^{*}(\mathcal{B}/(\mathcal{B}\setminus\mathcal{T}),\mathcal{L},\theta,[\mathcal{I}_{\alpha}])\) is a quotient of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\), we have that \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) has a quotient that contains a hereditary \(C^{*}\)-subalgebra that is isomorphic to \(M_{n}(C(\mathbb{T}))\). A \(C^{*}\)-algebra \(A\) has real rank zero if every self-adjoint element in the minimal unitization of \(A\) can be approximated by invertible self-adjoint elements of the minimal unitization of \(A\). The following is an easy consequence of Theorem 4.1. **Corollary 4.2**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) be a relative generalized Boolean dynamical system. If \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) is purely infinite or has real rank zero, then \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (K)._ Proof.: If \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) is purely infinite, then \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) has no quotient containing a hereditary \(C^{*}\)-subalgebra that is isomorphic to \(M_{n}(C(\mathbb{T}))\) for some \(n\in\mathbb{N}\), since the property of being purely infinite passes to quotients and corners (see [19, Propositions 4.3 and 4.17]). Thus, by Theorem 4.1, \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (K). If \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) has real rank zero, then \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) has the ideal property by [1, Theorem 2.6]. It then follows that \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (K) by Theorem 4.1. ## 5. Minimality and simplicity It follows from [8, Theorem 7.4] that if the \(C^{*}\)-algebra of a relative generalized Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha};\mathcal{J})\) is simple, then \(\mathcal{J}=\mathcal{B}_{\text{reg}}\). We will in this section generalize [9, Theorem 9.16] and characterize when the \(C^{*}\)-algebra of a generalized Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) is simple (Theorem 5.6). But we begin with two lemmas and a proposition that partly generalizes and strengthens [9, Theorem 9.15]. ### Minimality If \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\) are two ideals of a Boolean algebra \(\mathcal{B}\), then we denote by \(\mathcal{I}_{1}\oplus\mathcal{I}_{2}\) the smallest ideal of \(\mathcal{B}\) that contains both \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\). It is easy to see that \[\mathcal{I}_{1}\oplus\mathcal{I}_{2}=\{A_{1}\cup A_{2}:A_{1}\in\mathcal{I}_{1},\ A_{2}\in\mathcal{I}_{2}\}.\] **Lemma 5.1**.: _Let \((\mathcal{B},\mathcal{L},\theta)\) be a Boolean dynamical system and suppose \(A\in\mathcal{B}\). 
Then_ \[\mathcal{H}(A):=\big{\{}B\in\mathcal{B}:\text{there exists a finite subset $F\subseteq\mathcal{L}^{*}$ such that $B\subseteq\bigcup_{\beta\in F}\theta_{\beta}(A)$}\big{\}}\] _is the smallest hereditary ideal that contains \(A\), and_ \[\mathcal{S}(\mathcal{H}(A)):=\{B\in\mathcal{B}:\text{there is an $n \in\mathbb{N}_{0}$ such that $\theta_{\beta}(B)\in\mathcal{H}(A)$ for all $\beta\in\mathcal{L}^{n}$},\] \[\text{and $\theta_{\gamma}(B)\in\mathcal{H}(A)\oplus\mathcal{B}_{ \text{reg}}$ for all $\gamma\in\mathcal{L}^{*}$ with $|\gamma|<n$}\}\] _is a saturated hereditary ideal that contains \(A\)._ Proof.: It is straightforward to check that \(\mathcal{H}(A)\) is a hereditary ideal, and it is easy to see that if \(\mathcal{H}\) is a hereditary ideal and \(A\in\mathcal{H}\), then \(\mathcal{H}(A)\subseteq\mathcal{H}\). It is also straightforward to check that \(\mathcal{S}(\mathcal{H}(A))\) is a saturated hereditary ideal. For the proof of Lemma 5.3, the following notion of a partially defined topological graph will be useful. **Definition 5.2**.: (cf.[17, Definition 4.6, 4.7]) Let \(E\) be a partially defined topological graph. 1. For \(n\in\mathbb{N}\cup\{\infty\}\), a path \(e\in E^{n}\) is called a _negative orbit_ of \(v\in E^{0}\) if \(r(e)=v\) and \(d(e)\in E^{0}_{sg}\) when \(n<\infty\). 2. For each negative orbit \(e=(e_{1},e_{2},\cdots,e_{n})\in E^{n}\) for \(v\in E^{0}\), a _negative orbit space_\(\operatorname{Orb}^{-}(v,e)\) is defined by \[\operatorname{Orb}^{-}(v,e)=\{v,d(e_{1}),d(e_{2}),\cdots,d(e_{n})\}\subset E^{0}.\] **Lemma 5.3**.: _Let \((\mathcal{B},\mathcal{L},\theta)\) be a Boolean dynamical system such that \(\mathcal{B}\neq\{\emptyset\}\). Then \((\mathcal{B},\mathcal{L},\theta)\) has a maximal tail._ Proof.: Consider the partially defined topological graph \(E:=E_{(\mathcal{B},\mathcal{L},\theta,\mathcal{R}_{\alpha})}\) constructed in Section 3.1. Since \(\mathcal{B}\neq\{\emptyset\}\), we have that \(E^{0}\neq\emptyset\). Choose \(\chi\in E^{0}\). Let \(e:=(e^{\alpha_{n}}_{\eta_{n}})_{n\geq 1}\) be a negative orbit of \(\chi\). We claim that \[\mathcal{T}:=\{A\in\mathcal{B}:\text{there exists }\beta\in\mathcal{L}^{*} \text{ such that }\theta_{\beta}(A)\in\eta\text{ for some }\eta\in\operatorname{Orb}^{-}(\chi,e)\}\] is a maximal tail. Clearly, we have \(\emptyset\notin\mathcal{T}\). We show that (T2): Let \(A\in\mathcal{B}\) such that \(\theta_{\alpha}(A)\in\mathcal{T}\) for some \(\alpha\in\mathcal{L}\). Then, there is \(\beta\in\mathcal{L}^{*}\) such that \(\theta_{\beta}(\theta_{\alpha}(A))=\theta_{\alpha\beta}(A)\in\eta\) for some \(\eta\in\operatorname{Orb}^{-}(\chi,e)\). Thus, \(A\in\mathcal{T}\). (T3): Let \(A\cup B\in\mathcal{T}\). Then there is \(\beta\in\mathcal{L}^{*}\) such that \(\theta_{\beta}(A\cup B)=\theta_{\beta}(A)\cup\theta_{\beta}(B)\in\eta\) for some \(\eta\in\operatorname{Orb}^{-}(\chi,e)\). Since \(\eta\) is an ultrafilter, either \(\theta_{\beta}(A)\in\eta\) or \(\theta_{\beta}(B)\in\eta\). Hence, \(A\in\mathcal{T}\) or \(B\in\mathcal{T}\). (T4): Let \(A\in\mathcal{T}\) and \(B\in\mathcal{B}\) with \(A\subseteq B\). Then, there is \(\beta\in\mathcal{L}^{*}\) such that \(\theta_{\beta}(A)\in\eta\) for some \(\eta\in\operatorname{Orb}^{-}(\chi,e)\). Since \(\theta_{\beta}(A)\subseteq\theta_{\beta}(B)\), \(\theta_{\beta}(B)\in\eta\). Thus, \(B\in\mathcal{T}\). (T5): Let \(A\in\mathcal{T}\) be a regular set. 
Then, there is \(\beta\in\mathcal{L}^{*}\) such that \(\theta_{\beta}(A)\in\eta\) for some \(\eta\in\operatorname{Orb}^{-}(\chi,e)\). If \(\theta_{\alpha}(A)\notin\mathcal{T}\) for all \(\alpha\in\mathcal{L}^{*}\setminus\{\emptyset\}\), then \(\theta_{\beta}(\theta_{\alpha}(A))\notin\eta\) for all \(\eta\in\operatorname{Orb}^{-}(\chi,e)\) and all \(\alpha,\beta\in\mathcal{L}^{*}\setminus\{\emptyset\}\), a contradiction. Thus, \(\theta_{\alpha}(A)\in\mathcal{T}\) for some \(\alpha\in\mathcal{L}^{*}\setminus\{\emptyset\}\). (T6): Let \(A,B\in\mathcal{T}\). Then there exist \(\beta,\beta^{\prime}\in\mathcal{L}^{*}\) such that \(\theta_{\beta}(A)\in\eta\) and \(\theta_{\beta^{\prime}}(B)\in\eta^{\prime}\) for some \(\eta,\eta^{\prime}\in\operatorname{Orb}^{-}(\chi,e)\). We may assume that \[\eta=r(e^{\alpha_{i}}_{\eta_{i}}\cdots e^{\alpha_{j}}_{\eta_{j}})(=f_{\emptyset [\alpha_{i,j}]}(g_{(\alpha_{i,j-1})\alpha_{j}}(\eta_{j})))\text{ and }\eta^{\prime}=d(e^{\alpha_{i}}_{\eta_{i}}\cdots e^{\alpha_{j}}_{\eta_{j}})(=h _{[\alpha_{j}]\emptyset}(\eta_{j}))\] for some \(1\leq i,j\leq|e|\). Then, \(\theta_{\beta\alpha_{i,j}}(A)\in\eta_{j}\cap\mathcal{I}_{\alpha_{i,j}}\). Thus, \(\theta_{\beta\alpha_{i,j}}(A)\cap\theta_{\beta^{\prime}}(B)\in\eta^{\prime}\), and hence, \(\theta_{\beta\alpha_{i,j}}(A)\cap\theta_{\beta^{\prime}}(B)\in\mathcal{T}\). **Definition 5.4**.: A Boolean dynamical system \((\mathcal{B},\mathcal{L},\theta)\) is _minimal_ if \(\{\emptyset\}\) and \(\mathcal{B}\) are the only saturated hereditary ideals of \(\mathcal{B}\). **Proposition 5.5**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) be a generalized Boolean dynamical system. Then the following are equivalent._ 1. \((\mathcal{B},\mathcal{L},\theta)\) _is minimal._ 2. _Either_ \(\mathcal{B}=\{\emptyset\}\) _or_ \(\mathcal{B}\setminus\{\emptyset\}\) _is the only maximal tail of_ \((\mathcal{B},\mathcal{L},\theta)\)_._ 3. _If_ \(A\in\mathcal{B}\setminus\{\emptyset\}\)_, then_ \(\mathcal{S}(\mathcal{H}(A))=\mathcal{B}\)_._ 4. _If_ \(A,B\in\mathcal{B}\)_,_ \(x\in\mathcal{L}^{\infty}\) _and_ \(A\neq\emptyset\)_, then there are a_ \(C\in\mathcal{B}_{\mathrm{reg}}\) _such that_ \(B\setminus C\in\mathcal{H}(A)\)_, and an_ \(n\in\mathbb{N}_{0}\) _such that_ \(\theta_{x_{1,n}}(B)\in\mathcal{H}(A)\)_._ 5. _If_ \(A,B\in\mathcal{B}\) _and_ \(A\neq\emptyset\)_, then there is a_ \(C\in\mathcal{B}_{\mathrm{reg}}\) _such that_ \(B\setminus C\in\mathcal{H}(A)\) _and such that there for every_ \(x\in\mathcal{L}^{\infty}\) _is an_ \(n\in\mathbb{N}_{0}\) _such that_ \(\theta_{x_{1,n}}(C)\in\mathcal{H}(A)\)_._ _._ 6. \(\{0\}\) _and_ \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) _are the only gauge-invariant ideals of_ \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\)_._ Proof.: The equivalence of (1) and (6) follows from [8, Theorem 7.4]. We will show that \((1)\implies(2)\implies(3)\implies(4)\implies(5)\) and \(\neg(1)\implies\neg(5)\). \((1)\implies(2)\): Suppose \((\mathcal{B},\mathcal{L},\theta)\) is minimal and that \(\mathcal{B}\neq\{\emptyset\}\). According to Lemma 5.3, \((\mathcal{B},\mathcal{L},\theta)\) then has a maximal tail. Suppose \(\mathcal{T}\) is a maximal tail. Then \(\mathcal{B}\setminus\mathcal{T}\) is a saturated hereditary ideal of \(\mathcal{B}\). Since \((\mathcal{B},\mathcal{L},\theta)\) is minimal, it follows that \(\mathcal{B}\setminus\mathcal{T}=\{\emptyset\}\), and thus \(\mathcal{T}=\mathcal{B}\setminus\{\emptyset\}\). 
\((2)\implies(3)\): Suppose \((2)\) holds and that \(A\in\mathcal{B}\setminus\{\emptyset\}\). Then \(\mathcal{S}(\mathcal{H}(A))\) is a saturated hereditary ideal of \(\mathcal{B}\). Suppose \(\mathcal{S}(\mathcal{H}(A))\neq\mathcal{B}\). Then, we see that \(\mathcal{B}/\mathcal{S}(\mathcal{H}(A))\neq\{[\emptyset]\}\). It then follows from Lemma 5.3 that the quotient Boolean dynamical system \((\mathcal{B}/\mathcal{S}(\mathcal{H}(A)),\mathcal{L},\theta)\) has a maximal tail \(\mathcal{T}\). Then \[\widetilde{\mathcal{T}}:=\{B\in\mathcal{B}:[B]_{\mathcal{S}(\mathcal{H}(A))} \in\mathcal{T}\}\] is a maximal tail of \((\mathcal{B},\mathcal{L},\theta)\) and therefore equal to \(\mathcal{B}\setminus\{\emptyset\}\). But that cannot be the case since \([A]_{\mathcal{S}(\mathcal{H}(A))}=[\emptyset]\). Hence, we must have that \(\mathcal{S}(\mathcal{H}(A))=\mathcal{B}\). \((3)\implies(4)\): Suppose \((3)\) holds, that \(A,B\in\mathcal{B}\), \(x\in\mathcal{L}^{\infty}\) and \(A\neq\emptyset\). Then \(B\in\mathcal{S}(\mathcal{H}(A))\). It follows from the description of \(\mathcal{S}(\mathcal{H}(A))\) givne in Lemma 5.1 that there is an \(n\in\mathbb{N}_{0}\) such that \(\theta_{\beta}(B)\in\mathcal{H}(A)\) for all \(\beta\in\mathcal{L}^{n}\), and \(\theta_{\gamma}(B)\in\mathcal{H}(A)\oplus\mathcal{B}_{\mathrm{reg}}\) for all \(\gamma\in\mathcal{L}^{*}\) with \(|\gamma|<n\). If \(n=0\) and we let \(C=\emptyset\), then \(C\in\mathcal{B}_{\mathrm{reg}}\), \(B\setminus C=B\in\mathcal{H}(A)\) and \(\theta_{x_{1,0}}(B)=\emptyset\in\mathcal{H}(A)\). If \(n>0\), then \(\theta_{x_{1,n}}(B)\in\mathcal{H}(A)\) and there is a \(C\in\mathcal{B}_{\mathrm{reg}}\) such that \(B\setminus C\in\mathcal{H}(A)\). Thus, \((4)\) holds. \((4)\implies(5)\): Suppose \((4)\) holds, that \(A,B\in\mathcal{B}\), \(x\in\mathcal{L}^{\infty}\) and \(A\neq\emptyset\). Then there are a \(C\in\mathcal{B}_{\mathrm{reg}}\) such that \(B\setminus C\in\mathcal{H}(A)\), and an \(n\in\mathbb{N}_{0}\) such that \(\theta_{x_{1,n}}(B)\in\mathcal{H}(A)\). We then have that \(B\cap C\in\mathcal{B}_{\mathrm{reg}}\), \(B\setminus(B\cap C)=B\setminus C\in\mathcal{H}(A)\). Moreover \(\theta_{x_{1,n}}(B\cap C)\subseteq\theta_{x_{1,n}}(B)\in\mathcal{H}(A)\), which implies that \(\theta_{x_{1,n}}(B\cap C)\in\mathcal{H}(A)\). Thus, \((5)\) holds. \(\neg(1)\implies\neg(5)\): Suppose that \(\mathcal{I}\) is a saturated hereditary ideal different from \(\{\emptyset\}\) and \(\mathcal{B}\). Choose \(A\in\mathcal{I}\setminus\{\emptyset\}\) and \(B\in\mathcal{B}\setminus\mathcal{I}\). Since \(\mathcal{H}(A)\subseteq\mathcal{I}\), we have that if there is a \(B^{\prime}\in\mathcal{B}\) such that \(B^{\prime}\setminus C\notin\mathcal{I}\) for any \(C\in\mathcal{B}_{\mathrm{reg}}\), then \((5)\) does not hold. Suppose that for every \(B^{\prime}\in\mathcal{B}\), there is a \(C\in\mathcal{B}_{\mathrm{reg}}\) such that \(B^{\prime}\setminus C\in\mathcal{I}\). Suppose \(C_{1}\in\mathcal{B}_{\mathrm{reg}}\) and \(B\setminus C_{1}\in\mathcal{I}\). Since \(B\notin\mathcal{I}\), it follows that \(C_{1}\notin\mathcal{I}\). Since \(C_{1}\in\mathcal{B}_{\mathrm{reg}}\), we deduce that there is an \(\alpha_{1}\in\mathcal{L}\) such that \(\theta_{\alpha_{1}}(C_{1})\notin\mathcal{I}\). We can then choose \(C\in\mathcal{B}_{\mathrm{reg}}\) such that \(\theta_{\alpha_{1}}(C_{1})\setminus C\in\mathcal{I}\). Let \(C_{2}:=C\cap\theta_{\alpha_{1}}(C_{1})(\neq\emptyset)\). Since \(\theta_{\alpha_{1}}(C_{1})\notin\mathcal{I}\), it follows that \(C_{2}\notin\mathcal{I}\). 
Since \(C_{2}\in\mathcal{B}_{\mathrm{reg}}\), we deduce that there is an \(\alpha_{2}\in\mathcal{L}\) such that \(\theta_{\alpha_{2}}(C_{2})\notin\mathcal{I}\). Continuing like this, we can construct a sequence \((C_{n},\alpha_{n})_{n\in\mathbb{N}}\) such that we for each \(n\in\mathbb{N}\) have \(C_{n}\in\mathcal{B}_{\mathrm{reg}}\setminus\mathcal{I}\), \(\alpha_{n}\in\mathcal{L}\), \(C_{n+1}\subseteq\theta_{\alpha_{n}}(C_{n})\) and \(\theta_{\alpha_{n}}(C_{n})\setminus C_{n+1}\in\mathcal{I}\). Let \(x=\alpha_{1}\alpha_{2}\cdots\) and suppose \(n\in\mathbb{N}\). Then \(C_{n+1}\subseteq\theta_{x_{1,n}}(C_{1})\). Since \(C_{n+1}\notin\mathcal{I}\), and therefore \(C_{n+1}\notin\mathcal{H}(A)\) it follows that \(\theta_{x_{1,n}}(C_{1})\notin\mathcal{H}(A)\). We thus have that \((5)\) does not hold. ### Simplicity We now state our main result of Section 5. It is a generalization of [9, Theorem 9.16], [12, Theorem 3.6] and [15, Theorem 4.7]. **Theorem 5.6**.: _Let \((\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) be a generalized Boolean dynamical system. Then the following are equivalent._ 1. _Either_ \(\mathcal{B}=\{\emptyset\}\)_, or_ \(\mathcal{B}\setminus\{\emptyset\}\) _is the only maximal tail of_ \((\mathcal{B},\mathcal{L},\theta)\) _and_ \(\mathcal{B}\setminus\{\emptyset\}\) _is not cyclic._ 2. \((\mathcal{B},\mathcal{L},\theta)\) _is minimal and satisfies Condition (L)._ 3. \((\mathcal{B},\mathcal{L},\theta)\) _is minimal and satisfies Condition (K)._ 4. \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) _is simple._ Proof.: The equivalence of (1) and (3) follows from Theorem 4.1 and Proposition 5.5, the equivalence of (2) and (3) follows from Theorem 4.1. (2) \(\Longrightarrow\) (4): Let \(I\) be a nonzero ideal of \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\). Since \((\mathcal{B},\mathcal{L},\theta)\) satisfies Condition (L), \(I\) contains \(p_{A}\) for some \(A\in\mathcal{B}\setminus\{\emptyset\}\) by the Cuntz-Krieger uniqueness theorem 3.6. Then, \(\mathcal{H}_{I}=\{A\in\mathcal{B}:p_{A}\in I\}\) is a nonempty saturated hereditary ideal of \(\mathcal{B}\) by [7, Lemma 7.2(1)]. Since \((\mathcal{B},\mathcal{L},\theta)\) is minimal, \(\mathcal{H}_{I}=\mathcal{B}\). Thus, \(I=C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\). (4) \(\Longrightarrow\) (1): Suppose that \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) is simple. Then, by Proposition 5.5, either \(\mathcal{B}=\{\emptyset\}\) or \(\mathcal{B}\setminus\{\emptyset\}\) is the only maximal tail of \((\mathcal{B},\mathcal{L},\theta)\). Suppose that \(\mathcal{T}:=\mathcal{B}\setminus\{\emptyset\}\) is a cyclic maximal tail. Then, by Proposition 2.5, there is a \(B\in\mathcal{T}\) such that \(p_{B}C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})p_{B}\) is isomorphic to \(M_{n}(C(\mathbb{T}))\) for some \(n\in\mathbb{N}\). This contradicts to the fact that \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\) is simple. Thus, \(\mathcal{T}=\mathcal{B}\setminus\{\emptyset\}\) is not cyclic.
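To make the simplicity criterion concrete, here is a minimal illustrative example; it is not taken from the results above, but is the standard single-cycle example phrased in this framework. Let \(\mathcal{B}=\{\emptyset,A\}\), \(\mathcal{L}=\{\alpha\}\), \(\theta_{\alpha}(A)=A\) and \(\mathcal{I}_{\alpha}=\mathcal{B}\). Then \(\mathcal{B}\setminus\{\emptyset\}=\{A\}\) is the only maximal tail, but it is cyclic, so condition (1) of Theorem 5.6 fails. Indeed, \(A\) is regular with \(\Delta_{A}=\{\alpha\}\), so \[s_{\alpha,A}^{*}s_{\alpha,A}=p_{A}=s_{\alpha,A}s_{\alpha,A}^{*},\] hence \(p_{A}\) is a unit and \(s_{\alpha,A}\) is a unitary generator, and \(C^{*}(\mathcal{B},\mathcal{L},\theta,\mathcal{I}_{\alpha})\cong C(\mathbb{T})\), which is not simple. If instead \(\mathcal{L}=\{\alpha,\beta\}\) with \(\theta_{\alpha}(A)=\theta_{\beta}(A)=A\), the cycle acquires an exit: the system is minimal and satisfies Condition (K), and the resulting \(C^{*}\)-algebra is generated by two isometries with orthogonal ranges summing to the unit, that is, the Cuntz algebra \(\mathcal{O}_{2}\), which is simple, in agreement with Theorem 5.6.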
2306.10233
Energy Minimization for Active RIS-Aided UAV-Enabled SWIPT Systems
In this paper, we consider an active reconfigurable intelligent surface (RIS)-aided unmanned aerial vehicle (UAV)-enabled simultaneous wireless information and power transfer (SWIPT) system with multiple ground users. Compared with the conventional passive RIS, the active RIS, which deploys internally integrated amplifiers, can offset part of the multiplicative fading. In this system, we deal with an optimization problem of minimizing the total energy cost of the UAV. Specifically, we alternately optimize the trajectories, the hovering time, and the reflection vectors at the active RIS by using the successive convex approximation (SCA) method. Simulation results show that the active RIS performs better in energy saving than the conventional passive RIS.
Zhangjie Peng, Ruijing Liu, Cunhua Pan, Zhenkun Zhang, Jiangzhou Wang
2023-06-17T02:16:47Z
http://arxiv.org/abs/2306.10233v2
# Energy Minimization for Active RIS-Aided UAV-Enabled SWIPT Systems ###### Abstract In this paper, we consider an active reconfigurable intelligent surface (RIS)-aided unmanned aerial vehicle (UAV)-enabled simultaneous wireless information and power transfer (SWIPT) system with multiple ground users. Compared with the conventional passive RIS, the active RIS deploying the internally integrated amplifiers can offset part of the multiplicative fading. In this system, we deal with an optimization problem of minimizing the total energy cost of the UAV. Specifically, we alternately optimize the trajectories, the hovering time, and the reflection vectors at the active RIS by using the successive convex approximation (SCA) method. Simulation results show that the active RIS performs better in energy saving than the conventional passive RIS. Reconfigurable intelligent surface (RIS), active RIS, unmanned aerial vehicle (UAV), simultaneous wireless information and power transfer (SWIPT), successive convex approximation (SCA). ## I Introduction As one of the potential technologies driving the development of 6G, the reconfigurable intelligent surface (RIS), which consists of a large number of reflecting elements, has become a promising technology to enhance the communication quality of wireless networks. With the characteristics of low cost and easy deployment, researchers have studied the deployment of the RIS into wireless communication systems [1, 2, 3]. Due to the high flexibility, unmanned aerial vehicle (UAV) has been proposed to assist the signal transmission in the communication system [4, 5]. Meanwhile, simultaneous wireless information and power transfer (SWIPT), which can enhance the energy and information transmission, has emerged as a promising communication transmission approach for internet-of-things (IoT) networks [6]. By integrating the RIS, UAV and SWIPT, the performance of the communication system can be further enhanced. In [7], an RIS was deployed in SWIPT systems to enhance both information and energy transmission. The authors in [8] considered a problem for maximizing the average achievable rate, and showed that the performance of SWIPT systems with a single IoT device can be improved with the deployment of both the UAV and RIS. However, the performance of the RIS-aided systems is affected by the significant multiplicative fading on the cascaded channels of the reflective links. To solve this problem, the active RIS integrates the amplifiers into the reflecting elements to offset the multiplicative fading [9]. The authors in [10] theoretically compared the active RIS-aided system with the passive RIS-aided system, and demonstrated that the better performance can be achieved by the active RIS-aided system when the power budget is adequate. In [11], the authors investigated an active RIS-aided system and proposed a joint computing and communication design using the successive convex approximation (SCA) method. Against the above background, we consider an active RIS-aided UAV-enabled SWIPT system with multiple ground users, where the UAV serves as a transmitter. In this paper, we aim to minimize the total energy cost of the UAV while fulfilling the energy reception and information transmission target for all users. The problem is solved by reformulating the original problem into two subproblems, and applying the SCA method to handle the subproblems, and the original problem is solved by alternately optimizing two subproblems. 
Simulation results show that the active RIS performs better in terms of energy saving than the passive RIS under the same total energy supply for the UAV. ## II System Model And Problem Formulation ### _System Model_ As shown in Fig. 1, we consider an active RIS-aided UAV-enabled SWIPT system with \(K\) ground users, where the active RIS consists of \(M\) reflecting elements, and both the UAV and each user are equipped with one antenna.

Fig. 1: Active RIS-aided UAV-enabled SWIPT system.

To facilitate the representation of distances between devices, we use a complex number \(q=x+jy\) to denote the horizontal coordinate \((x,y)\) of a device. In this case, the positions of the active RIS and of user \(k\) are denoted by \(q_{\rm R}=q_{\rm R}^{x}+jq_{\rm R}^{y}\), at height \(H_{\rm R}\), and \(q_{{\rm S},k}=q_{{\rm S},k}^{x}+jq_{{\rm S},k}^{y}\) for \(k\in\mathcal{K}\triangleq\{1,...,K\}\), at zero height, respectively. Meanwhile, the UAV flies at a fixed height \(H_{\rm V}\), and each flight consists of \(L\) straight path segments starting from \(q_{\rm V,0}\) and ending at \(q_{{\rm V},L}\). Besides, the \(l\)-th hovering position of the UAV is denoted by \(q_{{\rm V},l}=q_{{\rm V},l}^{x}+jq_{{\rm V},l}^{y}\) for \(l\in\mathcal{L}\triangleq\{1,...,L-1\}\). Then, the trajectory of the UAV is expressed as \({\bf q}_{\rm V}=\left[q_{\rm V,0},...,q_{{\rm V},l},...,q_{{\rm V},L}\right]^{\rm T}\in\mathbb{C}^{(L+1)\times 1}\), and the hovering time is denoted by \({\bf t}=[t_{1},...,t_{l},...,t_{L-1}]^{\rm T}\). When the UAV is hovering at the \(l\)-th hovering position, the reflection vector and the diagonal reflection matrix are respectively denoted by \(\mathbf{\phi}_{l}=\left[b_{1,l}e^{j\theta_{1,l}},...,b_{M,l}e^{j\theta_{M,l}}\right]^{\rm T}\) and \(\mathbf{\Theta}_{l}=\mathrm{diag}\left(\mathbf{\phi}_{l}\right)=\mathrm{diag}\left(b_{1,l}e^{j\theta_{1,l}},...,b_{M,l}e^{j\theta_{M,l}}\right)\), where \(\theta_{m,l}\) and \(b_{m,l}\) are the phase shift and the amplitude of the \(m\)-th reflecting element at the active RIS, respectively. The channels from the UAV to user \(k\), from the UAV to the active RIS, and from the active RIS to user \(k\) are respectively denoted by \(g_{{\rm d},k,l}\in\mathbb{C}^{1\times 1}\), \({\bf g}_{{\rm t},l}\in\mathbb{C}^{M\times 1}\) and \({\bf g}_{{\rm r},k}\in\mathbb{C}^{M\times 1}\). The Rician fading channels \(g_{{\rm d},k,l}\), \({\bf g}_{{\rm t},l}\) and \({\bf g}_{{\rm r},k}\) are respectively modeled as \[g_{{\rm d},k,l}=\sqrt{\beta_{{\rm d},k,l}}\left(\sqrt{\frac{\mu_{\rm d}}{\mu_{\rm d}+1}}g_{{\rm d},k,l}^{\rm LoS}+\sqrt{\frac{1}{\mu_{\rm d}+1}}g_{{\rm d},k,l}^{\rm NLoS}\right), \tag{1}\] \[{\bf g}_{{\rm t},l}=\sqrt{\beta_{{\rm t},l}}\left(\sqrt{\frac{\mu_{\rm t}}{\mu_{\rm t}+1}}{\bf g}_{{\rm t},l}^{\rm LoS}+\sqrt{\frac{1}{\mu_{\rm t}+1}}{\bf g}_{{\rm t},l}^{\rm NLoS}\right), \tag{2}\] \[{\bf g}_{{\rm r},k}=\sqrt{\beta_{{\rm r},k}}\left(\sqrt{\frac{\mu_{\rm r}}{\mu_{\rm r}+1}}{\bf g}_{{\rm r},k}^{\rm LoS}+\sqrt{\frac{1}{\mu_{\rm r}+1}}{\bf g}_{{\rm r},k}^{\rm NLoS}\right), \tag{3}\] where each element of the non-line-of-sight (NLoS) components \(g_{{\rm d},k,l}^{\rm NLoS}\), \({\bf g}_{{\rm t},l}^{\rm NLoS}\) and \({\bf g}_{{\rm r},k}^{\rm NLoS}\) follows an i.i.d. 
circularly symmetric complex Gaussian distribution with zero mean and unit variance, \(\mu_{\rm d}\), \(\mu_{\rm t}\) and \(\mu_{\rm r}\) are the Rician factors, \(g_{\rm d,k,l}^{\rm LoS}\), \({\bf g}_{\rm t,l}^{\rm LoS}\) and \({\bf g}_{\rm r,k}^{\rm LoS}\) are the line-of-sight (LoS) components. Under the fly-hover-broadcast (FHB) protocol [5], it is assumed that the UAV only transmits signals during hovering mode, and thus the large-scale fading coefficients \(\beta_{\rm d,k,l}\), \(\beta_{\rm t,l}\) and \(\beta_{r,k}\) that depend on the hovering position are respectively given by \[\beta_{\rm d,k,l} = \frac{\beta_{0}}{(\left|q_{\rm V,l}\!-\!q_{\rm S,k}\right|^{2}\!+ \!H_{\rm V}^{2})^{\tau_{d}/2}}\triangleq\frac{\beta_{0}}{d_{\rm d,k,l}^{\tau_{ d}}}, \tag{4}\] \[\beta_{\rm t,l} = \frac{\beta_{0}}{(\left|q_{\rm V,l}-q_{\rm R}\right|^{2}+(H_{\rm V }-H_{\rm R})^{2})^{\tau_{\rm V}/2}}\triangleq\frac{\beta_{0}}{d_{\rm t,l}^{\tau_ {\rm R}}},\] (5) \[\beta_{\rm r,k} = \frac{\beta_{0}}{(\left|q_{\rm S,k}-q_{\rm R}\right|^{2}+H_{\rm R} ^{2})^{\tau_{\rm r}/2}}\triangleq\frac{\beta_{0}}{d_{\rm r,k}^{\tau_{\rm r}}}, \tag{6}\] where \(\beta_{0}\) is the channel gain of \(1\,\mathrm{m}\), \(\tau_{\rm d}\), \(\tau_{\rm t}\) and \(\tau_{\rm r}\) are the path-loss coefficients, \(d_{\rm d,k,l}\), \(d_{\rm t,l}\) and \(d_{\rm r,k}\) are the distances from user \(k\) to the UAV, from the UAV to the active RIS and from the active RIS to user \(k\), respectively. \(g_{\rm d,k,l}^{\rm LoS}\), \({\bf g}_{\rm t,l}^{\rm LoS}\) and \({\bf g}_{\rm r,k}^{\rm LoS}\) represent the LoS components under the uniform linear array (ULA) model, which are respectively expressed as \[g_{\rm d,k,l}^{\rm LoS} = e^{-j\frac{2\pi}{\lambda}d_{\rm d,k,l}}, \tag{7}\] \[{\bf g}_{\rm t,l}^{\rm LoS} = e^{-j\frac{2\pi d_{\rm d,k}}{\lambda}}\Big{[}1,\!e^{-j\frac{2\pi d }{\lambda}\cos\omega_{\rm t,l}}\!,\!...,\!e^{-j\frac{2\pi(M-1)d}{\lambda}\cos \omega_{\rm t,l}}\!\Big{]}^{\rm T}\!,\] (8) \[{\bf g}_{\rm r,k}^{\rm LoS} = e^{-j\frac{2\pi d_{\rm d,k}}{\lambda}}\Big{[}\!1,\!e^{-j\frac{2 \pi d}{\lambda}\cos\omega_{\rm t,k}}\!,\!...,\!e^{-j\frac{2\pi(M-1)d}{\lambda} \cos\omega_{\rm t,k}}\!\Big{]}^{\rm T}\!, \tag{9}\] where \(\lambda\) and \(d\) respectively represent the wavelength and the element spacing of the active RIS, \(\cos\omega_{\rm t,l}\!=\!\frac{q_{\rm R}^{\tau_{\rm R}}-q_{\rm V,l}^{\tau_{\rm V,l}}}{d_{\rm t,l}}\) and \(\cos\omega_{\rm r,k}\!=\!\frac{q_{\rm S,k}^{\tau_{\rm R}}-q_{\rm R}^{\tau_{\rm R}}}{d _{\rm t,k}}\) are the cosine of angle-of-arrival (AoA) and angle-of-departure (AoD), respectively. In our system, the information transfer and energy application can be obtained simultaneously, and the power split ratio for decoding information is \(\eta\in[0,1]\) and for harvested power is \(1-\eta\). Then, the received signal of user \(k\) is given by \[y_{k}= \sqrt{p_{k}}({\bf g}_{\rm r,\!k}^{\rm H}\mathbf{\Theta}_{l}\!{\bf g }_{\rm t,l}\!+\!g_{\rm d,k,l})x_{k}+{\bf g}_{\rm r,\!k}^{\rm H}\mathbf{\Theta}_{l}{ \bf n}_{\rm RIS}+n_{\rm r}\] \[+\!\!\!\sum_{j\neq k,j\in\mathcal{K}}\!\!\!\sqrt{p_{j}}({\bf g}_{ \rm r,\!k}^{\rm H}\mathbf{\Theta}_{l}\!{\bf g}_{\rm t,l}\!+\!g_{\rm d,k,l})x_{j}\] \[= \sqrt{p_{k}}g_{\rm k,l}x_{k}+\!\!\!\sum_{j\neq k,j\in\mathcal{K}}\! 
\!\!\sqrt{p_{j}}g_{k,l}x_{j}+{\bf g}_{{\rm r},k}^{\rm H}\mathbf{\Theta}_{l}{\bf n}_{\rm RIS}+n_{\rm r}, \tag{10}\] where \(g_{k,l}={\bf g}_{{\rm r},k}^{\rm H}\mathbf{\Theta}_{l}{\bf g}_{{\rm t},l}+g_{{\rm d},k,l}\), \(p_{k}\) is the transmit power for user \(k\) from the UAV, \(x_{k}\sim\mathcal{CN}(0,1)\) is the transmit signal for user \(k\), \(n_{\rm r}\sim\mathcal{CN}\left(0,\delta_{\rm r}^{2}\right)\) is the noise at user \(k\), and \({\bf n}_{\rm RIS}\sim\mathcal{CN}\left({\bf 0},\delta_{\rm RIS}^{2}{\bf I}_{M}\right)\) is the noise at the active RIS. When the UAV is hovering at the \(l\)-th hovering position, the ergodic rate of user \(k\) is given by \[R_{k,l}=\mathbb{E}\left\{\log_{2}\left(1+\frac{\eta p_{k}|g_{k,l}|^{2}}{\sum\limits_{j\neq k,j\in\mathcal{K}}\eta p_{j}|g_{k,l}|^{2}+\|{\bf g}_{{\rm r},k}^{\rm H}\mathbf{\Theta}_{l}\|^{2}\delta_{\rm RIS}^{2}+\delta_{\rm r}^{2}}\right)\right\}.\] ## III Joint Reflection Coefficients And Trajectory Design In this section, we propose an effective SCA-based algorithm for solving Problem (17). Specifically, we reformulate the original problem into two subproblems, solve each subproblem by using the SCA method, and then solve the original problem by alternately optimizing the two subproblems. ### _The Design of Reflection Coefficient Vectors_ In this subsection, we first optimize \(\{\boldsymbol{\phi}_{l}\}\) with given \(\mathbf{q}_{\mathrm{V}}\) and \(\mathbf{t}\) when the UAV is at the \(l\)-th hovering position. Since (17a) is not a function of \(\{\boldsymbol{\phi}_{l}\}\), the corresponding subproblem is a feasibility-check problem. To improve the convergence performance, the optimization objective can be enhanced by maximizing the total oversupplied energy of all users [1]. However, since this subproblem may be infeasible, the algorithm may converge prematurely. To solve this, we set the objective function to maximize the minimum charged energy of all users instead [1], and the subproblem is formulated as \[\max_{\{\boldsymbol{\phi}_{l}\},\varepsilon}\ \varepsilon \tag{18a}\] \[\mathrm{s.t.}\ \ (1/E_{k}^{\mathrm{req}})\sum_{l\in\mathcal{L}}t_{l}\tilde{P}_{k,l}\geqslant\varepsilon,\,k\in\mathcal{K}, \tag{18b}\]
The constraints of Problem (17) that involve \(\{\boldsymbol{\phi}_{l}\}\), in particular (17b) and (17d), are imposed on Problem (18) as well. Then, constraint (17d) can be reformulated as \[\left(\sum_{k\in\mathcal{K}}p_{k}\beta_{{\rm t},l}+\delta_{\mathrm{RIS}}^{2}\right)\boldsymbol{\phi}_{l}^{\mathrm{H}}\boldsymbol{\phi}_{l}t_{l}\leqslant E_{\mathrm{RIS}}^{\mathrm{act}},\,l\in\mathcal{L}. \tag{40}\] Note that constraints (17b) and (33) are still non-convex. 
By using the similar processing procedure as in (39), we can obtain \(\mathbb{E}\{\|{\bf g}_{{\rm r},k}^{\rm H}\mathbf{\Theta}_{l}\|^{2}\}\delta_{\mathrm{RIS}}^{2}=\beta_{{\rm r},k}\delta_{\mathrm{RIS}}^{2}\boldsymbol{\phi}_{l}^{\mathrm{H}}\boldsymbol{\phi}_{l}\), and reformulate (17b) as \[\frac{\eta p_{k}\mathcal{D}_{k,l}}{\sum\limits_{j\neq k,j\in\mathcal{K}}\eta p_{j}\mathcal{D}_{k,l}+\beta_{{\rm r},k}\delta_{\mathrm{RIS}}^{2}\boldsymbol{\phi}_{l}^{\mathrm{H}}\boldsymbol{\phi}_{l}+\delta_{\rm r}^{2}}\geqslant\gamma_{k},\,k\in\mathcal{K},\,l\in\mathcal{L}.\] Hence, we approximate (17c) as \[\frac{(1-\eta)}{E_{k}^{\rm req}}\sum_{l\in\mathcal{L}}p_{k}\big((U_{1,k,l}^{n}+U_{3,k,l})\beta_{{\rm t},l}+U_{2,k,l}^{n}\sqrt{\beta_{{\rm d},k,l}\beta_{{\rm t},l}}+\beta_{{\rm d},k,l}\big)t_{l}\geqslant 1. \tag{55}\] In order to transform (55) into a convex constraint, we derive lower bounds of \(\beta_{{\rm d},k,l}\) and \(\beta_{{\rm t},l}\) with their first-order Taylor expansions as follows \[\beta_{{\rm d},k,l}\geqslant\frac{\beta_{0}}{(|q_{{\rm V},l}^{n}-q_{{\rm S},k}|^{2}+H_{\rm V}^{2})^{\frac{\tau_{\rm d}}{2}}}-\frac{\tau_{\rm d}\beta_{0}(|q_{{\rm V},l}-q_{{\rm S},k}|^{2}-|q_{{\rm V},l}^{n}-q_{{\rm S},k}|^{2})}{2(|q_{{\rm V},l}^{n}-q_{{\rm S},k}|^{2}+H_{\rm V}^{2})^{\frac{\tau_{\rm d}}{2}+1}}\triangleq\bar{\beta}_{{\rm d},k,l}, \tag{56}\] \[\beta_{{\rm t},l}\geqslant\frac{\beta_{0}}{(|q_{{\rm V},l}^{n}-q_{\rm R}|^{2}+(H_{\rm V}-H_{\rm R})^{2})^{\frac{\tau_{\rm t}}{2}}}-\frac{\tau_{\rm t}\beta_{0}(|q_{{\rm V},l}-q_{\rm R}|^{2}-|q_{{\rm V},l}^{n}-q_{\rm R}|^{2})}{2(|q_{{\rm V},l}^{n}-q_{\rm R}|^{2}+(H_{\rm V}-H_{\rm R})^{2})^{\frac{\tau_{\rm t}}{2}+1}}\triangleq\bar{\beta}_{{\rm t},l}, \tag{57}\] where \(q_{{\rm V},l}^{n}\) is the value of \(q_{{\rm V},l}\) at the \(n\)-th iteration. By setting the slack variables \(\{\tilde{z}_{1,l}\}\), \(\{\tilde{z}_{2,k,l}\}\), \(\{\tilde{z}_{3,k,l}\}\) and \(\{\tilde{y}_{k,l}\}\), constraint (55) can be transformed as \[W_{k,l}\geqslant\frac{E_{k}^{\rm req}\tilde{y}_{k,l}^{2}}{(1-\eta)\,p_{k}t_{l}}, \tag{58}\] where \[W_{k,l}=(U_{1,k,l}^{n}+U_{3,k,l})\tilde{z}_{1,l}+U_{2,k,l}^{n}\tilde{z}_{3,k,l}+\tilde{z}_{2,k,l}, \tag{59}\] \[\bar{\beta}_{{\rm t},l}\geqslant\tilde{z}_{1,l}\geqslant 0,\,l\in\mathcal{L}, \tag{60}\] \[\bar{\beta}_{{\rm d},k,l}\geqslant\tilde{z}_{2,k,l}\geqslant 0,\,l\in\mathcal{L},\,k\in\mathcal{K}, \tag{61}\] \[\tilde{z}_{1,l}\geqslant(1/\tilde{z}_{2,k,l})\tilde{z}_{3,k,l}^{2},\,l\in\mathcal{L},\,k\in\mathcal{K}, \tag{62}\] \[\sum_{l\in\mathcal{L}}\tilde{y}_{k,l}^{2}\geqslant 1,\,k\in\mathcal{K}. \tag{63}\] Similarly, constraint (63) is transformed as \[\sum_{l\in\mathcal{L}}(2\tilde{y}_{k,l}^{n}\tilde{y}_{k,l}-(\tilde{y}_{k,l}^{n})^{2})\geqslant 1,\,k\in\mathcal{K}, \tag{64}\] where \(\tilde{y}_{k,l}^{n}\) is the value of \(\tilde{y}_{k,l}\) at the \(n\)-th iteration. To address the non-convex constraint (40), we first derive a lower bound of \(|q_{{\rm V},l}-q_{\rm R}|^{2}\). By defining \(\mathcal{Q}_{l}\triangleq|q_{{\rm V},l}-q_{\rm R}|^{2}=(q_{{\rm V},l}^{x}-q_{\rm R}^{x})^{2}+(q_{{\rm V},l}^{y}-q_{\rm R}^{y})^{2}\), and utilizing the first-order Taylor expansions with respect to \(q_{{\rm V},l}^{x}\) and \(q_{{\rm V},l}^{y}\), we have \[\mathcal{Q}_{l}\geqslant((q_{{\rm V},l}^{x})^{n}-q_{\rm R}^{x})^{2}+((q_{{\rm V},l}^{y})^{n}-q_{\rm R}^{y})^{2}+2((q_{{\rm V},l}^{x})^{n}-q_{\rm R}^{x})(q_{{\rm V},l}^{x}-(q_{{\rm V},l}^{x})^{n})+2((q_{{\rm V},l}^{y})^{n}-q_{\rm R}^{y})(q_{{\rm V},l}^{y}-(q_{{\rm V},l}^{y})^{n})\triangleq\bar{\mathcal{Q}}_{l}, \tag{65}\] where \((q_{{\rm V},l}^{x})^{n}\) and \((q_{{\rm V},l}^{y})^{n}\) are the values of \(q_{{\rm V},l}^{x}\) and \(q_{{\rm V},l}^{y}\) at the \(n\)-th iteration, respectively. Then, we can reformulate constraint (40) as \[\Big{(}\sum_{k\in\mathcal{K}}p_{k}\frac{\beta_{0}}{\big{(}\bar{\mathcal{Q}}_{l}+(H_{\rm V}-H_{\rm R})^{2}\big{)}^{\frac{\tau_{\rm t}}{2}}}+\delta_{\mathrm{RIS}}^{2}\Big{)}\boldsymbol{\phi}_{l}^{\mathrm{H}}\boldsymbol{\phi}_{l}t_{l}\leqslant E_{\mathrm{RIS}}^{\mathrm{act}},\,l\in\mathcal{L}. \tag{66}\] By utilizing the first-order Taylor expansion with respect to \(\{t_{l}\}\), constraint (66) can be reformulated as \[\Big{(}\sum_{k\in\mathcal{K}}p_{k}\frac{\beta_{0}}{\big{(}\bar{\mathcal{Q}}_{l}+(H_{\rm V}-H_{\rm R})^{2}\big{)}^{\frac{\tau_{\rm t}}{2}}}+\delta_{\mathrm{RIS}}^{2}\Big{)}\boldsymbol{\phi}_{l}^{\mathrm{H}}\boldsymbol{\phi}_{l}\leqslant E_{\mathrm{RIS}}^{\mathrm{act}}\Big{(}\frac{1}{t_{l}^{n}}-\frac{1}{(t_{l}^{n})^{2}}(t_{l}-t_{l}^{n})\Big{)},\,l\in\mathcal{L}, \tag{67}\] where \(t_{l}^{n}\) is the value of \(t_{l}\) at the \(n\)-th iteration. For constraint (17b), we reformulate it into \[\frac{\eta p_{k}W_{k,l}}{\sum\limits_{j\neq k,j\in\mathcal{K}}\eta p_{j}W_{k,l}+\beta_{{\rm r},k}\delta_{\mathrm{RIS}}^{2}\boldsymbol{\phi}_{l}^{\mathrm{H}}\boldsymbol{\phi}_{l}+\delta_{\rm r}^{2}}\geqslant\gamma_{k}. \tag{68}\] With some basic transformations, we can rewrite (68) as \[\eta\Big{(}p_{k}-\gamma_{k}\sum_{j\neq k,j\in\mathcal{K}}p_{j}\Big{)}W_{k,l}\geqslant\gamma_{k}(\beta_{{\rm r},k}\delta_{\mathrm{RIS}}^{2}\boldsymbol{\phi}_{l}^{\mathrm{H}}\boldsymbol{\phi}_{l}+\delta_{\rm r}^{2}). \tag{69}\] Finally, by defining \(\Upsilon\) as the set of slack variables \(\{\tilde{z}_{1,l}\}\), \(\{\tilde{z}_{2,k,l}\}\), \(\{\tilde{z}_{3,k,l}\}\) and \(\{\tilde{y}_{k,l}\}\), we obtain the problem to be solved at the \(n\)-th SCA iteration for Problem (50) as follows \[\min_{\mathbf{q}_{\mathrm{V}},\mathbf{t},\Upsilon}\ E_{\mathrm{V}}\left(\mathbf{q}_{\mathrm{V}},\mathbf{t}\right) \tag{70a}\] \[\mathrm{s.t.}\ \ \mathrm{(50b)}.\] Problem (70) is convex and thus can be solved using CVX. ### _Algorithm Development_ Based on the above discussions, we propose an SCA-based algorithm for solving Problem (17), of which the details are summarized in Algorithm 1. Furthermore, we update the value of \(\boldsymbol{\psi}_{k,l}\) when solving Problem (70) to limit the approximation gap in (51). 
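The global lower bound in (56) relies on the convexity of \(\beta_{0}/(x+H_{\mathrm{V}}^{2})^{\tau_{\mathrm{d}}/2}\) in the squared horizontal distance \(x\). A quick numerical sanity check is sketched below; the parameter values and \(\beta_{0}\) are illustrative assumptions, not values from the text.

```python
import numpy as np

# Assumed illustrative parameters.
beta_0, tau_d, H_V = 1e-3, 2.4, 20.0

def beta_d(x):
    """Large-scale fading as in (4), with x = |q_V - q_S|^2."""
    return beta_0 / (x + H_V**2) ** (tau_d / 2)

def beta_d_lower(x, x_n):
    """First-order Taylor lower bound of beta_d around x_n, cf. (56)."""
    return beta_d(x_n) - tau_d * beta_0 * (x - x_n) / (2 * (x_n + H_V**2) ** (tau_d / 2 + 1))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 80.0**2, 10_000)      # squared horizontal distances
x_n = rng.uniform(0.0, 80.0**2, 10_000)    # expansion points from a previous iteration
assert np.all(beta_d(x) >= beta_d_lower(x, x_n) - 1e-15)
```

Because the bound holds globally, the SCA surrogate constraints built from (56) and (57) remain feasible for the original problem at every iteration.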
Since the optimal solution to Problem (70) may not satisfy constraint (58), the objective function value of (17) generated by Algorithm 1 may not decrease monotonically. However, the simulation results show that the fluctuations are not significant. ## IV Simulation Results To provide numerical results for our system, 5 users are assumed in a semicircular area with a radius of \(30\,\mathrm{m}\), and the positions of users S1-S5 are \((-30,0)\), \((-15\sqrt{2},15\sqrt{2})\), \((0,30)\), \((30,0)\) and \((15\sqrt{2}/2,15\sqrt{2}/2)\), respectively, i.e., users S1-S4 are placed on the circle, and user S5 is placed at the midpoint of the radius. The position of the active RIS is \((q_{\mathrm{R}}^{x},q_{\mathrm{R}}^{y})=(0,0)\) with \(H_{\mathrm{R}}=10\,\mathrm{m}\), the initial position is \(q_{\mathrm{V},0}=(-35,0)\) and the final position is \(q_{\mathrm{V},L}=(35,0)\). The UAV's flight height is set as \(H_{\mathrm{V}}=20\,\mathrm{m}\), and the transmit power is \(p_{k}=0.2\,\mathrm{W}\). The SINR threshold of the users and the energy constraint of the active RIS are set as \(\gamma=-10\,\mathrm{dB}\) and \(E_{\mathrm{RIS}}^{\mathrm{act}}=20\,\mathrm{J}\), respectively. The parameters related to the propulsion power of the UAV are set the same as in [5], where the maximum-range speed is \(v=18.3\,\mathrm{m/s}\). Unless stated otherwise, we set the Rician factors \(\mu_{\mathrm{t}}=\mu_{\mathrm{r}}=\mu_{\mathrm{d}}=10\), the path-loss exponents \(\tau_{\mathrm{t}}=\tau_{\mathrm{r}}=2.3\) and \(\tau_{\mathrm{d}}=2.4\), the energy requirement for each user \(E_{\mathrm{req}}=0.04\,\mathrm{mJ}\), \(\eta=0.5\), \(\lambda=1\,\mathrm{m}\), and \(d=0.5\,\mathrm{m}\). To compare the performance of the two types of RIS-aided systems, the energy resources of the UAV are assumed to be equal in both the active RIS-aided and the passive RIS-aided systems. Specifically, the total energy cost models are expressed as \[E_{\mathrm{pas}}=E_{\mathrm{V}}^{\mathrm{pas}}, \tag{71}\] \[E_{\mathrm{act}}=E_{\mathrm{V}}^{\mathrm{act}}+(L-1)E_{\mathrm{RIS}}^{\mathrm{act}}. \tag{72}\] Fig. 2 shows the total energy cost versus the number of reflecting elements. It is seen from the figure that the total energy cost decreases as the number of reflecting elements increases, which verifies the effectiveness of the RIS-aided system in energy saving. Compared with the passive RIS-aided scheme with \(M=32\), the active RIS performs better in terms of energy saving under the same energy resources at the UAV. Moreover, the convergence behaviour of the proposed algorithm indicates that it converges within a few iterations and exhibits no noticeable fluctuations. Fig. 3 shows the optimized trajectories of both the active and passive RIS-aided systems when \(M=32\). Note that the trajectory of the UAV in the active RIS-aided system is closer to the RIS than in the passive RIS-aided system, which shows a decreased flight range, leading to a corresponding decrease in the duration available for signal transmission. ## V Conclusions In this paper, an active RIS-aided UAV-enabled SWIPT system was considered. We aimed to minimize the total energy cost of the UAV by alternately optimizing the trajectories, the hovering time and the reflection vectors at the active RIS based on the SCA method. The simulation results showed that the active RIS-aided system performs better in energy saving than the passive RIS-aided system under the same energy resources at the UAV.
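Since the listing of Algorithm 1 does not survive in the extracted text above, the overall alternating procedure described in Section III can be summarized with a schematic code sketch. This is only an assumed skeleton, not the authors' implementation; the two sub-solver callables for Problems (18) and (70) are hypothetical placeholders supplied by the caller.

```python
def alternating_sca(solve_reflection, solve_trajectory, q0, t0, phi0,
                    max_outer=20, tol=1e-3):
    """Schematic alternating-SCA loop (assumed skeleton, not the paper's Algorithm 1).

    solve_reflection(q, t, phi) -> phi            : SCA solution of the {phi_l} subproblem (18)
    solve_trajectory(q, t, phi) -> (q, t, energy) : SCA solution of the (q_V, t) subproblem (70)
    """
    q, t, phi = q0, t0, phi0
    prev_energy = float("inf")
    energy = prev_energy
    for _ in range(max_outer):
        # Step 1: fix the trajectory and hovering time, update the reflection vectors.
        phi = solve_reflection(q, t, phi)
        # Step 2: fix the reflection vectors, update the trajectory and hovering time.
        q, t, energy = solve_trajectory(q, t, phi)
        # Stop once the UAV energy cost stabilises.
        if abs(prev_energy - energy) < tol:
            break
        prev_energy = energy
    return q, t, phi, energy
```

Each callable is expected to run its own inner SCA iterations (e.g., via CVX or CVXPY) until the corresponding surrogate problem converges.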
2303.15595
Bi-Encoder Cascades for Efficient Image Search
Modern neural encoders offer unprecedented text-image retrieval (TIR) accuracy, but their high computational cost impedes their adoption in large-scale image searches. To lower this cost, model cascades use an expensive encoder to refine the ranking of a cheap encoder. However, existing cascading algorithms focus on cross-encoders, which jointly process text-image pairs, and do not consider cascades of bi-encoders, which separately process texts and images. We introduce the small-world search scenario as a realistic setting where bi-encoder cascades can reduce costs. We then propose a cascading algorithm that leverages the small-world search scenario to reduce the lifetime image encoding costs of a TIR system. Our experiments show cost reductions by up to 6x.
Robert Hönig, Jan Ackermann, Mingyuan Chi
2023-03-27T20:54:49Z
http://arxiv.org/abs/2303.15595v2
# Model Cascades for Efficient Image Search ###### Abstract Modern neural encoders offer unprecedented text-image retrieval (TIR) accuracy. However, their high computational cost impedes their adoption in large-scale image searches. We propose a novel image ranking algorithm that uses a cascade of increasingly powerful neural encoders to progressively filter images by how well they match a given text. Our algorithm reduces lifetime TIR costs by over 3x. neural networks, text-image retrieval, cascaded models
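The cascade idea in the abstract can be illustrated with a small sketch. This is not the authors' algorithm, only an assumed two-stage version: a cheap encoder scores every image for the query text, and a stronger, more expensive encoder re-ranks only the top candidates. The encoder interface (`encode_text`, `encode_images`) and the `top_k` value are assumptions.

```python
import numpy as np

def cascade_rank(text, images, cheap_encoder, strong_encoder, top_k=100):
    """Rank `images` for `text` with a two-stage encoder cascade (illustrative sketch).

    Each encoder is assumed to expose encode_text(text) -> (d,) and
    encode_images(images) -> (n, d), returning L2-normalised embeddings,
    so that a dot product gives the cosine similarity.
    """
    # Stage 1: score all images with the cheap encoder.
    t_cheap = cheap_encoder.encode_text(text)
    cheap_scores = cheap_encoder.encode_images(images) @ t_cheap

    # Keep only the most promising candidates for the expensive stage.
    candidates = np.argsort(-cheap_scores)[:top_k]

    # Stage 2: re-rank the candidates with the stronger (costlier) encoder.
    t_strong = strong_encoder.encode_text(text)
    strong_scores = strong_encoder.encode_images([images[i] for i in candidates]) @ t_strong

    # Final order: refined candidates first, remaining images by cheap score.
    refined = [candidates[i] for i in np.argsort(-strong_scores)]
    rest = [i for i in np.argsort(-cheap_scores) if i not in set(refined)]
    return refined + rest
```

In such a setup the expensive encoder only processes `top_k` images per query instead of the whole collection, which is where the cost savings claimed in the abstract come from.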
## 2. Related Work Model cascading is a recurrent theme in the literature on efficient machine learning (ML) systems. FrugalML (Frigal et al., 2015) minimizes access costs of ML APIs by cascading two calls to a cheap API and to an expensive API. NoScope (Frigal et al., 2015) speeds up object detection in videos by splitting a reference model into a sequence of two specialized models. Model cascades have also been applied to facial key point estimation (Krause et al., 2016), pedestrian detection (Beng et al., 2017) and other domains. Recent work on encoders for TIR is dominated by transformer-based bi-encoders (BEs) (Beng et al., 2017; Li et al., 2017; Wang et al., 2018) and cross-encoders (CEs) (Frigal et al., 2015; Wang et al., 2018; Wang et al., 2018). BEs process images and texts with separate encoders, whereas CEs also add cross-connections between the encoders. Hence, CEs are more powerful, but need to recompute \(\mathbf{V}_{\mathcal{D}}\) for new queries. This makes them impractical for large-scale searches and unsuitable for our idea. Therefore, we focus on BEs. Several methods for fast TIR with CEs have been developed: VLDeformer (Krause et al., 2016) trains a decomposable CE that can be used as a BE at inference time with minimal loss in quality. CrispSearch (CrispSearch, 2017), LightningDot (Lithith et al., 2017) and "Retrieve Fast, Rerank Smart" (Frigal et al., 2015) all introduce two-level sequences of a BE whose results can be cached for approximate inference and a CE for precise inference on a subset of the BE results. This is similar to our idea but differs in two key ways: First, we consider arbitrarily deep model cascades, whereas these approaches are fundamentally limited to two models. Second, we target BE inference instead of CE inference. In fact, this suggests that our approach could complement these existing techniques as the BE model in their first stage for even faster TIR. ## 3. Models and Methods ### Cascaded Search Let \(\mathcal{D}\) be a collection of \(n\) images that we want to query with a cascade of BEs. Consider a cascade of image encoders \(I=[I_{s},I_{1},\ldots,I_{r}]\) that all use the same text encoder \(T\). We propose Algorithm 1 to query \(\mathcal{D}\) by ranking all images with \(I_{s}\) and subsequently the top \(m_{j}\) images with \(I_{j}\). Note that with \(r=0\), Algorithm 1 reduces to a standard BE search. **Computational cost**: Assume that function Query in Algorithm 1 is invoked \(q\) times and denote the computational cost of Algorithm 1 with \(C(I,q)\). We want to minimize the lifetime computational cost of Algorithm 1, that is, \(C(I,q)\) as \(q\to\infty\).
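Before analysing this cost, the following minimal sketch (our own Python illustration, not the paper's reference code; the encoder callables, the cache layout, and all names are assumptions) makes the cascaded ranking procedure concrete:

```python
import numpy as np

def cascaded_query(text_emb, base_embs, encoders, m, cache):
    """Rank a collection of n images against a single text query with a cascade of encoders.

    text_emb  : unit-norm embedding of the query text from the shared text encoder T
    base_embs : (n, d) array of precomputed unit-norm image embeddings under the cheap encoder I_s
    encoders  : list of callables [I_1, ..., I_r]; encoders[j](image_index) returns a unit-norm embedding
    m         : list of re-ranking depths [m_1, ..., m_r]
    cache     : list of dicts; cache[j] stores embeddings already computed by encoders[j]
    """
    # Level 0: score every image once with the cheap encoder I_s (cosine similarity).
    ranking = np.argsort(-(base_embs @ text_emb))

    # Levels 1..r: re-rank only the current top-m_j images with the stronger encoder I_j.
    for j, (encode, m_j) in enumerate(zip(encoders, m)):
        top = ranking[:m_j]
        embs = []
        for i in map(int, top):
            if i not in cache[j]:          # the expensive encoder runs at most once per image
                cache[j][i] = encode(i)
            embs.append(cache[j][i])
        scores = np.stack(embs) @ text_emb
        ranking = np.concatenate([top[np.argsort(-scores)], ranking[m_j:]])
    return ranking
```

With an empty encoder list (\(r=0\)) the function degenerates to a plain single-encoder BE search, matching the remark above.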
We can decompose \(C(I,q)\) into the sum of the lifetime image encoding cost \(a(I,q)\) and some term \(b(q)\) that is independent of \(I\) and thus irrelevant for optimization over \(I\). Next, we formalize our introductory observation on the set of a search engine's lifetime search results into the following key assumption: **Assumption 1**.: _For \(q\in\mathbb{N}\), let \(S_{q}\subset\mathcal{D}\) be the set of all images pushed to \(\mathrm{Top}\) in query \(q\). Then, \(\frac{1}{n}\left|\cup_{q\in\mathbb{N}}S_{q}\right|=:f\ll 1\)._ If \(I_{s},I_{1},\ldots,I_{r}\) have costs \(t_{s}<t_{1}<\ldots<t_{r}\), then Assumption 1 implies that \[a(I,q)=nt_{s}+fn\sum_{i=1}^{r}t_{i}.\] Hence, the 2-level cascade \([I_{s},I_{1}]\) is cheaper than the 1-level cascade \([I_{1}]\) if the speedup factor \(t_{1}/(t_{s}+ft_{1})\) exceeds 1. We note that Assumption 1 implies no computational advantage of the \((r+1)\)-level cascade \(I\) with \(r>1\) over the equally powerful 2-level cascade \(I^{\prime}=[I_{s},I_{r}]\) with \(m_{r}^{\prime}=m_{1}\). However, if \(q\) is low enough that \(\mathbf{V}\) is not hit, then the \((r+1)\)-level cascade \(I\) speeds up individual queries by a factor of \[m_{1}t_{r}/\sum_{i=1}^{r}m_{i}t_{i}. \tag{1}\] This is useful, because unlike an uncascaded model \([I_{r}]\) that executes the expensive image encoder \(I_{r}\) only during build time, the 2-level cascade \(I^{\prime}\) has a \(m_{1}t_{r}\) runtime overhead when \(\mathbf{V}\) is not hit. Hence, deep cascades can mitigate the increased latency of early queries in 2-level cascades. ### Creating the Cascade We apply our proposed methods to CLIP (Krause et al., 2016), a powerful transformer-based text-image BE. CLIP uses the GPT-2 architecture (Gord et al., 2017) for the text encoder, the vision transformer (ViT) (Frigal et al., 2015) architecture for the image encoder and matches images to texts by the cosine similarity of their embeddings. We create a cascade \([I_{s},I_{1},\ldots,I_{r}]\) from publicly available trained CLIP image encoders of different sizes. ## 4. Experiments ### Experimental Setup **Metrics**: Given a dataset \(\mathcal{D}\) of image-caption pairs, we measure the Recall@\(k\) (R@\(k\)) metric as the fraction of captions in \(\mathcal{D}\) whose corresponding image is among the top-\(k\) search results. In line with the IR literature, we report the Recall@\(k\) for \(k\in\{1,5,10\}\). In addition, we report for 2-level cascades the lifetime speedup and for deeper cascades the query speedup as discussed in Section 3.1. We run all experiments on an Intel i7-11800H CPU at 2.30 GHz with turbo boost disabled and compute speedups by measuring the total CPU time of queries. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Dataset & Method & R@1 & R@5 & R@10 & Speedup \\ \hline \multirow{2}{*}{MSCOCO} & No Cascade & 30.1 & 54.2 & 64.6 & 1x \\ & Cascade & +0.2 & +0.4 & +0.5 & 3.2x \\ \hline \multirow{2}{*}{Flickr30k} & No Cascade & 29.9 & 52.0 & 61.3 & 1x \\ & Cascade & +0.8 & +2.0 & +2.4 & 3.2x \\ \hline \hline \end{tabular} \end{table} Table 1. Recall@\(k\) in % and lifetime speedup of the 2-level cascade [ViT-B/32, ViT-B/16] over the uncascaded baseline [ViT-B/16]. **Datasets**: We evaluate our algorithm on the MSCOCO validation dataset with 5k samples and on the Flickr30k dataset with 32k samples. **Parameters**: We set the top-\(m\) value of encoder \(I_{1}\) to \(m_{1}=50\) and assume a lifetime return fraction of \(f=0.1\).
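The cost model of Section 3.1 can also be turned into a small helper for choosing cascade parameters. The sketch below (our own illustration, not part of the paper's tooling; function names are assumptions) computes the lifetime speedup of a 2-level cascade and solves Equation (1) for the re-ranking depth \(m_{2}\) of a 3-level cascade given a target query speedup:

```python
def lifetime_speedup(t_s, t_1, f):
    """Lifetime image-encoding cost ratio of the 1-level cascade [I_1] over the 2-level cascade [I_s, I_1]."""
    return t_1 / (t_s + f * t_1)

def m2_for_target_query_speedup(m_1, t_1, t_2, target):
    """Solve Equation (1) for m_2 so that the 3-level cascade [I_s, I_1, I_2] reaches the
    target query speedup over the 2-level cascade [I_s, I_2]:
        m_1 * t_2 / (m_1 * t_1 + m_2 * t_2) = target."""
    return m_1 * (1.0 / target - t_1 / t_2)

# Worked example from the 3-level cascade section below: ViT-L/14 is ~3.3x slower
# than ViT-B/16 and the target query speedup is 2x.
print(m2_for_target_query_speedup(m_1=50, t_1=1.0, t_2=3.3, target=2.0))  # ~9.85, rounded to 10
```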
### 2-level cascades We use the Huggingface (Zhou et al., 2017) CLIP implementation with a ViT-B/16 image encoder as our uncascaded baseline \([I_{1}]\). We use the faster ViT-B/32 image encoder as \(I_{\text{s}}\) to create the 2-level cascade \([I_{\text{s}},I_{1}]\). Table 1 shows empirical results. The cascaded model reduces lifetime computational costs threefold. Surprisingly, the cascaded model at the same time achieves consistently higher Recall@\(k\) than the uncascaded model. One explanation may be that ViT-B/32 initially processes input images into 32x32 tiles. Since this tiling is more coarse-grained than the 16x16 tiling used by ViT-B/16, it may offer superior approximate filtering of search results. Hence, \(I_{\text{s}}\) could determine the top \(m_{1}\) images more effectively than \(I_{1}\). Further research is needed to explain why 2-level cascades show superior Recall@k. ### 3-level cascades As noted in Section 3.1, \(n\)-level cascades offer no reduced lifetime costs over 2-level cascades, but may speed up individual queries. This is important for large image encoders that slow down queries, such as the ViT-L/14 encoder that is 3.3x slower than ViT-B/16. Therefore, we introduce the 2-level cascade [ViT-B/32, ViT-L/14] and compare it against the 3-level cascade [ViT-B/32, ViT-B/16, ViT-L/14]. Concretely, we set a target speedup of 2x and use Equation (1) to determine the corresponding number \(m_{2}\) of top ranked images on which Algorithm 1 should execute ViT-L/14. This yields \(m_{2}=m_{1}\left(\frac{1}{2}-\frac{t_{1}}{t_{2}}\right)=50\left(\frac{1}{2}-\frac{1}{3.3}\right)\approx 10\). Table 2 reports the empirically measured query speedups and the change in Recall@\(k\) of the 3-level cascade. Similarly to Section 4.2, the deeper cascade offers superior predictions. However, for Recall@10 the predictions become significantly worse. This is because Algorithm 1 only uses ViT-L/14 to rerank the top \(m_{2}=10\) images, so the set of the top 10 images stays unchanged. Hence, for \(m_{2}=10\), the cascade [ViT-B/32, ViT-B/16, ViT-L/14] is equivalent to the less powerful cascade [ViT-B/32, ViT-B/16] with respect to the Recall@10 metric. ## 5. Conclusion Our experiments show that Algorithm 1 can lower lifetime computational search costs by over 3x at no reduction in search quality. At the same time, we show that deeper model cascades can mitigate the increase in latency of early queries. However, single-digit speedups may not sufficiently reduce computational costs to economically rank large-scale image databases with expensive transformer-based BEs. Instead, a practitioner may use traditional search engines to retrieve the top-\(k\) images and apply a neural search cascade on top of it. This heterogeneous cascade may offer a viable path towards the integration of state-of-the-art neural networks with established image search platforms. It is important to note that all our observations rely on Assumption 1. While we have provided anecdotal evidence to support our choice of the lifetime return fraction as \(f=10\%\), different search scenarios likely vary in \(f\) and achieve accordingly different speedups.
2307.07466
Comparing Scale Parameter Estimators for Gaussian Process Interpolation with the Brownian Motion Prior: Leave-One-Out Cross Validation and Maximum Likelihood
Gaussian process (GP) regression is a Bayesian nonparametric method for regression and interpolation, offering a principled way of quantifying the uncertainties of predicted function values. For the quantified uncertainties to be well-calibrated, however, the kernel of the GP prior has to be carefully selected. In this paper, we theoretically compare two methods for choosing the kernel in GP regression: cross-validation and maximum likelihood estimation. Focusing on the scale-parameter estimation of a Brownian motion kernel in the noiseless setting, we prove that cross-validation can yield asymptotically well-calibrated credible intervals for a broader class of ground-truth functions than maximum likelihood estimation, suggesting an advantage of the former over the latter. Finally, motivated by the findings, we propose interior cross validation, a procedure that adapts to an even broader class of ground-truth functions.
Masha Naslidnyk, Motonobu Kanagawa, Toni Karvonen, Maren Mahsereci
2023-07-14T16:48:34Z
http://arxiv.org/abs/2307.07466v2
Comparing Scale Parameter Estimators for Gaussian Process Regression: Cross Validation and Maximum Likelihood ###### Abstract Gaussian process (GP) regression is a Bayesian nonparametric method for regression and interpolation, offering a principled way of quantifying the uncertainties of predicted function values. For the quantified uncertainties to be well-calibrated, however, the covariance kernel of the GP prior has to be carefully selected. In this paper, we theoretically compare two methods for choosing the kernel in GP regression: cross-validation and maximum likelihood estimation. Focusing on the scale-parameter estimation of a Brownian motion kernel in the noiseless setting, we prove that cross-validation can yield asymptotically well-calibrated credible intervals for a broader class of ground-truth functions than maximum likelihood estimation, suggesting an advantage of the former over the latter. ###### Contents * 1 Introduction * 1.1 Scale Parameter Estimation * 1.2 Contributions * 1.3 Related work * 2 Background * 2.1 Gaussian process regression * 2.2 Kernel parameter estimation * 3 Setting * 3.1 Brownian motion kernel * 3.2 Sequences of partitions * 3.3 Holder spaces * 3.4 Fractional Brownian motion * 3.5 Functions of finite quadratic variation * 4 Main results * 4.1 Deterministic setting * 4.2 Random setting * 5 Consequences for credible intervals * 6 Experiments * 6.1 Test functions * 6.2 Asymptotics of the CV estimator * 6.3 Comparison of CV and ML estimators * 7 Conclusion and future work * 8 Proofs * 8.1 Explicit expressions for the CV and ML estimators * 8.2 Proofs for Section 4.1 * 8.3 Proofs for Section 4.2 * 8.4 Proofs for Section 5 * A Connection between the ML and CV estimators * B Further discussion on Theorem 10 ## 1 Introduction Gaussian process (GP) regression (or kriging) is a Bayesian nonparametric method for regression and interpolation that has been extensively studied in statistics and machine learning (O'Hagan, 1978; Stein, 1999; Rasmussen and Williams, 2006). Its key property is that it enables uncertainty quantification of estimated function values in a principled manner, which is crucial for applications involving decision-making, safety concerns, and scientific discovery. As such, GP regression has been a core building block of more applied algorithms, including Bayesian optimisation (Jones et al., 1998; Shahriari et al., 2015; Garnett, 2023), probabilistic numerical computation (Hennig et al., 2015; Cockayne et al., 2019; Hennig et al., 2022), and calibration and emulation of computer models (Sacks et al., 1989; Kennedy and O'Hagan, 2001; O'Hagan, 2006; Beck and Guillas, 2016), to name just a few. GP regression estimates an unknown function \(f\) from its observations as follows. One first defines a _prior distribution_ for \(f\) as a GP by specifying its _covariance kernel_ (and mean function). Provided \(N\) observations about \(f\), one then derives the _posterior distribution_ of \(f\), which is another GP with mean function \(m_{N}\) and covariance kernel \(k_{N}\). One can then predict the function value \(f(x)\) at any input \(x\) by the posterior mean \(m_{N}(x)\) and quantify its uncertainty using the posterior standard deviation \(\sqrt{k_{N}(x)}\coloneqq\sqrt{k_{N}(x,x)}\). Specifically, one can construct a _credible interval_ of \(f(x)\) as the interval \([m_{N}(x)-\alpha\sqrt{k_{N}(x)},m_{N}(x)+\alpha\sqrt{k_{N}(x)}]\) for a constant \(\alpha>0\) (for example, \(\alpha\approx 1.96\) leads to the 95% credible interval). 
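As a concrete illustration of these quantities (a minimal sketch of our own, assuming the noiseless setting and the Brownian motion kernel used later in the paper; it is not the authors' code), the posterior mean, posterior variance, and 95% credible interval can be computed as follows:

```python
import numpy as np

def brownian_kernel(x, y):
    """Brownian motion kernel k(x, x') = min(x, x') evaluated on two point sets."""
    return np.minimum(x[:, None], y[None, :])

def gp_posterior(x_train, f_train, x_test, kernel=brownian_kernel):
    """Posterior mean m_N and variance k_N of noiseless GP regression with zero prior mean."""
    K = kernel(x_train, x_train)
    k_star = kernel(x_test, x_train)                       # shape (n_test, N)
    mean = k_star @ np.linalg.solve(K, f_train)            # m_N(x)
    quad = np.einsum("ij,ij->i", k_star, np.linalg.solve(K, k_star.T).T)
    var = kernel(x_test, x_test).diagonal() - quad         # k_N(x)
    return mean, np.clip(var, 0.0, None)                   # clip guards against round-off

x_train = np.linspace(0.1, 1.0, 10)                        # positive inputs keep the Gram matrix invertible
f_train = np.sin(3.0 * x_train)                            # stand-in for the unknown function f
x_test = np.linspace(0.0, 1.0, 200)
mean, var = gp_posterior(x_train, f_train, x_test)
lower = mean - 1.96 * np.sqrt(var)                         # 95% credible interval
upper = mean + 1.96 * np.sqrt(var)
```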
Such uncertainty estimates constitute key ingredients in the above applications of GP regression. For GP uncertainty estimates to be reliable, the posterior standard deviation \(\sqrt{k_{N}(x)}\) should, ideally, decay at the _same_ rate as the prediction error \(|m_{N}(x)-f(x)|\) decreases, with the increase of sample size \(N\). Otherwise, GP uncertainty estimates are either asymptotically _overconfident_ or _underconfident_. For example, if \(\sqrt{k_{N}(x)}\) goes to 0 faster than the error \(|m_{N}(x)-f(x)|\), then the credible interval \([m_{N}(x)-\alpha\sqrt{k_{N}(x)},m_{N}(x)+\alpha\sqrt{k_{N}(x)}]\) will _not_ contain the true value \(f(x)\) as \(N\) increases for _any_ fixed constant \(\alpha>0\) (asymptotically overconfident). If \(\sqrt{k_{N}(x)}\) goes to 0 slower than the error \(|m_{N}(x)-f(x)|\), then the confidence interval \([m_{N}(x)-\alpha\sqrt{k_{N}(x)},m_{N}(x)+\alpha\sqrt{k_{N}(x)}]\) will get larger than the error \(|m_{N}(x)-f(x)|\) as \(N\) increases (asymptotically underconfident). Both of these cases are not desirable in practice, as GP credible intervals will not be accurate estimates of prediction errors. Unfortunately, in general, the posterior standard deviation \(\sqrt{k_{N}(x)}\) does _not_ decay at the same rate as the prediction error \(|f(x)-m_{N}(x)|\), because, as is well-known, \(\sqrt{k_{N}(x)}\) does _not_ depend on the true function \(f\); see (3b) in Section 2.1. Exceptionally, if the function \(f\) is a sample path of the GP prior (the well-specified case), GP uncertainty estimates can be well-calibrated. However, in general, the unknown \(f\) is not exactly a sample path of the GP prior (the misspecified case), and the posterior standard deviation \(\sqrt{k_{N}(x)}\) does not scale with the prediction error \(|f(x)-m_{N}(x)|\). Figures 1 and 2 (the left panels) show examples where the true function \(f\) is not a sample of the GP prior and where the GP uncertainty estimates are not well-calibrated. Figure 1: GP interpolation of a fractional Brownian motion with the Hurst parameter \(H=0.2\) (smoothness \(l+\alpha=0.2\)) using the Brownian motion kernel (2) with three different scale parameters: \(\sigma^{2}=1\) (left), \(\sigma^{2}=\hat{\sigma}_{\text{CV}}^{2}=4.752\) given by the LOO-CV estimator (middle) and \(\sigma^{2}=\hat{\sigma}_{\text{ML}}^{2}=3.729\) obtained with the ML estimator (right). In each figure, the red trajectory represents the path of the fractional Brownian motion, the purple circles the training data, the blue curve the posterior mean \(m_{N}(x)\) and the green shade the 95 % credible interval \([m_{N}(x)-1.96\sigma\sqrt{k_{N}(x)},m_{N}(x)+1.96\sigma\sqrt{k_{N}(x)}]\). Figure 2: GP interpolation of an integrated fractional Brownian motion with the Hurst parameter \(H=0.5\) (smoothness \(l+\alpha=1.5\)) using the Brownian motion kernel (2) with three different scale parameters: \(\sigma^{2}=1\) (left), \(\sigma^{2}=\hat{\sigma}_{\text{CV}}^{2}=0.019\) given by the LOO-CV estimator (middle) and \(\sigma^{2}=\hat{\sigma}_{\text{ML}}^{2}=0.067\) obtained with the ML estimator (right). For the explanation of the figures, see the caption of Figure 1. ### Scale Parameter Estimation To obtain sensible uncertainty estimates, one thus needs to adapt the posterior standard deviation \(\sqrt{k_{N}(x)}\) to the function \(f\). 
One simple way to achieve this is to introduce the _scale parameter_ \(\sigma^{2}>0\) and parametrize the kernel as \[k_{\sigma}(x,x^{\prime})\coloneqq\sigma^{2}k(x,x^{\prime}), \tag{1}\] where \(k\) is the original kernel. GP regression with this kernel \(k_{\sigma}\) yields the posterior mean function \(m_{N}\), which is not influenced by \(\sigma^{2}\), and the posterior covariance function \(\sigma^{2}k\), which is scaled by \(\sigma^{2}\). If one estimates \(\sigma^{2}\) from observed data of \(f\), the estimate \(\hat{\sigma}^{2}\) depends on \(f\), and so does the resulting posterior standard deviation \(\hat{\sigma}\sqrt{k_{N}(x)}\). One approach to scale-parameter estimation is the method of _maximum likelihood (ML)_, which optimizes \(\sigma^{2}\) to maximize the marginal likelihood of the observed data (Rasmussen and Williams, 2006, Section 5.4). The ML approach is popular for general hyperparameter optimization in GP regression. Another less common way in the GP literature is _cross-validation (CV)_, which optimizes \(\sigma^{2}\) to maximize the average predictive likelihood of held-out data (Sundararajan and Keerthi, 2001). For either approach, the optimized scale parameter can be obtained analytically in computational complexity \(\mathcal{O}(N^{3})\). Figures 1 and 2 (middle and right panels) demonstrate that both approaches yield uncertainty estimates better calibrated than the original estimates without the scale parameter. Do these scale parameter estimators lead to asymptotically well-calibrated uncertainty estimates? To answer this question, one needs to understand their convergence properties as the sample size \(N\) increases. Most existing theoretical works focus on the well-specified case where there is a "true" scale parameter \(\sigma_{0}^{2}\) such that the unknown \(f\) is a GP with the covariance kernel \(\sigma_{0}^{2}k\). In this case, both the ML and CV estimators have been shown to be consistent in estimating the true \(\sigma_{0}^{2}\) (e.g., Ying, 1991; Zhang, 2004; Bachoc et al., 2017, 2020). However, in general, no "true" scale parameter \(\sigma_{0}^{2}\) exists such that the unknown \(f\) is a GP with the covariance \(\sigma_{0}^{2}k\). In such misspecified cases, not much is known about the convergence properties of both estimators. Karvonen et al. (2020) analyze the ML estimator for the scale parameter, assuming that \(f\) is a deterministic function. They derive upper bounds (and lower bounds in some cases) for the ML estimator; see Wang (2021) for closely related work. To our knowledge, no theoretical work exists for the CV estimator for the scale parameter in the misspecified case. Bachoc (2013) and Petit et al. (2022) empirically compare the ML and CV estimators under different model misspecification settings. We will review other related works in Section 1.3. ### Contributions This work studies the convergence properties of the ML and CV estimators, \(\hat{\sigma}_{\text{ML}}^{2}\) and \(\hat{\sigma}_{\text{CV}}^{2}\), of the scale parameter \(\sigma^{2}\) in GP regression, to understand whether they lead to asymptotically well-calibrated uncertainty estimates. In particular, we provide the first theoretical analysis of the CV estimator \(\hat{\sigma}_{\text{CV}}^{2}\) when the GP prior is misspecified, and also establish novel results for the ML estimator \(\hat{\sigma}_{\text{ML}}^{2}\). To facilitate the analysis, we focus on the following simplified setting. For a constant \(T>0\), let \([0,T]\subset\mathbb{R}\) be the input domain.
Let \(k\) in (1) be the Brownian motion kernel \[k(x,x^{\prime})=\min(x,x^{\prime})\quad\text{ for }\quad x,x^{\prime}\in[0,T]. \tag{2}\] With this choice, a sample path of the GP prior has roughly a smoothness of \(1/2\) (in terms of the differentiability; we will be more rigorous in later sections). We assume that the true unknown function \(f\) has the smoothness \(l+\alpha\), where \(l\in\{0\}\cup\mathbb{N}\) and \(0<\alpha\leq 1\). The GP prior is well-specified if \(l=0\) and \(\alpha=1/2\). Other settings of \(l\) and \(\alpha\) represent misspecified cases. If \(l=0\) and \(\alpha<1/2\), the true function \(f\) is rougher than the GP prior (Figure 1); if \(l=0\) and \(\alpha>1/2\) or \(l\geq 1\), the function \(f\) is smoother than the GP prior. We focus on the noise-free setting where one observes the function values \(f(x_{1}),\ldots,f(x_{N})\) at input points \(x_{1},\ldots,x_{N}\in[0,T]\). Our main results are new upper and lower bounds for the asymptotic rates of the CV estimator \(\hat{\sigma}_{\mathrm{CV}}^{2}\) and the ML estimator \(\hat{\sigma}_{\mathrm{ML}}^{2}\) as \(N\to\infty\) (Section 4). The results suggest that the CV estimator can yield asymptotically well-calibrated uncertainty estimates for a broader class of functions \(f\) than the ML estimator; thus, the former has an advantage over the latter (Section 5). More specifically, asymptotically well-calibrated uncertainty estimates may be obtained with the CV estimator for the range \(0<l+\alpha\leq 3/2\) of smoothness of the true function, while this range becomes \(0<l+\alpha\leq 1\) with the ML estimator and is narrower. This finding is consistent with the example in Figure 2, where the true function has smoothness \(l+\alpha=3/2\) and is thus smoother than the GP prior. The uncertainty estimates of the CV estimator appear to be well-calibrated, while those of the ML estimator are unnecessarily wide, failing to adapt to the smoothness. This paper is structured as follows. After reviewing related works in Section 1.3, we introduce the necessary background on the ML and CV approaches to scale parameter estimation for GP regression in Section 2. We describe the setting of the theoretical analysis in Section 3, present our main results in Section 4, and discuss its consequences on uncertainty quantification in Section 5. We report simulation experiments in Section 6, conclude in Section 7, and present proofs in Section 8. ### Related work We review here related theoretical works on hyper-parameter selection in GP regression. We categorize them into two groups based on how the true unknown function \(f\) is modelled: random and deterministic. Random setting.One group of works models the ground truth \(f\) as a random function, specifically as a GP. Most of these works model \(f\) as a GP with a Matern-type covariance kernel and analyze the ML estimator. Under the assumption that the GP prior is correctly specified, asymptotic properties of the ML estimator for the scale parameter and other parameters have been studied (Stein, 1990; Ying, 1991, 1993; Loh and Kam, 2000; Zhang, 2004; Loh, 2005; Du et al., 2009; Anderes, 2010; Wang and Loh, 2011; Kaufman and Shaby, 2013; Bevilacqua et al., 2019). Recently Loh et al. (2021) and Loh and Sun (2023) have constructed consistent estimators of various parameters for many commonly used kernels, including Materns. Chen et al. (2021) and Petit (2023) consider a periodic version of Matern GPs, and show the consistency of the ML estimator for its smoothness parameter. 
To our knowledge, no theoretical result exists for the ML estimation of the scale parameter in the misspecified random setting, which we provide in Section 4.2 (Theorem 12). In contrast, few theoretical works exist for the CV estimator. Bachoc et al. (2017) study the leave-one-out (LOO) CV estimator for the Matern-1/2 model (or the Laplace kernel) with one-dimensional inputs, in which case the GP prior is an Ornstein-Uhlenbeck (OU) process. Assuming the well-specified case where the true function is also an OU process, they prove the consistency and asymptotic normality of the CV estimator for the microergodic parameter in the fixed-domain asymptotic setting. Bachoc (2018) and Bachoc et al. (2020) discuss another CV estimator that uses the mean square prediction error as the scoring criterion of CV (thus different from the one discussed here) in the increasing-domain asymptotics. Bachoc (2013) and Petit et al. (2022) perform empirical comparisons of the ML and CV estimators under different model misspecification settings. Thus, to our knowledge, no theoretical result exists for the CV estimator of the scale parameter in the random misspecified setting, which we provide in Section 4.2 (Theorem 11). Deterministic setting.Another line of research assumes that the ground truth \(f\) is a fixed function belonging to a specific function space (Stein, 1993). Xu and Stein (2017) assumed that the ground truth \(f\) is a monomial on \([0,1]\) and proved some asymptotic results for the ML estimator when the kernel \(k\) is Gaussian. As mentioned earlier, Karvonen et al. (2020) proved asymptotic upper (and, in certain cases, also lower) bounds on the ML estimator \(\hat{\sigma}^{2}_{\text{ML}}\) of the scale parameter \(\sigma^{2}\); see Wang (2021) for a closely related work. Karvonen (2023) has studied the ML and LOO-CV estimators for the smoothness parameter in the Matern model; see also Petit (2023). Ben Salem et al. (2019) and Karvonen and Oates (2023) proved non-asymptotic results on the length-scale parameter in the Matern and related models. Thus, there has been no work for the CV estimator of the scale parameter \(\sigma^{2}\) in the deterministic setting, which we provide in Section 4.1 (Theorem 7); we also prove a corresponding result for the ML estimator (Theorem 8). ## 2 Background This section briefly reviews GP regression and the ML and LOO-CV estimators of kernel parameters. ### Gaussian process regression We first explain GP regression (or interpolation). Let \(\Omega\) be a set, and \(f\colon\Omega\to\mathbb{R}\) be an unknown function of interest. Suppose one observes \(N\) function values \(f(x_{1}),\ldots,f(x_{N})\) at pairwise distinct input points \(x_{1},\ldots,x_{N}\in\Omega\). The task here is to estimate \(f\) based on the data \((\mathbf{x},f(\mathbf{x}))\), where \(f(\mathbf{x})\coloneqq[f(x_{1}),\ldots,f(x_{N})]^{\top}\in\mathbb{R}^{N}\) and \(\mathbf{x}\coloneqq[x_{1},\ldots,x_{N}]^{\top}\in\Omega^{N}\). In GP regression, one first defines a prior distribution of the unknown \(f\) as a GP by specifying its mean function \(m\colon\Omega\to\mathbb{R}\) and covariance function (kernel) \(k\colon\Omega\times\Omega\to\mathbb{R}\); we may write \(f\sim\mathcal{GP}(m,k)\) to indicate this. 
Conditioned on the data \((\mathbf{x},f(\mathbf{x}))\), the posterior distribution of \(f\) is again a GP whose mean function \(m_{N}:\Omega\to\mathbb{R}\) and covariance function \(k_{N}:\Omega\times\Omega\to\mathbb{R}\) are given by \[m_{N}(x) \coloneqq m(x)+k(x,\mathbf{x})^{\top}k(\mathbf{x},\mathbf{x})^{ -1}\left(f(\mathbf{x})-m(\mathbf{x})\right),\quad x\in\Omega, \tag{3a}\] \[k_{N}(x,x^{\prime}) \coloneqq k(x,x^{\prime})-k(x,\mathbf{x})^{\top}k(\mathbf{x}, \mathbf{x})^{-1}k(x^{\prime},\mathbf{x}),\quad x,x^{\prime}\in\Omega, \tag{3b}\] where \(m(\mathbf{x})\coloneqq[m(x_{1}),\ldots,m(x_{N})]^{\top}\in\mathbb{R}^{N}\) and \(k(x,\mathbf{x})\coloneqq[k(x,x_{1}),\ldots,k(x,x_{N})]^{\top}\in\mathbb{R}^{N}\), and \[k(\mathbf{x},\mathbf{x})\coloneqq\begin{bmatrix}k(x_{1},x_{1})&\ldots&k(x_{1},x_{N})\\ \vdots&\ddots&\vdots\\ k(x_{N},x_{1})&\ldots&k(x_{N},x_{N})\end{bmatrix}\in\mathbb{R}^{N\times N} \tag{4}\] is the Gram matrix. Throughout this paper, we assume that the points \(\mathbf{x}\) are such that the Gram matrix is non-singular. For notational simplicity, we may write the posterior variance as \[k_{N}(x)\coloneqq k_{N}(x,x),\quad x\in\Omega.\] For simplicity and as commonly done, we henceforth assume that the prior mean function \(m\) is the zero function, \(m(\cdot)\equiv 0\). While the GP prior assumes that the unknown function \(f\) is a sample path of the GP with the specified kernel \(k\), this assumption does not hold in general, i.e., model misspecification occurs. In this case, as described in Figures 1 and 2 (left), the posterior standard deviation \(\sqrt{k_{N}(x)}\), which is supposed to quantify the uncertainty of the unknown function value \(f(x)\), may not be well calibrated with the prediction error \(|m_{N}(x)-f(x)|\). One could address this issue by selecting the kernel \(k\) or its parameters from the data \((\mathbf{x},f(\mathbf{x}))\); we will explain this topic next. ### Kernel parameter estimation The selection of the kernel \(k\) is typically performed by defining a parametric family of kernels \(\{k_{\theta}\}_{\theta\in\Theta}\) and selecting the parameter \(\theta\) based on an appropriate criterion. Here \(\Theta\) is a parameter set, and \(k_{\theta}:\Omega\times\Omega\to\mathbb{R}\) for each \(\theta\in\Theta\) is a kernel. Maximum likelihood (ML) estimation.The ML estimator maximises the log-likelihood of the data \((\mathbf{x},f(\mathbf{x}))\) given that \(f\) is a GP with kernel \(k_{\theta}\): \[\log p(f(\mathbf{x})\,|\,\mathbf{x},\theta)=-\frac{1}{2}\bigg{(}f(\mathbf{x})^ {\top}k_{\theta}(\mathbf{x},\mathbf{x})^{-1}f(\mathbf{x})+\log\det k_{\theta}( \mathbf{x},\mathbf{x})+n\log(2\pi)\bigg{)},\] where \(\det k_{\theta}(\mathbf{x},\mathbf{x})\) is the determinant of the Gram matrix \(k_{\theta}(\mathbf{x},\mathbf{x})\) (see, e.g., Rasmussen and Williams 2006, Section 5.4.1). With the additive terms that do not depend on \(\theta\) removed from \(\log p(f(\mathbf{x})\,|\,\mathbf{x},\theta)\), this is equivalent to minimising the loss function \[\mathcal{L}_{\mathrm{ML}}(\theta):=f(\mathbf{x})^{\top}k_{\theta}(\mathbf{x}, \mathbf{x})^{-1}f(\mathbf{x})+\log\det k_{\theta}(\mathbf{x},\mathbf{x}). 
\tag{5}\] In general, \(\mathcal{L}_{\mathrm{ML}}(\theta)\) may not have a unique minimiser, so that any ML estimator satisfies \[\hat{\theta}_{\mathrm{ML}}\in\operatorname*{arg\,min}_{\theta\in\Theta}\mathcal{L}_{\mathrm{ML}}(\theta).\] Leave-one-out cross-validation (LOO-CV). The LOO-CV estimator (e.g., Rasmussen and Williams, 2006, Section 5.4.2), which we may simply call the CV estimator, is an alternative to the ML estimator. It maximizes the average log-predictive likelihood \[\sum_{n=1}^{N}\log p(f(x_{n})\,|\,x_{n},\mathbf{x}_{\setminus n},f(\mathbf{x}_{\setminus n}),\theta) \tag{6}\] of the held-out data \((x_{n},f(x_{n}))\), where \(n=1,\ldots,N\), based on the data \((\mathbf{x}_{\setminus n},f(\mathbf{x}_{\setminus n}))\), where \(\mathbf{x}_{\setminus n}\) denotes the input points with \(x_{n}\) removed: \[\mathbf{x}_{\setminus n}=[x_{1},\ldots,x_{n-1},x_{n+1},\ldots,x_{N}]^{\top}\in\Omega^{N-1}.\] Let \(m_{\theta,\setminus n}\) and \(k_{\theta,\setminus n}\) denote the posterior mean and covariance functions of GP regression with the kernel \(k_{\theta}\) and the data \((\mathbf{x}_{\setminus n},f(\mathbf{x}_{\setminus n}))\). Because each \(p(f(x_{n})\,|\,x_{n},\mathbf{x}_{\setminus n},f(\mathbf{x}_{\setminus n}),\theta)\) is the Gaussian density of \(f(x_{n})\) with mean \(m_{\theta,\setminus n}(x_{n})\) and variance \(k_{\theta,\setminus n}(x_{n}):=k_{\theta,\setminus n}(x_{n},x_{n})\), removing additive terms that do not depend on \(\theta\) and reversing the sign in (6) yields the following CV objective function: \[\mathcal{L}_{\mathrm{CV}}(\theta)=\sum_{n=1}^{N}\frac{\big{[}f(x_{n})-m_{\theta,\setminus n}(x_{n})\big{]}^{2}}{k_{\theta,\setminus n}(x_{n})}+\log k_{\theta,\setminus n}(x_{n}). \tag{7}\] The CV estimator is then defined as its minimizer: \[\hat{\theta}_{\mathrm{CV}}\in\operatorname*{arg\,min}_{\theta\in\Theta}\mathcal{L}_{\mathrm{CV}}(\theta).\] As for the ML estimator, the CV objective function and its first-order gradients can be computed in closed form in \(\mathcal{O}(N^{3})\) time (Sundararajan and Keerthi, 2001). Scale parameter estimation. As explained in Section 1, we consider the family of kernels \(k_{\sigma}(x,x^{\prime})\coloneqq\sigma^{2}k(x,x^{\prime})\) parametrized with the scale parameter \(\sigma^{2}>0\), where \(k\) is a fixed kernel, and study the estimation of \(\sigma^{2}\) using the CV and ML estimators, denoted as \(\hat{\sigma}_{\mathrm{CV}}^{2}\) and \(\hat{\sigma}_{\mathrm{ML}}^{2}\), respectively. In this case, both \(\hat{\sigma}_{\mathrm{ML}}^{2}\) and \(\hat{\sigma}_{\mathrm{CV}}^{2}\) can be derived in closed form by differentiating (5) and (7), respectively. Let \(m_{n-1}\) and \(k_{n-1}\) be the posterior mean and variance functions of GP regression using the kernel \(k\) and the first \(n-1\) training observations \((x_{1},f(x_{1})),\ldots,(x_{n-1},f(x_{n-1}))\). Let \(m_{0}(\cdot)\coloneqq 0\) and \(k_{0}(x,x)\coloneqq k(x,x)\). Then the ML estimator is given by \[\hat{\sigma}_{\mathrm{ML}}^{2}=\frac{f(\mathbf{x})^{\top}k(\mathbf{x},\mathbf{x})^{-1}f(\mathbf{x})}{N}=\frac{1}{N}\sum_{n=1}^{N}\frac{[f(x_{n})-m_{n-1}(x_{n})]^{2}}{k_{n-1}(x_{n})}. \tag{8}\] This expression of the ML estimator is relatively well known; see e.g. Section 4.2.2 in Xu and Stein (2017) or Proposition 7.5 in Karvonen and Oates (2023).
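For illustration, the closed-form expression (8) can be evaluated either directly as the quadratic form or via the sequential decomposition on its right-hand side; the following minimal sketch (our own, using a toy Gram matrix, not the authors' code) shows that the two forms agree:

```python
import numpy as np

def sigma2_ml(K, f):
    """ML scale estimate (8), direct form: f^T K^{-1} f / N."""
    return float(f @ np.linalg.solve(K, f)) / len(f)

def sigma2_ml_sequential(K, f):
    """ML scale estimate (8), sequential form: average of squared one-step-ahead
    prediction errors [f(x_n) - m_{n-1}(x_n)]^2 / k_{n-1}(x_n), with m_0 = 0 and k_0(x) = k(x, x)."""
    total = 0.0
    for n in range(len(f)):
        if n == 0:
            m_pred, k_pred = 0.0, K[0, 0]
        else:
            sol = np.linalg.solve(K[:n, :n], K[:n, n])   # K_{1:n}^{-1} k(x_{1:n}, x_{n+1})
            m_pred = sol @ f[:n]
            k_pred = K[n, n] - sol @ K[:n, n]
        total += (f[n] - m_pred) ** 2 / k_pred
    return total / len(f)

# Toy check with the Brownian motion kernel k(x, x') = min(x, x'): both forms agree.
x = np.array([0.2, 0.5, 0.9])
f = np.array([0.1, -0.3, 0.4])
K = np.minimum(x[:, None], x[None, :])
print(sigma2_ml(K, f), sigma2_ml_sequential(K, f))
```

The CV estimator in (9) below admits an analogous leave-one-out implementation.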
On the other hand, the CV estimator \(\hat{\sigma}_{\mathrm{CV}}^{2}\) is given by \[\hat{\sigma}_{\mathrm{CV}}^{2}=\frac{1}{N}\sum_{n=1}^{N}\frac{\big{[}f(x_{n})-m_{\setminus n}(x_{n})\big{]}^{2}}{k_{\setminus n}(x_{n})}, \tag{9}\] where \(m_{\setminus n}\) and \(k_{\setminus n}\) are the posterior mean and covariance functions of GP regression using the kernel \(k\) and data \((\mathbf{x}_{\setminus n},f(\mathbf{x}_{\setminus n}))\) with \((x_{n},f(x_{n}))\) removed: \[m_{\setminus n}(x)=k(\mathbf{x}_{\setminus n},x)^{\top}k(\mathbf{x}_{\setminus n},\mathbf{x}_{\setminus n})^{-1}f(\mathbf{x}_{\setminus n}),\] \[k_{\setminus n}(x,x^{\prime})=k(x,x^{\prime})-k(\mathbf{x}_{\setminus n},x)^{\top}k(\mathbf{x}_{\setminus n},\mathbf{x}_{\setminus n})^{-1}k(\mathbf{x}_{\setminus n},x^{\prime}).\] Notice the similarity between the two expressions (8) and (9). The difference is that the ML estimator uses \(k_{n-1}\) and \(m_{n-1}\), which are based on the first \(n-1\) training observations, while the CV estimator uses \(k_{\setminus n}\) and \(m_{\setminus n}\) obtained with \(N-1\) observations, for each \(n=1,\ldots,N\). Therefore, the CV estimator uses all the data points more evenly than the ML estimator. This difference may be the source of the difference in their asymptotic properties established later. **Remark 1**.: _As suggested by the similarity between (8) and (9), there is a deeper connection between ML and CV estimators in general. For instance, Fong and Holmes (2020, Proposition 2) have shown that the Bayesian marginal likelihood equals the average of leave-\(p\)-out CV scores. We prove this result for the special case of scale parameter estimation in GP regression in Appendix A._ ## 3 Setting This section describes the settings and tools for our theoretical analysis: the Brownian motion kernel in Section 3.1; sequences of partitions in Section 3.2; the Holder class of functions in Section 3.3; fractional Brownian motion in Section 3.4; and functions of finite quadratic variation in Section 3.5. ### Brownian motion kernel As explained in Section 1, for the kernel \(k\) we focus on the Brownian motion kernel on the domain \(\Omega=[0,T]\) for some \(T>0\): \[k(x,x^{\prime})=\min(x,x^{\prime}).\] The resulting kernel \(k_{\sigma}(x,x^{\prime})=\sigma^{2}k(x,x^{\prime})\) induces a Brownian motion prior for GP regression. We assume that the input points \(\mathbf{x}=[x_{1},\ldots,x_{N}]^{\top}\) for GP regression are positive and ordered: \[0<x_{1}<x_{2}<\cdots<x_{N}\leq T.\] The positivity ensures that the Gram matrix (4) is non-singular. As is well known and can be seen in Figures 1 and 2, the posterior mean function \(m_{N}\) in (3) using the Brownian motion kernel becomes the _piecewise linear interpolant_ of the observations \((\mathbf{x},f(\mathbf{x}))\). See (24) and (25) in Section 8.1 for the explicit expressions of the posterior mean and covariance functions. ### Sequences of partitions For our asymptotic analysis, we assume that the input points \(x_{1},\ldots,x_{N}\in[0,T]\) cover the domain \([0,T]\) more densely as the sample size \(N\) increases. To make the dependence on the size \(N\) explicit, we write \(\mathcal{P}_{N}\coloneqq(x_{N,n})_{n=1}^{N}\subset[0,T]\) as a point set of size \(N\), and assume that they are ordered as \[0\eqqcolon x_{N,0}<x_{N,1}<x_{N,2}<\cdots<x_{N,N}=T.\] Then \(\mathcal{P}_{N}\) defines a partition of \([0,T]\) into \(N\) subintervals \([x_{N,n},x_{N,n+1}]\).
When there is no risk of confusion, we may write \(x_{n}\) instead of \(x_{N,n}\) for simplicity. Note that we do _not_ require the nesting \(\mathcal{P}_{N}\subset\mathcal{P}_{N+1}\) of partitions. We define the _mesh_ of partition \(\mathcal{P}_{N}\) as the longest subinterval in the partition: \[\|\mathcal{P}_{N}\|\coloneqq\max_{n\in\{0,1,\ldots,N-1\}}(x_{N,n+1}-x_{N,n}).\] The decay rate of the mesh \(\|\mathcal{P}_{N}\|\) quantifies how quickly the points in \(\mathcal{P}_{N}\) cover the interval \([0,T]\). In particular, the decay rate \(\|\mathcal{P}_{N}\|=\mathcal{O}(N^{-1})\) implies that the length of every subinterval is asymptotically upper bounded by \(1/N\). At the same time, if each subinterval is asymptotically lower bounded by \(1/N\), we call the sequence of partitions \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) _quasi-uniform_, as more formally defined as follows. **Definition 2**.: For each \(N\in\mathbb{N}\), let \(\mathcal{P}_{N}\coloneqq(x_{N,n})_{n=1}^{N}\subset[0,T]\). Define \(\Delta x_{N,n}\coloneqq x_{N,n+1}-x_{N,n}\). Then the sequence of partitions \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) is called _quasi-uniform_ if there exists a constant \(1\leq C_{\mathrm{qu}}<\infty\) such that \[\sup_{N\in\mathbb{N}}\frac{\max_{n}\Delta x_{N,n}}{\min_{n}\Delta x_{N,n}}=C_{\mathrm{qu}}.\] The quasi-uniformity, as defined here, requires that the ratio of the longest subinterval, \(\max_{n}\Delta x_{N,n}\), to the shortest one, \(\min_{n}\Delta x_{N,n}\), is upper-bounded by \(C_{\mathrm{qu}}\) for all \(N\in\mathbb{N}\). Quasi-uniformity implies that all subintervals are asymptotically upper and lower bounded by \(1/N\), as we have, for all \(N\in\mathbb{N}\) and \(n\in\{1,\ldots,N\}\), \[\frac{TN^{-1}}{C_{\mathrm{qu}}}\leq\min_{n}\Delta x_{N,n}\leq\Delta x_{N,n}\leq\max_{n}\Delta x_{N,n}\leq TC_{\mathrm{qu}}N^{-1}. \tag{10}\] For example, equally-spaced points (or uniform grids) satisfy the quasi-uniformity with \(C_{\mathrm{qu}}=1\). ### Holder spaces Section 4.1 studies the deterministic setting where the true unknown function \(f\) is assumed to belong to a Holder space of functions. To define this space, we first need the following definition. **Definition 3**.: For \(0<\alpha\leq 1\), a function \(f:[0,T]\to\mathbb{R}\) is \(\alpha\)_-Holder continuous_ if there exists a constant \(L\geq 0\) such that, for all \(x,x^{\prime}\in[0,T]\), \[|f(x)-f(x^{\prime})|\leq L|x-x^{\prime}|^{\alpha}.\] Any such constant \(L\) is called a _Holder constant_ of \(f\). For \(l\in\mathbb{N}\cup\{0\}\), denote by \(C^{l}([0,T])\) the space of functions \(f\colon[0,T]\to\mathbb{R}\) such that the \(l^{\mathrm{th}}\) derivative \(f^{(l)}\) exists and is continuous. For \(l=0\), this is the space of continuous functions. Holder spaces are now defined as follows. **Definition 4**.: Let \(l\in\mathbb{N}\cup\{0\}\) and \(0<\alpha\leq 1\). The _Holder space_ \(C^{l,\alpha}([0,T])\) consists of functions \(f\in C^{l}([0,T])\) whose \(l^{\mathrm{th}}\) derivative \(f^{(l)}\) is \(\alpha\)-Holder continuous. Intuitively, \(l+\alpha\) represents the smoothness of least-smooth functions in \(C^{l,\alpha}([0,T])\).
It is well known that a sample path of Brownian motion is almost surely \(\alpha\)-Holder continuous if and only if \(\alpha<1/2\) (e.g., Morters and Peres, 2010, Corollary 1.20), and thus it belongs to the Holder space \(C^{l,\alpha}([0,T])\) with \(l=0\) and \(\alpha=1/2-\varepsilon\) almost surely for arbitrarily small \(\varepsilon>0\); in this sense, the smoothness of a Brownian motion is \(1/2\). As such, as is well known (e.g., Morters and Peres, 2010, Theorem 1.27), a Brownian motion is almost nowhere differentiable almost surely. Note that we have the following strict inclusions:1 Footnote 1: These inclusions follow from the following facts: By the definition of Hölder continuity, an \(\alpha_{1}\)-Hölder continuous function is \(\alpha_{2}\)-Hölder continuous if \(\alpha_{1}>\alpha_{2}\); continuously differentiable functions are \(\alpha\)-Holder continuous for any \(0<\alpha\leq 1\); not all Lipschitz functions are differentiable. * \(C^{l_{1},\alpha_{1}}([0,T])\subsetneq C^{l_{2},\alpha_{2}}([0,T])\) if (a) \(l_{1}>l_{2}\) or (b) \(l_{1}=l_{2}\) and \(\alpha_{1}>\alpha_{2}\), * \(C^{l+1}([0,T])\subsetneq C^{l,1}([0,T])\). ### Fractional Brownian motion Section 4.2 considers the random setting where \(f\) is a _fractional (or integrated fractional) Brownian motion_ (e.g., Mandelbrot, 1982, Chapter IX). Examples of these processes can be seen in Figures 1, 2, 5 and 6. A fractional Brownian motion on \([0,T]\) with Hurst parameter \(0<H<1\) is a Gaussian process whose covariance kernel is given by \[k_{0,H}(x,x^{\prime})=\big{(}\,|x|^{2H}+|x^{\prime}|^{2H}-|x-x^{\prime}|^{2H}\big{)}/2. \tag{11}\] Note that if \(H=1/2\), this is the Brownian motion kernel: \(k_{0,1/2}(x,x^{\prime})=\min(x,x^{\prime})\). The Hurst parameter \(H\) quantifies the smoothness of the fractional Brownian motion: for any \(0<H<1\), if \(f_{\mathrm{FBM}}\sim\mathcal{GP}(0,k_{0,H})\), we have \(f_{\mathrm{FBM}}\in C^{0,H-\varepsilon}([0,T])\) almost surely for arbitrarily small \(\varepsilon>0\) (e.g., Nourdin, 2012, Proposition 1.6). An integrated fractional Brownian motion with Hurst parameter \(H\) is defined via the integration of a fractional Brownian motion with the same Hurst parameter: if \(f_{\mathrm{FBM}}\sim\mathcal{GP}(0,k_{0,H})\), then \[f_{\mathrm{iFBM}}(x)=\int_{0}^{x}f_{\mathrm{FBM}}(z)\,\mathrm{d}z,\quad x\in[0,T]\] is an integrated fractional Brownian motion with Hurst parameter \(H\). It is a zero-mean GP with the covariance kernel \[k_{1,H}(x,x^{\prime})=\int_{0}^{x}\int_{0}^{x^{\prime}}\big{(}|z|^{2H}+|z^{\prime}|^{2H}-|z-z^{\prime}|^{2H}\big{)}/2\,\mathrm{d}z\,\mathrm{d}z^{\prime}=\frac{1}{2(2H+1)}\bigg{(}x^{\prime}x^{2H+1}+x(x^{\prime})^{2H+1}-\frac{1}{2(H+1)}\big{[}x^{2H+2}+(x^{\prime})^{2H+2}-|x-x^{\prime}|^{2H+2}\big{]}\bigg{)}. \tag{12}\] Because differentiating an integrated fractional Brownian motion \(f_{\mathrm{iFBM}}\sim\mathcal{GP}(0,k_{1,H})\) yields a fractional Brownian motion \(f_{\mathrm{FBM}}\sim\mathcal{GP}(0,k_{0,H})\), a sample path of the former satisfies \(f_{\mathrm{iFBM}}\in C^{1,H-\varepsilon}([0,T])\) almost surely for arbitrarily small \(\varepsilon>0\); therefore the smoothness of \(f_{\mathrm{iFBM}}\) is \(1+H\). ### Functions of finite quadratic variation Some of our asymptotic results use the notion of functions of _finite quadratic variation_, defined below. **Definition 5**.: For each \(N\in\mathbb{N}\), let \(\mathcal{P}_{N}\coloneqq(x_{N,n})_{n=1}^{N}\subset[0,T]\), and suppose that \(\|\mathcal{P}_{N}\|\to 0\) as \(N\to\infty\).
Then a function \(f:[0,T]\to\mathbb{R}\) is defined to have _finite quadratic variation_ with respect to \(\mathcal{P}\coloneqq(\mathcal{P}_{N})_{N\in\mathbb{N}}\), if the limit \[V^{2}(f)\coloneqq\lim_{N\to\infty}\sum_{n=1}^{N-1}\big{[}f(x_{N,n+1})-f(x_{N,n})\big{]}^{2} \tag{13}\] exists and is finite. We write \(V^{2}(f,\mathcal{P})\) when it is necessary to indicate the sequence of partitions. Quadratic variation is defined for a specific sequence of partitions \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) and may take different values for different sequences of partitions (Morters and Peres, 2010, Remark 1.36). For conditions that guarantee the invariance of quadratic variation on the sequence of partitions, see, for instance, Cont and Bas (2023). Note also that the notion of quadratic variation differs from that of \(p\)-variation for \(p=2\), which is defined as the supremum over all possible sequences of partitions whose meshes tend to zero. If \(f\in C^{0,\alpha}([0,T])\) with \(\alpha>1/2\) and \(\|\mathcal{P}_{N}\|=\mathcal{O}(N^{-1})\) as \(N\to\infty\), then we have \(V^{2}(f)=0\), because in this case \[\sum_{n=1}^{N-1}\big{[}f(x_{N,n+1})-f(x_{N,n})\big{]}^{2}\leq NC^{2}\max_{n}(\Delta x_{N,n})^{2\alpha}=\mathcal{O}(N^{1-2\alpha})\to 0\] as \(N\to\infty\). Therefore, given the inclusion properties of Holder spaces (see Section 3.3), we arrive at the following standard proposition. **Proposition 6**.: _Suppose that the partitions \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) are such that \(\|\mathcal{P}_{N}\|=\mathcal{O}(N^{-1})\). If \(f\in C^{l,\alpha}([0,T])\) for \(l+\alpha>1/2\), then \(V^{2}(f)=0\)._ If the mesh tends to zero faster than \(1/\log N\), in that \(\|\mathcal{P}_{N}\|=o(1/\log N)\), then the quadratic variation of almost every sample path of the Brownian motion on the interval \([0,T]\) equals \(T\) (Dudley, 1973). This is of course true for partitions which have the faster decay \(\|\mathcal{P}_{N}\|=\mathcal{O}(N^{-1})\). ## 4 Main results This section presents our main results on the asymptotic properties of the CV and ML estimators, \(\hat{\sigma}^{2}_{\mathrm{CV}}\) and \(\hat{\sigma}^{2}_{\mathrm{ML}}\), for the scale parameter. Section 4.1 considers the deterministic setting where the true function \(f\) is fixed and assumed to belong to a Holder space. Section 4.2 studies the random setting where \(f\) is an (integrated) fractional Brownian motion. ### Deterministic setting We present our main results for the deterministic setting where the true function \(f\) is fixed and assumed to be in a Holder space \(C^{l,\alpha}([0,T])\). Theorem 7 below provides asymptotic upper bounds on the CV estimator \(\hat{\sigma}^{2}_{\mathrm{CV}}\) for different values of the smoothness parameters \(l\) and \(\alpha\) of the Holder space. **Theorem 7** (Rate of CV decay in Holder spaces).: _Suppose that \(f\) is an element of \(C^{l,\alpha}([0,T])\), with \(l\geq 0\) and \(0<\alpha\leq 1\) such that \(l+\alpha>1/2\), \(f(0)=0\), and the interval partitions \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) have bounded meshes \(\|\mathcal{P}_{N}\|=\mathcal{O}(N^{-1})\) as \(N\to\infty\).
Then_ \[\hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}\big{(}N^{1-\min\{2(l+\alpha),3\}}\big{)}=\begin{cases}\mathcal{O}\left(N^{1-2\alpha}\right)&\text{ if }\quad l=0\text{ and }\alpha>1/2,\\ \mathcal{O}\left(N^{-1-2\alpha}\right)&\text{ if }\quad l=1\text{ and }\alpha<1/2,\\ \mathcal{O}\left(N^{-2}\right)&\text{ if }\quad l=1\text{ and }\alpha\geq 1/2,\\ \mathcal{O}\left(N^{-2}\right)&\text{ if }\quad l\geq 2.\end{cases} \tag{14}\] Proof.: See Section 8.2. Theorem 8 below is a corresponding result for the ML estimator \(\hat{\sigma}^{2}_{\mathrm{ML}}\). Note that a similar result has been obtained by Karvonen et al. (2020, Proposition 4.5), where the function \(f\) is assumed to belong to a Sobolev space and the kernel is a Matern-type kernel. Theorem 8 is a version of this result where \(f\) is in a Holder space and the kernel is the Brownian motion kernel; we provide it for completeness and ease of comparison. **Theorem 8** (Rate of ML decay in Holder spaces).: _Suppose that \(f\) is a non-zero element of \(C^{l,\alpha}([0,T])\), with \(l\geq 0\) and \(0<\alpha\leq 1\) such that \(l+\alpha>1/2\), \(f(0)=0\), and the interval partitions \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) have bounded meshes \(\|\mathcal{P}_{N}\|=\mathcal{O}(N^{-1})\) as \(N\to\infty\). Then_ \[\hat{\sigma}^{2}_{\mathrm{ML}}=\mathcal{O}\big{(}N^{1-\min\{2(l+\alpha),2\}}\big{)}=\begin{cases}\mathcal{O}\left(N^{1-2\alpha}\right)&\text{ if }\quad l=0\text{ and }\alpha>1/2,\\ \Theta\left(N^{-1}\right)&\text{ if }\quad l\geq 1.\end{cases} \tag{15}\] Proof.: See Section 8.2. The proof is similar to that of Theorem 7. Figure 3 summarises the rates of Theorems 7 and 8. When \(l+\alpha\leq 1\) (i.e., \(l=0\) and \(\alpha\leq 1\)), the rates of \(\hat{\sigma}^{2}_{\mathrm{CV}}\) and \(\hat{\sigma}^{2}_{\mathrm{ML}}\) are \(\mathcal{O}(N^{1-2\alpha})\), so both of them may decay adaptively to the smoothness \(l+\alpha\) of the function \(f\). However, when \(l+\alpha>1\), the situation is different: the decay rate of \(\hat{\sigma}^{2}_{\mathrm{ML}}\) is always \(\Theta(N^{-1})\) and thus insensitive to \(\alpha\), while that of \(\hat{\sigma}^{2}_{\mathrm{CV}}\) is \(\mathcal{O}\left(N^{-1-2\alpha}\right)\) for \(l=1\) and \(\alpha\in(0,1/2]\). Therefore the CV estimator may be adaptive to a broader range of the smoothness \(0<l+\alpha\leq 3/2\) of the function \(f\) than the ML estimator (whose range of adaptation is \(0<l+\alpha\leq 1\)). Note that Theorems 7 and 8 provide asymptotic upper bounds (except for the case \(l\geq 1\) of Theorem 8) and may not be tight if the function \(f\) is smoother than "typical" functions in \(C^{l,\alpha}([0,T])\).2 In Section 4.2, we show that the bounds are indeed tight in expectation if \(f\) is a fractional (or integrated fractional) Brownian motion with smoothness \(l+\alpha\). **Remark 9**.: _The proof of Theorem 8 shows that for \(l=1\) we have \(\hat{\sigma}^{2}_{\rm ML}=\Theta(N^{-1})\) whenever \(\|\mathcal{P}_{N}\|\to 0\) as \(N\to\infty\). More precisely, it establishes that_ \[N\hat{\sigma}^{2}_{\rm ML}\to\|f^{\prime}\|^{2}_{\mathcal{L}^{2}([0,T])}:=\int_{0}^{T}f^{\prime}(x)^{2}\,\mathrm{d}x\quad\text{ as }\quad N\to\infty.\] _Note that the squared \(\mathcal{L}^{2}([0,T])\) norm of \(f^{\prime}\) on the right hand side equals the squared norm of \(f\) in the reproducing kernel Hilbert space of the Brownian motion kernel (e.g., van der Vaart and van Zanten, 2008, Section 10). Therefore, this fact is consistent with a similar more general statement in Karvonen et al.
(2020, Proposition 3.1)._ In addition to the above results, Theorem 10 below shows the limit of the CV estimator \(\hat{\sigma}^{2}_{\rm CV}\) if the true function \(f\) is of finite quadratic variation. **Theorem 10**.: _For each \(N\in\mathbb{N}\), let \(\mathcal{P}_{N}\subset[0,T]\) be the equally-spaced partition of size \(N\). Suppose that \(f:[0,T]\to\mathbb{R}\) has finite quadratic variation \(V^{2}(f)\) with respect to \((\mathcal{P}_{N})_{N\in\mathbb{N}}\), \(f(0)=0\), and \(f\) is continuous on the boundary, i.e., \(\lim_{x\to 0^{+}}f(x)=f(0)\) and \(\lim_{x\to T^{-}}f(x)=f(T)\). Moreover, suppose that the quadratic variation \(V^{2}(f)\) remains the same for all sequences of quasi-uniform partitions with constant \(C_{\rm qu}=2\).3 Then_ Footnote 3: In Appendix B, we discuss the relaxation of this requirement. \[\lim_{N\to\infty}\hat{\sigma}^{2}_{\rm CV}=\frac{V^{2}(f)}{T}. \tag{16}\] Proof.: See Section 8.2. For the ML estimator \(\hat{\sigma}^{2}_{\rm ML}\), it is straightforward to obtain a similar result by using (10) and (27) in Section 8.1: Under the same conditions as Theorem 10, we have \[\lim_{N\to\infty}\hat{\sigma}^{2}_{\rm ML}=\frac{V^{2}(f)}{T}. \tag{17}\] Theorem 10 and (17) are consistent with Theorems 7 and 8, which assume \(f\in C^{l,\alpha}([0,T])\) with \(l+\alpha>1/2\) and imply \(\hat{\sigma}^{2}_{\rm CV}\to 0\) and \(\hat{\sigma}^{2}_{\rm ML}\to 0\) as \(N\to\infty\). As summarized in Proposition 6, we have \(V(f)=0\) for \(f\in C^{l,\alpha}([0,T])\) with \(l+\alpha>1/2\), so Theorem 10 and (17) imply that \(\hat{\sigma}^{2}_{\rm CV}\to 0\) and \(\hat{\sigma}^{2}_{\rm ML}\to 0\) as \(N\to\infty\). When \(f\) is a Brownian motion, in which case the Brownian motion prior is well-specified, the smoothness of \(f\) is \(l+\alpha=1/2\), and the quadratic variation \(V(f)\) becomes a positive constant (Dudley, 1973). Proposition 13 in the next subsection shows that this fact, Theorem 10, and (17) lead to the consistency of the ML and CV estimators in the well-specified setting. Figure 3: Rates of decay for the ML and CV estimators from Theorems 7 and 8 when (a) \(l=0\) and \(1/2<\alpha\leq 1\) and (b) \(l=1\) and \(0<\alpha\leq 1\). Observe that the CV estimator’s range of adaptation to the smoothness \(l+\alpha\) is wider than the ML estimator. ### Random setting In Section 4.1, we obtained asymptotic upper bounds on the CV and ML scale estimators when the true function \(f\) is a fixed function in a Holder space. This section shows that these asymptotic bounds are tight in expectation when \(f\) is a fractional (or integrated fractional) Brownian motion. That is, we consider the asymptotics of the expectations \(\mathbb{E}\hat{\sigma}_{\mathrm{CV}}^{2}\) and \(\mathbb{E}\hat{\sigma}_{\mathrm{ML}}^{2}\) under the assumption that \(f\sim\mathcal{GP}(0,k_{l,H})\), where \(k_{l,H}\) is the kernel of a fractional Brownian motion (11) for \(l=0\) or that of an integrated fractional Brownian motion (12) for \(l=1\), with \(0<H<1\) being the Hurst parameter. Recall that \(f\sim\mathcal{GP}(0,k_{l,H})\) belongs to the Holder space \(C^{l,H-\varepsilon}([0,T])\) almost surely for arbitrarily small \(\varepsilon>0\), so its smoothness is \(l+H\). Figure 4 summarises the obtained upper and lower rates, corroborating the upper rates in Figure 3. Theorems 11 and 12 below establish the asymptotic upper and lower bounds for the CV and ML estimators, respectively. 
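Before the formal statements, the following minimal sketch (our illustration, not code from the paper) shows how draws from \(\mathcal{GP}(0,k_{l,H})\) can be generated numerically: the \(l=0\) case is sampled through a Cholesky factor of the fractional Brownian motion covariance given in Section 6.1, and the \(l=1\) case is approximated by cumulative trapezoidal integration of such a path. The grid size, Hurst parameters, jitter value, and function names are illustrative assumptions.

```python
import numpy as np

def sample_fbm(hurst, n_grid=512, T=1.0, n_paths=3, seed=0):
    """Draw fBM paths at x_n = nT/n_grid via a Cholesky factor of the fBM covariance."""
    rng = np.random.default_rng(seed)
    x = np.linspace(T / n_grid, T, n_grid)                 # exclude 0, where the path is pinned at 0
    K = 0.5 * (x[:, None] ** (2 * hurst) + x[None, :] ** (2 * hurst)
               - np.abs(x[:, None] - x[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(K + 1e-10 * np.eye(n_grid))     # small jitter for numerical stability
    return x, L @ rng.standard_normal((n_grid, n_paths))

def integrate_paths(x, F):
    """Approximate integrated paths int_0^x f(z) dz by the trapezoidal rule, using f(0) = 0."""
    x0 = np.concatenate(([0.0], x))
    F0 = np.vstack([np.zeros((1, F.shape[1])), F])
    steps = 0.5 * (F0[1:] + F0[:-1]) * np.diff(x0)[:, None]
    return np.cumsum(steps, axis=0)

x, rough = sample_fbm(hurst=0.25)       # l = 0: smoothness l + H = 0.25, rougher than the prior
_, smooth = sample_fbm(hurst=0.75)      # l = 0: smoothness 0.75, smoother than the prior
once_diff = integrate_paths(x, rough)   # an approximation of the l = 1 case with smoothness 1 + H
print(rough.shape, smooth.shape, once_diff.shape)
```

Exact draws of the integrated process would use the kernel \(k_{1,H}\) in (12) directly; the quadrature above is only a quick visual surrogate.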
**Theorem 11** (Expected CV rate for fractional Brownian motion).: _Suppose that \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) are quasi-uniform and \(f\sim\mathcal{GP}(0,k_{l,H})\) with \(l\in\{0,1\}\) and \(0<H<1\). Then_ \[\mathbb{E}\hat{\sigma}_{\mathrm{CV}}^{2}=\Theta(N^{1-\min\{2(l+H),3\}})= \begin{cases}\Theta\left(N^{1-2H}\right)&\text{ if }\quad l=0\text{ and }H\in(0,1),\\ \Theta\left(N^{-1-2H}\right)&\text{ if }\quad l=1\text{ and }H<1/2,\\ \Theta\left(N^{-2}\right)&\text{ if }\quad l=1\text{ and }H\geq 1/2.\end{cases}\] Proof.: See Section 8.3. **Theorem 12** (Expected ML rate for fractional Brownian motion).: _Suppose that \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) are quasi-uniform and \(f\sim\mathcal{GP}(0,k_{l,H})\) with \(l\in\{0,1\}\) and \(0<H<1\). Then_ \[\mathbb{E}\hat{\sigma}_{\mathrm{ML}}^{2}=\Theta(N^{1-\min\{2(l+H),2\}})= \begin{cases}\Theta\left(N^{1-2H}\right)&\text{ if }\quad l=0\text{ and }H\in(0,1),\\ \Theta\left(N^{-1}\right)&\text{ if }\quad l=1\text{ and }H\in(0,1).\end{cases}\] Proof.: See Section 8.3. The proof is similar to that of Theorem 11. Theorems 11 and 12 show that the CV estimator is adaptive to the unknown smoothness \(l+H\) of the function \(f\) for a broader range \(0<l+H\leq 3/2\) than the ML estimator, whose range of adaptation is \(0<l+H\leq 1\). These results imply that the CV estimator can be asymptotically well-calibrated for a broader range of unknown smoothness than the ML estimator, as discussed in Section 5. When the smoothness of \(f\) is less than \(1/2\), i.e., when \(l+H<1/2\), the Brownian motion prior, whose smoothness is \(1/2\), is smoother than \(f\). In this case, the expected rates of \(\hat{\sigma}_{\mathrm{CV}}^{2}\) and \(\hat{\sigma}_{\mathrm{ML}}^{2}\) are \(\Theta\left(N^{1-2H}\right)\) and increase as \(N\) increases. The increase of \(\hat{\sigma}_{\mathrm{CV}}^{2}\) and \(\hat{\sigma}_{\mathrm{ML}}^{2}\) can be interpreted as compensating the overconfidence of the posterior standard deviation \(\sqrt{k_{N}(x)}\), which decays too fast to be asymptotically well-calibrated. This interpretation agrees with the illustration in Figure 1. On the other hand, when \(l+H>1/2\), the function \(f\) is smoother than the Brownian motion prior. In this case, \(\hat{\sigma}_{\mathrm{CV}}^{2}\) and \(\hat{\sigma}_{\mathrm{ML}}^{2}\) decrease as \(N\) increases, compensating the under-confidence of the posterior standard deviation \(\sqrt{k_{N}(x)}\). See Figure 2 for an illustration. When \(l+H=1/2\), this is the well-specified case in that the smoothness of \(f\) matches the Brownian motion prior. In this case, Theorems 11 and 12 yield \(\mathbb{E}\hat{\sigma}_{\mathrm{CV}}^{2}=\Theta(1)\) and \(\mathbb{E}\hat{\sigma}_{\mathrm{ML}}^{2}=\Theta(1)\), i.e., the CV and ML estimators converge to a constant. The following proposition, which follows from Theorem 10 and (17), shows that this limiting constant is the true value of the scale parameter \(\sigma_{0}^{2}\) in the well-specified setting \(f\sim\mathcal{GP}(0,\sigma_{0}^{2}k)\), recovering similar results in the literature (e.g., Bachoc et al., 2017, Theorem 2). **Proposition 13**.: _Suppose that \(f\sim\mathcal{GP}(0,\sigma_{0}^{2}k)\) for \(\sigma_{0}>0\) and that partitions \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) are equally-spaced. 
Then_ \[\lim_{N\to\infty}\hat{\sigma}_{\mathrm{CV}}^{2}=\lim_{N\to\infty}\hat{\sigma}_{ \mathrm{ML}}^{2}=\sigma_{0}^{2}\quad\text{ almost surely.}\] Proof.: Since the quadratic variation of almost all sample paths of the unscaled (i.e., \(\sigma_{0}=1\)) Brownian motion on \([0,T]\) equals \(T\)(Dudley, 1973), the claim follows from (16) and (17). We next discuss the implications of the obtained asymptotic rates of \(\hat{\sigma}_{\mathrm{CV}}^{2}\) and \(\hat{\sigma}_{\mathrm{ML}}^{2}\) on the reliability of the resulting GP uncertainty estimates. ## 5 Consequences for credible intervals This section discusses whether the estimated scale parameter, given by the CV or ML estimator, leads to asymptotically well-calibrated credible intervals. With the kernel \(\hat{\sigma}^{2}k(x,x^{\prime})\), where \(\hat{\sigma}^{2}=\hat{\sigma}_{\mathrm{CV}}^{2}\) or \(\hat{\sigma}^{2}=\hat{\sigma}_{\mathrm{ML}}^{2}\), a GP credible interval at \(x\in[0,T]\) is given by \[[m_{N}(x)-\alpha\hat{\sigma}\sqrt{k_{N}(x)},\quad m_{N}(x)+\alpha\hat{\sigma} \sqrt{k_{N}(x)}] \tag{18}\] where \(\alpha>0\) is a constant (e.g., \(\alpha\approx 1.96\) leads to the 95% credible interval). As discussed in Section 1, this credible interval (18) is asymptotically well-calibrated, if it shrinks to \(0\) at the same speed as the decay of the error \(|m_{N}(x)-f(x)|\) as \(N\) increases, i.e., the ratio \[\frac{|f(x)-m_{N}(x)|}{\hat{\sigma}\sqrt{k_{N}(x)}} \tag{19}\] should neither diverge to infinity nor converge to \(0\). If this ratio diverges to infinity, the credible interval (18) is asymptotically overconfident, in that (18) shrinks to \(0\) faster than the actual error \(|f(x)-m_{N}(x)|\). If the ratio converges to \(0\), the credible interval is asymptotically underconfident, as it increasingly overestimates the actual error. Therefore, the ratio (19) should ideally converge to a positive constant for the credible interval (18) to be reliable. For ease of analysis, we focus on the random setting in Section 4.2 where \(f\) is a fractional (or integrated fractional) Brownian motion and where we obtained asymptotic upper and lower bounds for \(\mathbb{E}\hat{\sigma}_{\mathrm{CV}}^{2}\) and \(\mathbb{E}\hat{\sigma}_{\mathrm{ML}}^{2}\). We study how the expectation of the posterior variance \(\mathbb{E}\hat{\sigma}^{2}k_{N}(x)\) scales with the expected squared error \(\mathbb{E}[f(x)-m_{N}(x)]^{2}\). Specifically, we analyze their ratio for \(\hat{\sigma}^{2}=\hat{\sigma}_{\mathrm{CV}}^{2}\) and \(\hat{\sigma}^{2}=\hat{\sigma}_{\mathrm{ML}}^{2}\): \[R_{\mathrm{CV}}^{\mathbb{E}}(x,N)\coloneqq\frac{\mathbb{E}[f(x)-m_{N}(x)]^{2} }{\mathbb{E}\hat{\sigma}_{\mathrm{CV}}^{2}k_{N}(x)}\quad\text{ and }\quad R_{ \mathrm{ML}}^{\mathbb{E}}(x,N)\coloneqq\frac{\mathbb{E}[f(x)-m_{N}(x)]^{2}}{ \mathbb{E}\hat{\sigma}_{\mathrm{ML}}^{2}k_{N}(x)}. \tag{20}\] Figure 4: Expected decay rates for the ML and CV estimators from Theorems 11 and 12. Observe that the CV estimator’s range of adaptation to the smoothness \(l+H\) is wider than the ML estimator. The ratio diverging to infinity (resp. converging to \(0\)) as \(N\to\infty\) suggests that the credible interval (18) is asymptotically overconfident (resp. underconfident) for a non-zero probability of the realisation of \(f\). Thus ideally, the ratio should converge to a positive constant. Theorem 14 below establishes the asymptotic rates of the ratios in (20). 
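Before stating Theorem 14, here is a short numerical sketch (our illustration, not code from the paper) of how the ratio (19) can be examined in practice: it interpolates a fixed smooth function with the Brownian motion kernel \(k(x,x^{\prime})=\min(x,x^{\prime})\) on an equally spaced design, plugs in the ML scale estimate \(\hat{\sigma}^{2}_{\mathrm{ML}}=f(\mathbf{x})^{\top}k(\mathbf{x},\mathbf{x})^{-1}f(\mathbf{x})/N\), and evaluates the ratio at the midpoints of the design intervals. The test function \(f(x)=x^{2}\) and the grid sizes are arbitrary choices.

```python
import numpy as np

def max_ratio(func, N, T=1.0):
    """Max over design-interval midpoints of |f(x) - m_N(x)| / (sigma_ML * sqrt(k_N(x)))."""
    x = np.arange(1, N + 1) * T / N                 # equally spaced design; f(0) = 0 is implicit
    K = np.minimum.outer(x, x)                      # Brownian motion kernel k(x, x') = min(x, x')
    y = func(x)
    alpha = np.linalg.solve(K, y)
    sigma2_ml = y @ alpha / N                       # ML scale estimate

    xs = x - T / (2 * N)                            # midpoints, where k_N(x) > 0
    Ks = np.minimum.outer(xs, x)
    m = Ks @ alpha                                  # posterior mean m_N at the midpoints
    kN = xs - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)   # k_N(x*) = x* - k*^T K^{-1} k*
    return np.max(np.abs(func(xs) - m) / np.sqrt(sigma2_ml * kN))

def f(x):
    return x ** 2                                   # a smooth test function with f(0) = 0

for N in (16, 64, 256, 1024):
    print(N, max_ratio(f, N))
```

For such a smooth test function the printed values shrink as \(N\) grows, which is the kind of asymptotic underconfidence of the ML-scaled credible interval quantified below.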
**Theorem 14**.: _Suppose that \((\mathcal{P}_{N})_{N\in\mathbb{N}}\) are quasi-uniform and \(f\sim\mathcal{GP}(0,k_{l,H})\) for \(l\in\{0,1\}\) and \(0<H<1\). Then,_ \[\sup_{x\in[0,T]}R^{\mathbb{E}}_{\mathrm{CV}}(x,N)=\begin{cases}\Theta(1)&\text{ if }\quad l=0\text{ and }H\in(0,1),\\ \Theta(1)&\text{ if }\quad l=1\text{ and }H\in(0,1/2),\\ \Theta\left(N^{1-2H}\right)&\text{ if }\quad l=1\text{ and }H\in(1/2,1)\end{cases}\] _and_ \[\sup_{x\in[0,T]}R^{\mathbb{E}}_{\mathrm{ML}}(x,N)=\begin{cases}\Theta(1)& \text{ if }\quad l=0\text{ and }H\in(0,1),\\ \Theta\left(N^{-2H}\right)&\text{ if }\quad l=1\text{ and }H\in(0,1).\end{cases}\] Proof.: See Section 8.4. We have the following observations from Theorem 14, which suggest an advantage of the CV estimator over the ML estimator for uncertainty quantification: * The ratio for the CV estimator neither diverges to infinity nor decays to \(0\) in the range \(0<l+H<3/2\), which is broader than that of the ML estimator, \(0<l+H<1\). This observation suggests that the CV estimator can yield asymptotically well-calibrated credible intervals for a broader range of the unknown smoothness \(l+H\) of the function \(f\) than the ML estimator. * The ratio decays to \(0\) for the CV estimator in the range \(3/2<l+H<2\) and for the ML estimator in the range \(1<l+H<2\). Therefore, the ML estimator may yield asymptotically underconfident credible intervals for a broader range of the smoothness \(l+H\) than the CV estimator. ## 6 Experiments This section describes numerical experiments to substantiate the theoretical results in Section 4. We define test functions in Section 6.1, show empirical asymptotic results for the CV estimator in Section 6.2, and report comparisons between the CV and ML estimators in Section 6.3. To this end, for a continuous function \(f\), define \(l[f]\in\mathbb{N}\cup\{0\}\) and \(\alpha\in(0,1]\) as \[l[f]:=\sup\{l\in\mathbb{N}\cup\{0\}:f\in C^{l}([0,T])\},\quad\alpha[f]:=\sup \{\alpha\in(0,1]:f\in C^{[f],\alpha}([0,T])\}. \tag{21}\] Then, for arbitrarily small \(\varepsilon_{1}\in\mathbb{N}\) and \(\varepsilon_{2}>0\), we have \[f\in C^{\max(l[f]-\varepsilon_{1},0),\alpha[f]-\varepsilon_{2}}([0,T])\quad \text{ and }\quad f\notin C^{l[f]+\varepsilon_{1},\alpha[f]+\varepsilon_{2}}([0,T]).\] In this sense, \(l[f]\) and \(\alpha[f]\) characterize the smoothness of \(f\). ### Test functions We generate test functions \(f:[0,1]\to\mathbb{R}\) as sample paths of stochastic processes with varying degrees of smoothness, as defined below. The left columns of Figures 5 and 6 show samples of these functions. * To generate nowhere differentiable test functions, we use the Brownian motion (BM), the Ornstein-Uhlenbeck process (OU), and the fractional Brownian motion (FBM4) which are zero-mean GPs with covariance kernels \[k_{\mathrm{BM}}(x,x^{\prime}) =\min(x,x^{\prime}),\quad k_{\mathrm{OU}}(x,x^{\prime})=\big{(}e^{ -\lambda|x-x^{\prime}|}-e^{-\lambda(x+x^{\prime})}\big{)}/4,\] \[k_{\mathrm{FBM}}(x,x^{\prime}) =\big{(}\,|x|^{2H}+|x^{\prime}|^{2H}-|x-x^{\prime}|^{2H}\big{)}/2,\] where \(\lambda>0\) and \(0<H<1\) is the Hurst parameter (recall that the \(\mathrm{FBM}=\mathrm{BM}\) if \(H=1/2\)). We set \(\lambda=0.2\) in the experiments below. Almost all samples \(f\) from these processes satisfy \(l[f]=0\). For BM and OU we have \(\alpha[f]=1/2\) and for FBM \(\alpha[f]=H\) (see Section 3.4). 
It is well known that the OU process with the kernel \(k_{\mathrm{OU}}\) above satisfies the stochastic differential equation \[\mathrm{d}f(t)=-\lambda f(t)\mathrm{d}t+\sqrt{\frac{\lambda}{2}}\,\mathrm{d}B (t),\] (22) where \(B\) is the standard Brownian motion whose kernel is \(k_{\mathrm{BM}}\). Footnote 4: We use [https://github.com/crflynn/fbm](https://github.com/crflynn/fbm) to sample from FBM. * To generate differentiable test functions, we use once (iFBM) and twice (iiFBM) integrated fractional Brownian motions \[f_{\mathrm{iFBM}}(x)=\int_{0}^{x}f_{\mathrm{FBM}}(z)\,\mathrm{d}z\quad\text{ and }\quad f_{\mathrm{iFBM}}(x)=\int_{0}^{x}f_{\mathrm{iFBM}}(z)\,\mathrm{d}z,\] where \(f_{\mathrm{FBM}}\sim\mathcal{GP}(0,k_{\mathrm{FBM}})\). See (12) for the iFBM covariance kernel. With \(H\) the Hurst parameter of the original FBM, almost all samples \(f\) from the above processes satisfy \(l[f]=1\) and \(\alpha[f]=H\) (iFBM) or \(l[f]=2\) and \(\alpha[f]=H\) (iiFBM). * We also consider a piecewise infinitely differentiable function \(f(x)=\sin 10x+[x>x_{0}]\), where \(x_{0}\) is randomly sampled from the uniform distribution on \([0,1]\) and \([x>x_{0}]\) is \(1\) if \(x>x_{0}\) and \(0\) otherwise. This function is of finite quadratic variation with \(V^{2}(f)=1\). Denote \(\hat{\sigma}^{2}=\lim_{N\to\infty}\hat{\sigma}^{2}_{\mathrm{CV}}\). For the above test functions, with equally-spaced partitions, we expect the following asymptotic behaviours for the CV estimator from Theorems 7, 10 and 11, Proposition 13, the definition of quadratic variation, and Equation (22): \[\mathrm{BM}\;(l[f]=0,\,\alpha[f]=1/2): \hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}(1)\] and \[\hat{\sigma}^{2}=1,\] \[\mathrm{OU}\;(l[f]=0,\,\alpha[f]=1/2): \hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}(1)\] and \[\hat{\sigma}^{2}=\lambda/2,\] \[\mathrm{FBM}\;(l[f]=0,\,\alpha[f]=H): \hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}(N^{1-2H})\] and \[\hat{\sigma}^{2}=0,\] \[\mathrm{iFBM}\;(l[f]=1,\,\alpha[f]=H): \hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}(N^{-1-2H})\] and \[\hat{\sigma}^{2}=0,\] \[\mathrm{iiFBM}\;(l[f]=2,\,\alpha[f]=H): \hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}(N^{-2})\] and \[\hat{\sigma}^{2}=0,\] \[\sin 10x+[x>x_{0}]: \hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}(1)\] and \[\hat{\sigma}^{2}=1.\] Note that the above rate for the iFBM holds for \(0<H\leq 1/2\). The chosen functions allow us to cover a range of \(\alpha[f]\) and \(l[f]\) relevant to the varying rate of convergence in Theorems 7 and 11, as well as a range of \(V^{2}(f)\) relevant to the limit in Theorem 10, \(\lim_{N\to\infty}\hat{\sigma}^{2}_{\mathrm{CV}}=V^{2}(f)/T\). ### Asymptotics of the CV estimator Figure 5 shows the asymptotics of \(\hat{\sigma}^{2}_{\rm CV}\), where each row corresponds to one stochastic process generating test functions \(f\); the rows are displayed in the increasing order of smoothness as quantified by \(l[f]+\alpha[f]\). The estimates are obtained for equally-spaced partitions of sizes \(N=10,10^{2},\ldots,10^{5}\). In each row, the left panel plots a single sample of generated test functions \(f\). The middle panel shows the mean and confidence intervals (of two standard deviations) of \(\hat{\sigma}^{2}_{\rm CV}\) for 100 sample realisations of \(f\) for each sample size \(N\). The right panel describes the convergence rate of \(\hat{\sigma}^{2}_{\rm CV}\) to its limit point \(\hat{\sigma}^{2}=\lim_{N\to\infty}\hat{\sigma}^{2}_{\rm CV}\) on the log scale. 
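The experiment just described can also be reproduced at a small scale before looking at the figures. The sketch below is our own illustration (not the authors' code): it uses the closed-form expressions (26)-(27) of Section 8.1, which on an equally spaced grid with \(f_{0}=0\) reduce to sums of first and second differences, together with exact Cholesky sampling of the FBM. The grid sizes are far smaller than the \(N=10^{5}\) used above, so the fitted slopes are only indicative; Hurst values and replication counts are arbitrary choices, and the differentiable (iFBM and iiFBM) cases are omitted only to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0

def cv_ml_equispaced(fvals):
    """sigma2_CV and sigma2_ML from (26)-(27) on an equally spaced grid, using f(x_0) = f(0) = 0.

    With f_0 = 0 and Delta x = T/N the first boundary term of (26) equals (f_2 - 2 f_1)^2 / (2 Delta x),
    so sigma2_CV collapses to (0.5 * sum of squared second differences + last squared increment) / T.
    """
    f = np.concatenate(([0.0], fvals))
    d = np.diff(f)
    dd = np.diff(f, n=2)
    return (0.5 * np.sum(dd ** 2) + d[-1] ** 2) / T, np.sum(d ** 2) / T

def fbm_paths(H, N, n_paths):
    """fBM(H) paths at x_n = nT/N via one Cholesky factor of the covariance from Section 6.1."""
    x = np.arange(1, N + 1) * T / N
    K = 0.5 * (x[:, None] ** (2 * H) + x[None, :] ** (2 * H)
               - np.abs(x[:, None] - x[None, :]) ** (2 * H))
    L = np.linalg.cholesky(K + 1e-10 * np.eye(N))
    return x, L @ rng.standard_normal((N, n_paths))

Ns = [64, 128, 256, 512, 1024]
for H in (0.25, 0.5, 0.75):                          # l = 0 test functions; H = 0.5 is the BM case
    mean_cv = []
    for N in Ns:
        _, F = fbm_paths(H, N, n_paths=50)
        mean_cv.append(np.mean([cv_ml_equispaced(F[:, j])[0] for j in range(F.shape[1])]))
    slope = np.polyfit(np.log(Ns), np.log(mean_cv), 1)[0]
    print(f"FBM, H={H}: fitted CV rate N^{slope:.2f} vs. predicted N^{1 - 2 * H:+.2f}")

x0 = rng.uniform()                                    # piecewise-smooth test function with a unit jump
for N in Ns:
    x = np.arange(1, N + 1) * T / N
    cv, ml = cv_ml_equispaced(np.sin(10 * x) + (x > x0))
    print(N, round(cv, 3), round(ml, 3))              # both should approach V^2(f)/T = 1
```

The fitted exponents should land near the predicted ones, mirroring the nowhere-differentiable rows of Figure 5, and the jump-function estimates should stabilise near one, as in Theorem 10.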
We have the following observations: * The first two rows (the FBM and OU) and the last (the piece-wise infinitely differentiable function) confirm Theorem 10, which states the convergence \(\hat{\sigma}^{2}_{\rm CV}\to V^{2}(f)/T\) as \(N\to\infty\). While Theorem 10 does not provide convergence rates, the rates in the first two rows appear to be \(N^{-1/2}\). In the last row the rate is \(N^{-2}\). * The remaining rows show that the observed rates of \(\hat{\sigma}^{2}_{\rm CV}\) to \(0\) are in complete agreement with the rates predicted by Theorems 7 and 11. In particular, the rates are adaptive to the smoothness \(l[f]+\alpha[f]\) of the function if \(l[f]+\alpha[f]\leq 3/2\), as predicted. ### Comparison of CV and ML estimators Figure 6 shows the decay rates of \(\hat{\sigma}^{2}_{\rm CV}\) and \(\hat{\sigma}^{2}_{\rm ML}\) to \(0\) for test functions \(f\) with \(l[f]=1\), under the same setting as for Figure 5. In this case, Theorems 8 and 12 predict that \(\hat{\sigma}^{2}_{\rm ML}\) decays at the rate \(\Theta(N^{-1})\) regardless of the smoothness; this is confirmed in the right column. In contrast, the middle column shows again that \(\hat{\sigma}^{2}_{\rm CV}\) decays with a rate that adapts to \(l[f]\) and \(\alpha[f]\) as long as \(l[f]+\alpha[f]\leq 3/2\), as predicted by Theorems 7 and 11. These results empirically support our theoretical finding that the CV estimator is adaptive to the unknown smoothness \(l[f]+\alpha[f]\) of a function \(f\) for a broader range of smoothness than the ML estimator. ## 7 Conclusion and future work We have analysed the asymptotics of the CV and ML estimators for the scale parameter in GP interpolation with the Brownian motion kernel. As a novel contribution, our analysis covers the misspecified case where the smoothness of the true function \(f\) is different from that of the samples from the GP prior. Our main results in Theorems 7, 8, 11 and 12 indicate that both CV and ML estimators can adapt to the unknown smoothness of \(f\), but the range of smoothness for which this adaptation happens is broader for the CV estimator. Accordingly, the CV estimator can make GP uncertainty estimates asymptotically well-calibrated for a wider range of smoothness than the ML estimator, as indicated in Theorem 14. In this sense, the CV estimator has an advantage over the ML estimator. The experiments provide supporting evidence for the theoretical results. Natural next steps include the following: * Supplement the asymptotic upper bounds in Theorems 7 and 8 of the deterministic setting with matching lower bounds. * Extend the analyses (of both the deterministic and random settings) to more generic finitely smooth kernels and higher dimensions. Figure 5: Asymptotics of CV estimators for functions of varying smoothness as quantified by \(l[f]\) and \(\alpha[l]\) in (21). Runs on individual 100 samples from \(f\) are in gray, means and confidence intervals (of two standard deviations) are in black. The matching lower bounds, if obtained, would allow one to analyse the ratio between the prediction error \(|f(x)-m_{N}(x)|\) and the posterior standard deviation \(\hat{\sigma}\sqrt{k_{N}(x)}\) in the deterministic setting, corresponding to the one in Section 5 for the random setting. Such an analysis would need additional assumptions on the true function \(f\), such as the homogeneity of the smoothness of \(f\) across the input space. 
It also requires a sharp characterisation of the error \(|f(x)-m_{N}(x)|\), which could use super convergence results in Wendland (2005, Section 11.5) and Schaback (2018). Most natural kernel classes for extension are Materns and other kernels whose RKHS are norm-equivalent to Sobolev spaces. To this end, it would be possible to adapt the techniques used in Karvonen et al. (2020) for analyzing the ML estimator to the CV estimator. In any case, one would need much more advanced techniques than those used here. ## 8 Proofs This section provides the proofs of the main results and other lengthy computations. For \(x_{0}=0\) and \(x_{1},\ldots,x_{N}\in[0,T]\), we will use the following notation whenever it can improve the readability or highlight a point: \[\Delta x_{n} \coloneqq x_{n+1}-x_{n},\quad n=0,1,\ldots,N-1,\] \[f_{n} \coloneqq f(x_{n}),\quad n=0,1,\ldots,N. \tag{23}\] ### Explicit expressions for the CV and ML estimators Let us define \(x_{0}=0\) and use the convention \(f(x_{0})=0\). Then one can show that the posterior mean and covariance functions in (3) can be expressed as \[m_{N}(x)=\begin{cases}\frac{(x_{n}-x)f(x_{n-1})+(x-x_{n-1})f(x_{n})}{x_{n}-x_ {n-1}}&\text{if $x\in[x_{n-1},x_{n}]$ for some $1\leq n\leq N$},\\ f(x_{N})&\text{if $x\in[x_{N},T]$}\end{cases} \tag{24}\] Figure 6: Asymptotics of CV estimator compared to asymptotics of ML estimators, for once differentiable functions. \[k_{N}(x,x^{\prime})=\begin{cases}\frac{(x_{n}-x^{\prime})(x-x_{n-1})}{x_{n}-x_{n-1 }}&\text{ if }x_{n-1}\leq x\leq x^{\prime}\leq x_{n}\text{ for some }1\leq n\leq N,\\ x-x_{N}&\text{ if }x_{N}\leq x\leq x^{\prime}\leq T,\\ 0&\text{ otherwise.}\end{cases} \tag{25}\] We omit the case \(x^{\prime}\leq x\) for \(k_{N}(x,x^{\prime})\) as this case is obtained by the symmetry \(k_{N}(x,x^{\prime})=k_{N}(x^{\prime},x)\). Using these expressions, we have, for each \(1\leq n<N\): \[m_{\backslash n}(x_{n})=\frac{(x_{n}-x_{n+1})f(x_{n-1})+(x_{n-1}-x_{n})f(x_{n+ 1})}{x_{n-1}-x_{n+1}}\] and \[k_{\backslash n}(x_{n})=k_{\backslash n}(x_{n},x_{n})=\frac{(x_{n}-x_{n+1})( x_{n}-x_{n-1})}{x_{n-1}-x_{n+1}}\] For \(n=N\), we have \(m_{\backslash N}(x_{N})=f(x_{N-1})\) and \(k_{\backslash N}(x_{N})=x_{N}-x_{N-1}\). Inserting these expressions in (9) and using the notation (23), the CV estimator can be written as \[\hat{\sigma}^{2}_{\text{CV}}=\frac{1}{N}\Bigg{[} \frac{(x_{2}f_{1}-x_{1}f_{2})^{2}}{x_{1}x_{2}\Delta x_{1}}+\sum_{n =2}^{N-1}\frac{(\Delta x_{n-1}[f_{n+1}-f_{n}]-\Delta x_{n}[f_{n}-f_{n-1}])^{2} }{(\Delta x_{n}+\Delta x_{n-1})\Delta x_{n}\Delta x_{n-1}} \tag{26}\] \[+\frac{(f_{N}-f_{N-1})^{2}}{\Delta x_{N-1}}\Bigg{]}.\] For the ML estimator (8), we obtain the explicit expression \[\hat{\sigma}^{2}_{\text{ML}}=\frac{1}{N}\sum_{n=1}^{N}\frac{[f(x_{n})-f(x_{n-1 })]^{2}}{\Delta x_{n-1}} \tag{27}\] by observing that \(m_{n-1}(x_{n})=f(x_{n})\) and \(k_{n-1}(x_{n})=x_{n}-x_{n-1}\). ### Proofs for Section 4.1 Proof of Theorem 7.: The estimator \(\hat{\sigma}^{2}_{\text{CV}}\) in (26) may be written as \[\hat{\sigma}^{2}_{\text{CV}}=B_{1,N}+I_{N}+B_{2,N} \tag{28}\] in terms of the boundary terms \[B_{1,N}=\frac{1}{N}\cdot\frac{(x_{2}f_{1}-x_{1}f_{2})^{2}}{x_{1}x_{2}\Delta x _{1}}\quad\text{ and }\quad B_{2,N}=\frac{1}{N}\cdot\frac{(f_{N}-f_{N-1})^{2}}{ \Delta x_{N-1}} \tag{29}\] and the interior term \[I_{N}=\frac{1}{N}\sum_{n=2}^{N-1}\frac{(\Delta x_{n-1}[f_{n+1}-f_{n}]-\Delta x _{n}[f_{n}-f_{n-1}])^{2}}{(\Delta x_{n}+\Delta x_{n-1})\Delta x_{n}\Delta x_{ n-1}}. 
\tag{30}\] The claimed rate in (14) is \(\mathcal{O}(N^{-2})\) if \(l\geq 2\) or \(l=1\) and \(\alpha\geq 1/2\). By the inclusion properties of Holder spaces in Section 3.3, it is therefore sufficient to consider the cases (a) \(l=0\) and \(\alpha\in(1/2,1]\) and (b) \(l=1\) and \(\alpha\in(0,1/2]\). Suppose first that \(l=0\) and \(\alpha\in(1/2,1]\). Let \(L\) be a Holder constant of a function \(f\in C^{0,\alpha}([0,T])\). Using the Holder condition, the bounding assumption on \(\Delta x_{n}\), and \(f_{0}=f(0)=0\), the boundary terms can be bounded as \[B_{1,N}=\frac{1}{N}\cdot\frac{(x_{1}(f_{1}-f_{2})+\Delta x_{1}(f_ {1}-f_{0}))^{2}}{x_{1}x_{2}\Delta x_{1}} \leq\frac{1}{N}\cdot\frac{2(x_{1}^{2}(f_{1}-f_{2})^{2}+\Delta x_{ 1}^{2}(f_{1}-f_{0})^{2})}{x_{1}x_{2}\Delta x_{1}}\] \[\leq\frac{1}{N}\cdot\frac{2L^{2}(x_{1}^{2}\Delta x_{1}^{2\alpha} +x_{1}^{2\alpha}\Delta x_{1}^{2})}{x_{1}x_{2}\Delta x_{1}}\] \[=\mathcal{O}(N^{-1}\Delta x_{1}^{2\alpha-1})\] \[=\mathcal{O}(N^{-2\alpha}) \tag{31}\] and \[B_{2,N}=\frac{1}{N}\cdot\frac{(f_{N}-f_{N-1})^{2}}{\Delta x_{N-1}}\leq\frac{1 }{N}L^{2}\Delta x_{N-1}^{2\alpha-1}=\mathcal{O}(N^{-2\alpha}). \tag{32}\] Similarly, the interior term is bounded as \[I_{N} \leq\frac{2}{N}\sum_{n=2}^{N-1}\frac{\Delta x_{n-1}^{2}(f_{n+1}-f _{n})^{2}+\Delta x_{n}^{2}(f_{n}-f_{n-1})^{2}}{(\Delta x_{n}+\Delta x_{n-1}) \Delta x_{n}\Delta x_{n-1}}\] \[\leq\frac{2L^{2}}{N}\sum_{n=2}^{N-1}\frac{\Delta x_{n-1}^{2} \Delta x_{n}^{2\alpha}+\Delta x_{n}^{2}\Delta x_{n-1}^{2\alpha}}{(\Delta x_{ n}+\Delta x_{n-1})\Delta x_{n}\Delta x_{n-1}}\] \[=\frac{2L^{2}}{N}\sum_{n=2}^{N-1}\frac{\Delta x_{n-1}\Delta x_{n }^{2\alpha-1}+\Delta x_{n}\Delta x_{n-1}^{2\alpha-1}}{\Delta x_{n}+\Delta x_{ n-1}}\] \[=\frac{2L^{2}}{N}\sum_{n=2}^{N-1}\left(\frac{\Delta x_{n-1}}{ \Delta x_{n}+\Delta x_{n-1}}\Delta x_{n}^{2\alpha-1}+\frac{\Delta x_{n}}{ \Delta x_{n}+\Delta x_{n-1}}\Delta x_{n-1}^{2\alpha-1}\right)\] \[\leq\frac{2L^{2}}{N}\sum_{n=2}^{N-1}\left(\Delta x_{n}^{2\alpha- 1}+\Delta x_{n-1}^{2\alpha-1}\right)\] \[=\mathcal{O}(N^{1-2\alpha}).\] Inserting the above bounds in (28) yields \(\hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}(N^{-2\alpha}+N^{1-2\alpha})= \mathcal{O}(N^{1-2\alpha})\), which is the claimed rate when \(l=0\). Suppose then that \(l=1\) and \(\alpha\in(0,1/2]\), so that the first derivative \(f^{\prime}\) of \(f\in C^{1,\alpha}([0,T])\) is \(\alpha\)-Holder and hence continuous. Because a continuously differentiable function is Lipschitz, we may set \(\alpha=1\) in the estimates (31) and (32) for the boundary terms \(B_{1,N}\) and \(B_{2,N}\) in the preceding case. This shows these terms are \(\mathcal{O}(N^{-2})\). Because \(f\) is differentiable, we may use the mean value theorem to write the interior term as \[I_{N} =\frac{1}{N}\sum_{n=2}^{N-1}\frac{\Delta x_{n-1}\Delta x_{n}}{ \Delta x_{n-1}+\Delta x_{n}}\bigg{(}\frac{f_{n+1}-f_{n}}{\Delta x_{n}}-\frac{f _{n}-f_{n-1}}{\Delta x_{n-1}}\bigg{)}^{2}\] \[=\frac{1}{N}\sum_{n=2}^{N-1}\frac{\Delta x_{n-1}\Delta x_{n}}{ \Delta x_{n-1}+\Delta x_{n}}\big{[}f^{\prime}(\tilde{x}_{n})-f^{\prime}( \tilde{x}_{n-1})\big{]}^{2},\] where \(\tilde{x}_{n}\in(x_{n},x_{n+1})\). Let \(L^{\prime}\) be a Holder constant of \(f^{\prime}\). 
Then the Holder continuity of \(f^{\prime}\) and the assumption that \(\|\mathcal{P}_{N}\|=\mathcal{O}(N^{-1})\) yield \[I_{N}\leq\frac{L^{2}}{N}\sum_{n=2}^{N-1}\frac{\Delta x_{n-1} \Delta x_{n}}{\Delta x_{n-1}+\Delta x_{n}}|\tilde{x}_{n}-\tilde{x}_{n-1}|^{2\alpha} \leq\frac{L^{2}}{N}\sum_{n=2}^{N-1}\frac{\Delta x_{n-1}\Delta x_{ n}}{\Delta x_{n-1}+\Delta x_{n}}(\Delta x_{n-1}+\Delta x_{n})^{2\alpha}\] \[\leq\frac{L^{2}}{N}\sum_{n=2}^{N-1}\Delta x_{n}(\Delta x_{n-1}+ \Delta x_{n})^{2\alpha}\] \[=\mathcal{O}(N^{-2\alpha-1}).\] Using the above bounds in (28) yields \(\hat{\sigma}^{2}_{\mathrm{CV}}=\mathcal{O}(N^{-2}+N^{-2\alpha-1})=\mathcal{O} (N^{-2\alpha-1})\), which is the claimed rate when \(l=1\). Proof of Theorem 8.: From (27) we have \[\hat{\sigma}^{2}_{\mathrm{ML}}=\frac{1}{N}\sum_{n=1}^{N}\frac{(f_{n}-f_{n-1}) ^{2}}{\Delta x_{n-1}}.\] Suppose first that \(l=0\) and \(\alpha\in(1/2,1]\). As in the proof of Theorem 7, we get \[\hat{\sigma}^{2}_{\mathrm{ML}}=\frac{1}{N}\sum_{n=1}^{N}\frac{(f_{n}-f_{n-1}) ^{2}}{\Delta x_{n-1}}\leq\frac{L^{2}}{N}\sum_{n=1}^{N}\Delta x_{n-1}^{2\alpha- 1}=\mathcal{O}\big{(}N^{1-2\alpha}\big{)} \tag{33}\] when \(\|\mathcal{P}_{N}\|=\mathcal{O}(N^{-1})\). Suppose then that \(l=1\). By the mean value theorem there are \(\xi_{n}\in(x_{n-1},x_{n})\) such that \[\hat{\sigma}^{2}_{\mathrm{ML}}=\frac{1}{N}\sum_{n=1}^{N}\frac{(f_{n}-f_{n-1}) ^{2}}{\Delta x_{n-1}}=\frac{1}{N}\sum_{n=1}^{N}\Delta x_{n-1}\bigg{(}\frac{f_{ n}-f_{n-1}}{\Delta x_{n-1}}\bigg{)}^{2}=\frac{1}{N}\sum_{n=1}^{N}\Delta x_{n-1}f^{ \prime}(\xi_{n})^{2}.\] Since \(f^{\prime}\) is continuous on \([0,T]\) and hence Riemann integrable, we obtain the asympotic equivalence \[N\hat{\sigma}^{2}_{\mathrm{ML}}\to\int_{0}^{T}f^{\prime}(x)^{2}\,\mathrm{d}x \quad\text{ as }\quad N\to\infty\] when \(\|\mathcal{P}_{N}\|\to 0\) as \(N\to\infty\). The integral is positive because \(f\) has been assumed non-constant. Proof of Theorem 10.: For equally-spaced partitions, \(\Delta x_{n}=x_{1}=T/N\) for all \(n\in\{2,\dots,N\}\), the estimator \(\hat{\sigma}^{2}_{\mathrm{CV}}\) in (26) takes the form \[\hat{\sigma}^{2}_{\mathrm{CV}}=\frac{1}{T}\Bigg{[}\frac{(x_{2}f_{1}-x_{1}f_{2} )^{2}}{x_{1}x_{2}}+\frac{1}{2}\sum_{n=2}^{N-1}((f_{n+1}-f_{n})-(f_{n}-f_{n-1})) ^{2}+(f_{N}-f_{N-1})^{2}\Bigg{]},\] Recall from the proof of Theorem 7 the decomposition \[\hat{\sigma}^{2}_{\mathrm{CV}}=B_{1,N}+I_{N}+B_{2,N}\] in terms of the boundary terms \(B_{1,N}\) and \(B_{2,N}\) in (29) and the interior term \(I_{N}\) in (30). Because \(f\) is assumed continuous on the boundary and equispaced partitions are quasi-uniform, both \(B_{1,N}\) and \(B_{2,N}\) tend to zero as \(N\to\infty\). We may therefore focus on the interior term, which decomposes as \[I_{N} =\frac{1}{2}\sum_{n=2}^{N-1}((f_{n+1}-f_{n})-(f_{n}-f_{n-1}))^{2}\] \[=\,\sum_{n=2}^{N-1}(f_{n+1}-f_{n})^{2}+(f_{n}-f_{n-1})^{2}-\frac{1 }{2}(f_{n+1}-f_{n-1})^{2}\] The sums \(\sum_{n=2}^{N-1}(f_{n+1}-f_{n})^{2}\) and \(\sum_{n=2}^{N-1}(f_{n}-f_{n-1})^{2}\) tend to \(V^{2}(f)\) by definition. To establish the claimed bound we are therefore left to prove that \[\sum_{n=2}^{N-1}(f_{n+1}-f_{n-1})^{2}\to 2V^{2}(f)\qquad\text{as}\qquad N\to\infty. 
\tag{34}\] We may write the sum as \[\sum_{n=2}^{N-1}(f_{n+1}-f_{n-1})^{2}=\sum_{n=1}^{\lfloor\frac{N-1}{2} \rfloor}(f_{2n+1}-f_{2n-1})^{2}+\sum_{n=1}^{\lfloor\frac{N-2}{2}\rfloor}(f_{2n +2}-f_{2n})^{2}.\] Consider a sub-partition of \(\mathcal{P}_{N}\) that consists of odd-index points \(x_{1},x_{3},\ldots x_{\lfloor\frac{N-1}{2}\rfloor}\) of \(\mathcal{P}_{N}\). The sequence of these sub-partitions is quasi-uniform with constant \(2\). The assumption that the quadratic variation is \(V^{2}(f)\) for all partitions with quasi-uniformity constant \(2\) implies that \[\lim_{N\to\infty}\sum_{n=1}^{\lfloor\frac{N-1}{2}\rfloor}(f_{2n+1}-f_{2n-1})^{ 2}=V^{2}(f).\] The same will hold for sub-partitions formed of even-index points of \(\mathcal{P}_{N}\), giving \[\lim_{N\to\infty}\sum_{n=1}^{\lfloor\frac{N-2}{2}\rfloor}(f_{2n+2}-f_{2n})^{2} =V^{2}(f).\] Thus, (34) holds. This completes the proof. ### Proofs for Section 4.2 Proof of Theorem 11.: Recall the explicit expression of \(\hat{\sigma}^{2}_{\text{CV}}\) in (26): \[\begin{split}\hat{\sigma}^{2}_{\text{CV}}&=\frac{1} {N}\Bigg{[}\frac{(x_{2}f_{1}-x_{1}f_{2})^{2}}{x_{1}x_{2}\Delta x_{1}}+\sum_{n= 2}^{N-1}\frac{(\Delta x_{n-1}[f_{n+1}-f_{n}]-\Delta x_{n}[f_{n}-f_{n-1}])^{2} }{(\Delta x_{n}+\Delta x_{n-1})\Delta x_{n}\Delta x_{n-1}}\\ &\qquad\qquad+\frac{(f_{N}-f_{N-1})^{2}}{\Delta x_{N-1}}\Bigg{]}.\end{split} \tag{35}\] We consider the cases \(l=0\) and \(l=1\) separately. Recall that \(f\sim\mathcal{GP}(0,k_{l,H})\) implies that \(\mathbb{E}[f(x)f(x^{\prime})]=k_{l,H}(x,x^{\prime})\). Suppose first that \(l=0\), in which case \(f\sim\mathcal{GP}(0,k_{0,H})\) for the fractional Brownian motion kernel \(k_{0,H}\) in (11). In this case the expected values of squared terms in the expression for \(\hat{\sigma}^{2}_{\text{CV}}\) are \(\mathbb{E}[x_{2}f_{1}-x_{1}f_{2}]^{2}=x_{1}x_{2}\Delta x_{1}(x_{1}^{2H-1}-x_{2 }^{2H-1}+\Delta x_{1}^{2H-1})\), \[\mathbb{E}\big{[}\Delta x_{n-1}(f_{n+1}-f_{n})-\Delta x_{n}(f_{n}- f_{n-1})\big{]}^{2}\] \[\qquad=\big{(}\Delta x_{n}^{2H-1}+\Delta x_{n-1}^{2H-1}-(\Delta x _{n-1}+\Delta x_{n})^{2H-1}\big{)}\Delta x_{n-1}\Delta x_{n}(\Delta x_{n}+ \Delta x_{n-1}),\] and \(\mathbb{E}[f_{N}-f_{N-1}]^{2}=\Delta x_{N-1}^{2H}\). Substituting these in the expectation of \(\hat{\sigma}^{2}_{\text{CV}}\) and using the fact that \(\Delta x_{n}=\Theta(N^{-1})\) for all \(n\) by quasi-uniformity we get \[\mathbb{E}\hat{\sigma}^{2}_{\text{CV}} =\frac{1}{N}\Bigg{[}(x_{1}^{2H-1}-x_{2}^{2H-1}+\Delta x_{1}^{2H-1})\] \[\qquad\qquad+\sum_{n=2}^{N-1}\big{(}\Delta x_{n-1}^{2H-1}+\Delta x _{n}^{2H-1}-(\Delta x_{n-1}+\Delta x_{n})^{2H-1}\big{)}+\Delta x_{N-1}^{2H-1} \Bigg{]}\] \[=\Theta(N^{-2H})+\Theta(N^{1-2H})+\Theta(N^{-2H})\] \[=\Theta(N^{1-2H}).\] Suppose then that \(l=1\), in which case \(f\sim\mathcal{GP}(0,k_{1,H})\) for the integrated fractional Brownian motion kernel \(k_{1,H}\) in (12). 
It is straightforward (though, in the case of the second expectation, somewhat tedious) to compute that the expected values of squared terms in the expression (35) for \(\hat{\sigma}^{2}_{\rm CV}\) are \[\mathbb{E}[x_{2}f_{1}-x_{1}f_{2}]^{2}=\frac{x_{1}x_{2}\Delta x_{1}}{2(H+1)(2H+1 )}\big{(}x_{2}^{2H+1}-x_{1}^{2H+1}-\Delta x_{1}^{2H+1}\big{)}\] and \[\begin{split}&\mathbb{E}\big{[}\Delta x_{n-1}(f_{n+1}-f_{n})- \Delta x_{n}(f_{n}-f_{n-1})\big{]}^{2}\\ &=\frac{\Delta x_{n}\Delta x_{n-1}(\Delta x_{n}+\Delta x_{n-1})}{ 2(H+1)(2H+1)}\big{[}(\Delta x_{n}+\Delta x_{n-1})^{2H+1}-\Delta x_{n}^{2H+1}- \Delta x_{n-1}^{2H+1}\big{]}\end{split} \tag{36}\] and \[\mathbb{E}[f_{N}-f_{N-1}]^{2}=\frac{\Delta x_{N-1}}{2H+1}\bigg{(}x_{N}^{2H+1}- x_{N-1}^{2H+1}-\frac{1}{2(H+1)}\Delta x_{N-1}^{2H+1}\bigg{)}.\] Therefore, by (35), \[\begin{split}\mathbb{E}\hat{\sigma}^{2}_{\rm CV}=& \ \frac{\big{(}x_{2}^{2H+1}-x_{1}^{2H+1}-\Delta x_{1}^{2H+1}\big{)}}{2(H+1)(2H+1) N}\\ &+\frac{1}{2(H+1)(2H+1)N}\sum_{n=2}^{N-1}\big{[}(\Delta x_{n}+ \Delta x_{n-1})^{2H+1}-\Delta x_{n}^{2H+1}-\Delta x_{n-1}^{2H+1}\big{]}\\ &+\frac{1}{(2H+1)N}\bigg{(}x_{N}^{2H+1}-x_{N-1}^{2H+1}-\frac{1}{ 2(H+1)}\Delta x_{N-1}^{2H+1}\bigg{)}\\ =:&\ \frac{1}{2(H+1)(2H+1)}B_{1,N}+\frac{1}{2(H+1)(2H+1 )}I_{N}+\frac{1}{(2H+1)}B_{2,N}.\end{split}\] By quasi-uniformity, \(B_{1,N}\leq N^{-1}x_{2}^{2H+1}=\mathcal{O}(N^{-2-2H})\). Consider then the interior term \[\begin{split} I_{N}&=\frac{1}{N}\sum_{n=2}^{N-1} \Delta x_{n}^{2H+1}\Bigg{[}\bigg{(}1+\frac{\Delta x_{n-1}}{\Delta x_{n}}\bigg{)} ^{2H+1}-\bigg{(}1+\bigg{(}\frac{\Delta x_{n-1}}{\Delta x_{n}}\bigg{)}^{2H+1} \bigg{)}\Bigg{]}\\ &\eqqcolon\frac{1}{N}\sum_{n=2}^{N-1}\Delta x_{n}^{2H+1}c_{n}. \end{split} \tag{37}\] Because the function \(x\mapsto(1+x)^{c}-(1+x^{c})\) is positive and increasing for \(x>0\) if \(c>1\) and \(C_{\rm qu}^{-2}\leq\Delta x_{n-1}/\Delta x_{n}\leq C_{\rm qu}\) by quasi-uniformity, we have \[0<(1+C_{\rm qu}^{-2})^{2H+1}-(1+C_{\rm qu}^{-2H(2H+1)})\leq c_{n}\leq\bigg{(} 1+\frac{\Delta x_{n-1}}{\Delta x_{n}}\bigg{)}^{2H+1}\leq(1+C_{\rm qu})^{2H+1}\] for every \(n\). Because \(N^{-1}\sum_{n=2}^{N-1}\Delta x_{n}^{2H+1}=\Theta(N^{-1-2H})\) by quasi-uniformity, we conclude from (37) that \(I_{N}=\Theta(N^{-1-2H})\). For the last term \(B_{2,N}\), recall that we have set \(x_{N}=T\). Thus \[B_{2,N}=\frac{1}{N}\bigg{(}T^{2H+1}-(T-\Delta x_{N-1})^{2H+1}-\frac{1}{2(H+1)} \Delta x_{N-1}^{2H+1}\bigg{)}.\] By the generalised binomial theorem, \[T^{2H+1}-(T-\Delta x_{N-1})^{2H+1}=(2H+1)T^{2H}\Delta x_{N-1}+\mathcal{O}( \Delta x_{N-1}^{2})\] as \(\Delta x_{N-1}\to 0\). It follows that under quasi-uniformity we have \(B_{2,N}=\Theta(N^{-2})\) for every \(H\in(0,1)\). Putting these bounds for \(B_{1,N}\), \(I_{N}\) and \(B_{2,N}\) together we conclude that \[\mathbb{E}\hat{\sigma}_{\mathrm{CV}}^{2} =\frac{1}{2(H+1)(2H+1)}B_{1,N}+\frac{1}{2(H+1)(2H+1)}I_{N}+\frac{1 }{(2H+1)}B_{2,N}\] \[=\mathcal{O}(N^{-2-2H})+\Theta(N^{-1-2H})+\Theta(N^{-2}),\] which gives \(\mathbb{E}\hat{\sigma}_{\mathrm{CV}}^{2}=\Theta(N^{-1-2H})\) if \(H\in(0,1/2]\) and \(\mathbb{E}\hat{\sigma}_{\mathrm{CV}}^{2}=\Theta(N^{-2})\) if \(H\in[1/2,1)\). Observe that in the proof of Theorem 11 it is the boundary term \(B_{2,N}\) that determines the rate when there is sufficient smoothness, in that \(l=1\) and \(H\in[1/2,1)\). Similar phenomenon occurs in the proof of Theorem 7. The smoother a process is, the more correlation there is between its values at far-away points. 
Because the Brownian motion (as well as fractional and integrated Brownian motions) has a zero boundary condition at \(x=0\) but no boundary condition at \(x=T\) and no information is available at points beyond \(T\), the importance of \(B_{2,N}\) is caused by the fact that around \(T\) one has the least information about the process. Proof of Theorem 12.: From (27) we get \[\mathbb{E}\hat{\sigma}_{\mathrm{ML}}^{2}=\frac{1}{N}\sum_{n=1}^{N}\frac{ \mathbb{E}[f_{n}-f_{n-1}]^{2}}{\Delta x_{n-1}}.\] We may then proceed as in the proof of Theorem 11 and use quasi-uniformity to show that \[\mathbb{E}\hat{\sigma}_{\mathrm{ML}}^{2}=\frac{1}{N}\sum_{n=1}^{N}\frac{ \mathbb{E}[f_{n}-f_{n-1}]^{2}}{\Delta x_{n-1}}=\frac{1}{N}\sum_{n=1}^{N}\frac{ \Delta x_{n-1}^{2H}}{\Delta x_{n-1}}=\frac{1}{N}\sum_{n=1}^{N}\Delta x_{n-1}^ {2H-1}=\Theta(N^{1-2H})\] when \(l=0\) and \[\mathbb{E}\hat{\sigma}_{\mathrm{ML}}^{2} =\sum_{n=1}^{N}\frac{\mathbb{E}[f_{n}-f_{n-1}]^{2}}{\Delta x_{n-1}}\] \[=\frac{1}{(2H+1)N}\sum_{n=1}^{N}\left(x_{n}^{2H+1}-x_{n-1}^{2H+1 }-\frac{1}{2(H+1)}\Delta x_{n-1}^{2H+1}\right)\] \[=\frac{1}{(2H+1)N}\sum_{n=1}^{N}\left((2H+1)x_{n}^{2H}\Delta x_{ n-1}+\mathcal{O}(\Delta x_{n-1}^{2})-\frac{1}{2(H+1)}\Delta x_{n-1}^{2H+1}\right)\] \[=\Theta(N^{-1})\] when \(l=1\). ### Proofs for Section 5 Proof of Theorem 14.: We only provide the proof for the case \(l=1\) and leave the simpler case \(l=0\) to the reader. Let \(x\in(x_{n-1},x_{n})\). From the expression for \(m_{N}\) in Section 8.1, we get \[\mathbb{E}[f(x)-m_{N}(x)]^{2} =\mathbb{E}\Bigg{[}f(x)-\frac{(x_{n}-x)f(x_{n-1})+(x-x_{n-1})f(x_{ n})}{\Delta x_{n-1}}\Bigg{]}^{2}\] \[=\frac{1}{\Delta x_{n-1}^{2}}\mathbb{E}\big{[}(x-x_{n-1})(f(x_{n} )-f(x))-(x_{n}-x)(f(x)-f(x_{n-1}))\big{]}^{2}.\] Then, we can use (36) with \(x_{n}\) instead of \(x_{n+1}\) and \(x\) instead of \(x_{n}\) to get \[\mathbb{E}[f(x)-m_{N}(x)]^{2}=\frac{(x_{n}-x)(x-x_{n-1})}{C_{H}\Delta x_{n-1}} \big{[}\Delta x_{n-1}^{2H+1}-(x_{n}-x)^{2H+1}-(x-x_{n-1})^{2H+1}\big{]},\] where \(C_{H}=2(H+1)(2H+1)\). The expression for \(k_{N}\) in Section 8.1 gives \[\frac{\mathbb{E}[f(x)-m_{N}(x)]^{2}}{k_{N}(x)}=\frac{1}{C_{H}}\big{[}\Delta x_ {n-1}^{2H+1}-(x_{n}-x)^{2H+1}-(x-x_{n-1})^{2H+1}\big{]}.\] By removing the negative terms and using the quasi-uniformity (10), we obtain \[\sup_{x\in[0,T]}\frac{\mathbb{E}[f(x)-m_{N}(x)]^{2}}{k_{N}(x)}\leq\frac{(TC_{ \text{qu}})^{2H+1}}{C_{H}}N^{-1-2H},\] To see that this bound is tight, observe that for the midpoint \(x=(x_{n}+x_{n-1})/2\) we have \(x_{n}-x=x-x_{n-1}=\Delta x_{n-1}/2\) and \[\frac{\mathbb{E}[f(x)-m_{N}(x)]^{2}}{k_{N}(x)}=\frac{1}{C_{H}}\bigg{(}1-\frac{ 1}{2^{2H}}\bigg{)}\Delta x_{n-1}^{2H+1}\geq\frac{T^{2H+1}}{C_{H}C_{\text{qu}} ^{2H+1}}\bigg{(}1-\frac{1}{2^{2H}}\bigg{)}N^{-1-2H}\] by the quasi-uniformity. Therefore \[\sup_{x\in[0,T]}\frac{\mathbb{E}[f(x)-m_{N}(x)]^{2}}{k_{N}(x)}=\Theta(N^{-1-2H})\] when \(l=1\). One can similarly show that \[\sup_{x\in[0,T]}\frac{\mathbb{E}[f(x)-m_{N}(x)]^{2}}{k_{N}(x)}=\Theta(N^{1-2H})\] when \(l=0\). The claims then follow from the rates for \(\mathbb{E}\hat{\sigma}_{\text{CV}}^{2}\) and \(\mathbb{E}\hat{\sigma}_{\text{ML}}^{2}\) in Theorems 11 and 12. ## Acknowledgements MN acknowledges support from the U.K. Research and Innovation under grant number EP/S021566/1. MK has been supported by the French government, through the 3IA Cote d'Azur Investment in the Future Project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. 
TK was supported by the Academy of Finland postdoctoral researcher grant #338567 "Scalable, adaptive and reliable probabilistic integration". Part of this research was carried out during a visit by TK to EURECOM in May 2023 that was funded by the Institut francais de Finlande, the Embassy of France to Finland, and the Finnish Society of Sciences and Letters. MM gratefully acknowledges financial support by the European Research Council through ERC StG Action 757275 / PANAMA; the DFG Cluster of Excellence "Machine Learning - New Perspectives for Science", EXC 2064/1, project number 390727645; the German Federal Ministry of Education and Research (BMBF) through the Tubingen AI Center (FKZ: 01IS18039A); and funds from the Ministry of Science, Research and Arts of the State of Baden-Wurttemberg. ## Appendix A Connection between the ML and CV estimators Here we prove a connection between the ML and CV estimators; see Remark 1. Let \[C(N,p)=\binom{N}{p}=\frac{N!}{p!(N-p)!}\] denote the binomial coefficient. The leave-\(p\)-out cross-validation (LPO-CV) estimator of \(\sigma^{2}\) is \[\hat{\sigma}^{2}_{\text{CV}(p)}=\frac{1}{C(N,p)}\sum_{i=1}^{C(N,p)}\frac{1}{p}\sum_{n=1}^{p}\frac{[f(x_{p,i,n})-m_{\setminus\{p,i\}}(x_{p,i,n})]^{2}}{k_{\setminus\{p,i\}}(x_{p,i,n})},\] where \(i\) indexes the \(N\)-choose-\(p\) possible sets of \(p\) held-out datapoints among \(\mathbf{x}\), the set \(\mathbf{x}_{\setminus\{p,i\}}\) consists of the retained datapoints, and \(n\leq p\) indexes the data points left out of each of these sets. That is, for each \(p\) and \(i\) we have \[\mathbf{x}=\mathbf{x}_{\setminus\{p,i\}}\cup\{x_{p,i,1},\ldots,x_{p,i,p}\}.\] The functions \(m_{\setminus\{p,i\}}\) and \(k_{\setminus\{p,i\}}\) are the GP conditional mean and variance based on the set \(\mathbf{x}_{\setminus\{p,i\}}\), which contains \(N-p\) points. The purpose of this section is to prove that \[\hat{\sigma}^{2}_{\text{ML}}=\frac{1}{N}\sum_{p=1}^{N}\hat{\sigma}^{2}_{\text{CV}(p)}. \tag{38}\] Denote \(\nu(\mathbf{x})=f(\mathbf{x})^{\top}k(\mathbf{x},\mathbf{x})^{-1}f(\mathbf{x})\). The block matrix inversion formula applied to \(g(\mathbf{x}_{\setminus\{p,i\}}\cup\{x\})\) and the equations in Section 2 for the conditional mean and variance yield \[\frac{[f(x)-m_{\setminus\{p,i\}}(x)]^{2}}{k_{\setminus\{p,i\}}(x)}=\nu(\mathbf{x}_{\setminus\{p,i\}}\cup\{x\})-\nu(\mathbf{x}_{\setminus\{p,i\}}) \tag{39}\] for any \(1\leq p\leq N\) and \(x\notin\mathbf{x}_{\setminus\{p,i\}}\), where we use the convention \(\nu(\mathbf{x}_{\setminus\{N,i\}})=\nu(\emptyset)=0\). For each \(1\leq p\leq N\), \(i\leq C(N,p)\) and \(n\leq p\) there is a unique index \(j(p,i,n)\leq C(N,p-1)\) such that \[\mathbf{x}_{\setminus\{p,i\}}\cup\{x_{p,i,n}\}=\mathbf{x}_{\setminus\{p-1,j(p,i,n)\}}.
\tag{40}\] Setting \(x=x_{p,i,n}\) in (39) gives \[\frac{[f(x_{p,i,n})-m_{\setminus\{p,i\}}(x_{p,i,n})]^{2}}{k_{\setminus\{p,i \}}(x_{p,i,n})}=\nu(\mathbf{x}_{\setminus\{p,i\}}\cup\{x_{p,i,n}\})-\nu( \mathbf{x}_{\setminus\{p,i\}}).\] Therefore \[\begin{split}\sum_{p=1}^{N}\hat{\sigma}^{2}_{\text{CV}(p)}& =\frac{1}{N}\sum_{p=1}^{N}\frac{1}{C(N,p)}\sum_{i=1}^{C(N,p)} \frac{1}{p}\sum_{n=1}^{p}\frac{[f(x_{p,i,n})-m_{\setminus\{p,i\}}(x_{p,i,n})] ^{2}}{k_{\setminus\{p,i\}}(x_{p,i,n})}\\ &=\sum_{p=1}^{N}\frac{1}{C(N,p)}\sum_{i=1}^{C(N,p)}\frac{1}{p} \sum_{n=1}^{p}\big{[}\nu(\mathbf{x}_{\setminus\{p,i\}}\cup\{x_{p,i,n}\})-\nu( \mathbf{x}_{\setminus\{p,i\}})\big{]}.\end{split} \tag{41}\] By (40) from each set \(\mathbf{x}_{\setminus\{p,i\}}\) on level \(p\) (i.e., sets from which \(p\) points have been left out) one can obtain \(p\) sets on level \(p-1\) by adding one of the left-out datapoints. However, there are \(C(N,p)\) sets on level \(p\) and \(C(N,p-1)\) sets on level \(p-1\). Hence for each set \(\mathbf{x}_{\setminus\{p-1,j\}}\) on level \(p-1\) there are \[p\cdot\frac{C(N,p)}{C(N,p-1)}=p\cdot\frac{N!(p-1)!(N-p+1)!}{N!p!(N-p)!}=N-p+1\] combinations of sets \(\mathbf{x}_{\setminus\{p,i\}}\) on level \(p\) and points \(x_{p,i,n}\) left out of these sets such that \(\mathbf{x}_{\setminus\{p,i\}}\cup\{x_{p,i,n}\}=\mathbf{x}_{\setminus\{p-1,j\}}\). Therefore \[\sum_{i=1}^{C(N,p)} \frac{1}{p}\sum_{n=1}^{p}\big{[}\nu(\mathbf{x}_{\setminus\{p,i\} }\cup\{x_{p,i,n}\})-\nu(\mathbf{x}_{\setminus\{p,i\}})\big{]}\] \[=\sum_{i=1}^{C(N,p)}\frac{1}{p}\sum_{n=1}^{p}\nu(\mathbf{x}_{ \setminus\{p,i\}}\cup\{x_{p,i,n}\})-\sum_{i=1}^{C(N,p)}\frac{1}{p}\sum_{n=1}^{ p}\nu(\mathbf{x}_{\setminus\{p,i\}})\] \[=\frac{N-p+1}{p}\sum_{j=1}^{C(N,p-1)}\nu(\mathbf{x}_{\setminus\{p -1,j\}})-\sum_{i=1}^{C(N,p)}\nu(\mathbf{x}_{\setminus\{p,i\}})\] and consequently (41) writes \[\sum_{p=1}^{N}\hat{\sigma}_{\mathrm{CV}(p)}^{2} =\sum_{p=1}^{N}\frac{1}{C(N,p)}\Bigg{[}\frac{N-p+1}{p}\sum_{j=1}^ {C(N,p-1)}\nu(\mathbf{x}_{\setminus\{p-1,j\}})-\sum_{i=1}^{C(N,p)}\nu(\mathbf{ x}_{\setminus\{p,i\}})\Bigg{]}\] \[=\sum_{p=1}^{N}\Bigg{[}\frac{1}{C(N,p-1)}\sum_{j=1}^{C(N,p-1)} \nu(\mathbf{x}_{\setminus\{p-1,j\}})-\frac{1}{C(N,p)}\sum_{i=1}^{C(N,p)}\nu( \mathbf{x}_{\setminus\{p,i\}})\Bigg{]},\] which is a telescoping sum. We are left with \[\sum_{p=1}^{N}\hat{\sigma}_{\mathrm{CV}(p)}^{2}=\frac{1}{C(N,0)}\sum_{j=1}^{C (N,0)}\nu(\mathbf{x}_{\setminus\{0,j\}})-\frac{1}{C(N,N)}\sum_{i=1}^{C(N,N)} \nu(\mathbf{x}_{\setminus\{N,i\}}),\] where \(\nu(\mathbf{x}_{\setminus\{0,j\}})=f(\mathbf{x})^{\top}k(\mathbf{x},\mathbf{ x})^{-1}f(\mathbf{x})\) and \(\nu(\mathbf{x}_{\setminus\{N,i\}})=\nu(\emptyset)=0\). Thus \[\frac{1}{N}\sum_{p=1}^{N}\hat{\sigma}_{\mathrm{CV}(p)}^{2}=\frac{f(\mathbf{x} )^{\top}k(\mathbf{x},\mathbf{x})^{-1}f(\mathbf{x})}{N}=\hat{\sigma}_{\mathrm{ ML}}^{2},\] which establishes (38). ## Appendix B Further discussion on Theorem 10 The requirement of having the same \(V^{2}(f)\) for all sequences of partitions quasi-uniform with constant \(2\) can be relaxed somewhat: trivially, it is sufficient that the quadratic variation is \(V^{2}(f)\) specifically with respect to even-points and odd-points sequences of sub-partitions used in the proof in Section 8.2. Furthermore, we may even have different quadratic variations with respect to said sequences. 
Then the result becomes \[\lim_{N\to\infty}\hat{\sigma}_{\mathrm{CV}}^{2}=\frac{\nu}{T}\qquad\text{for}\qquad\nu=\frac{V_{0}^{2}(f)+V_{1}^{2}(f)}{2},\] where \(V_{0}^{2}(f)\) and \(V_{1}^{2}(f)\) are the quadratic variations with respect to the even- and odd-points sub-partitions, respectively, meaning that \[V^{2}(f) =\lim_{N\rightarrow\infty}\sum_{n=1}^{N-1}(f_{n+1}-f_{n})^{2},\] \[V_{0}^{2}(f) =\lim_{N\rightarrow\infty}\sum_{n=1}^{\lfloor\frac{N-2}{2}\rfloor}(f_{2n+2}-f_{2n})^{2},\] \[V_{1}^{2}(f) =\lim_{N\rightarrow\infty}\sum_{n=1}^{\lfloor\frac{N-1}{2}\rfloor}(f_{2n+1}-f_{2n-1})^{2}.\]
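To close the appendices, here is a small numerical sanity check (our own sketch, not from the paper) of identity (38) from Appendix A: for a modest \(N\) one can enumerate every leave-\(p\)-out split, form \(\hat{\sigma}^{2}_{\mathrm{CV}(p)}\) by brute force with the Brownian motion kernel, and compare the average over \(p\) with \(\hat{\sigma}^{2}_{\mathrm{ML}}=f(\mathbf{x})^{\top}k(\mathbf{x},\mathbf{x})^{-1}f(\mathbf{x})/N\). The sample size and random seed are arbitrary.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 6
x = np.arange(1, N + 1) * T / N                     # design points x_1, ..., x_N; f(0) = 0
K = np.minimum.outer(x, x)                          # Brownian motion kernel k(x, x') = min(x, x')
y = np.linalg.cholesky(K) @ rng.standard_normal(N)  # one sample path evaluated at the design points

def standardized_sq_residuals(train, held_out):
    """Sum over held-out points of [f(x) - m(x)]^2 / k(x), conditioning only on `train`."""
    total = 0.0
    for j in held_out:
        if train:
            Kt = K[np.ix_(train, train)]
            kt = K[train, j]
            w = np.linalg.solve(Kt, kt)
            m, v = w @ y[train], K[j, j] - w @ kt
        else:                                       # conditioning on the empty set: prior mean/variance
            m, v = 0.0, K[j, j]
        total += (y[j] - m) ** 2 / v
    return total

cv_by_p = []
for p in range(1, N + 1):                           # leave-p-out CV estimators, p = 1, ..., N
    splits = itertools.combinations(range(N), p)
    vals = [standardized_sq_residuals([i for i in range(N) if i not in s], list(s)) / p for s in splits]
    cv_by_p.append(np.mean(vals))

sigma2_ml = y @ np.linalg.solve(K, y) / N
print("ML estimate:            ", sigma2_ml)
print("average of LPO-CV terms:", np.mean(cv_by_p))  # identity (38): the two numbers should coincide
```

Up to floating-point error the two printed numbers agree, which is exactly the telescoping identity established above.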
2304.11754
Silent Abandonment in Contact Centers: Estimating Customer Patience from Uncertain Data
In the quest to improve services, companies offer customers the opportunity to interact with agents through contact centers, where the communication is mainly text-based. This has become one of the favorite channels of communication with companies in recent years. However, contact centers face operational challenges, since the measurement of common proxies for customer experience, such as knowledge of whether customers have abandoned the queue and their willingness to wait for service (patience), are subject to information uncertainty. We focus this research on the impact of a main source of such uncertainty: silent abandonment by customers. These customers leave the system while waiting for a reply to their inquiry, but give no indication of doing so, such as closing the mobile app of the interaction. As a result, the system is unaware that they have left and waste agent time and capacity until this fact is realized. In this paper, we show that 30%-67% of the abandoning customers abandon the system silently, and that such customer behavior reduces system efficiency by 5%-15%. To do so, we develop methodologies to identify silent-abandonment customers in two types of contact centers: chat and messaging systems. We first use text analysis and an SVM model to estimate the actual abandonment level. We then use a parametric estimator and develop an expectation-maximization algorithm to estimate customer patience accurately, as customer patience is an important parameter for fitting queueing models to the data. We show how accounting for silent abandonment in a queueing model improves dramatically the estimation accuracy of key measures of performance. Finally, we suggest strategies to operationally cope with the phenomenon of silent abandonment.
Antonio Castellanos, Galit B. Yom-Tov, Yair Goldberg
2023-04-23T21:43:03Z
http://arxiv.org/abs/2304.11754v2
# Silent Abandonment in Contact Centers: Estimating Customer Patience from Uncertain Data ###### Abstract In the quest to improve services, companies offer customers the opportunity to interact with agents through contact centers, where the communication is mainly text-based. This has become one of the favorite channels of communication with companies in recent years. However, contact centers face operational challenges, since the measurement of common proxies for customer experience, such as knowledge of whether customers have abandoned the queue and their willingness to wait for service (patience), are subject to information uncertainty. We focus this research on the impact of a main source of such uncertainty: _silent abandonment_ by customers. These customers leave the system while waiting for a reply to their inquiry, but give no indication of doing so, such as closing the mobile app of the interaction. As a result, the system is unaware that they have left and waste agent time and capacity until this fact is realized. In this paper, we show that 30%-67% of the abandoning customers abandon the system silently, and that such customer behavior reduces system efficiency by 5%-15%. To do so, we develop methodologies to identify silent-abandonment customers in two types of contact centers: chat and messaging systems. We first use text analysis and an SVM model to estimate the actual abandonment level. We then use a parametric estimator and develop an expectation-maximization algorithm to estimate customer patience accurately, as customer patience is an important parameter for fitting queueing models to the data. We show how accounting for silent abandonment in a queueing model improves dramatically the estimation accuracy of key measures of performance. Finally, we suggest strategies to operationally cope with the phenomenon of silent abandonment. M 1 M 1 M 1 M 1 M 1 M 1 M 1 M 1 M 1 M 1 ## 1 Introduction The field of service engineering relies on the ability to measure proxies for customer experience in a service system. Two of the most common operational measures that are used as such proxies are customer waiting and abandonment of the queue. Both are crucial measures of performance for understanding customer's willingness to wait for service, which in turn is crucial for making operational decisions (Mandelbaum and Zeltyn, 2013; Garnett et al., 2002). Waiting happens when a customer enters the service system, but the system does not have an available service agent to serve her. Abandonment (Ab) naturally occurs when such waiting is too long and exceeds the customer's willingness to wait (henceforth, patience). Different streams of literature study different aspects of customer patience, such as its distribution (e.g., Gans et al., 2003), its connection to service utility (e.g., Aksin et al., 2013), its manipulation (e.g., Armony et al., 2009, Aksin et al., 2017), and more. But, the literature on the estimation of customer patience and its implications for the optimization of operational decisions (e.g., staffing and routing) assumes accurate and complete knowledge of customer abandonment. However, when studying some contact centers, we face the problem of not being able to know whether a customer abandoned or received service, as we will explain shortly. This uncertainty creates a situation where the company is unsure of the service quality they provide to their customers and how efficiently they use their resources. This in turn may lead to problematic operational decisions. 
In this paper, we concentrate on a specific type of uncertainty that relates to a specific type of customer behavior in contact centers. We term the behavior in question _silent abandonment_ (Sab). A Sab customer is a customer that leaves the system while waiting in the queue but gives no indication of doing so in real time (i.e., she does not close the chat window/application when abandoning). Therefore, when an agent becomes available, the (abandoning) customer is assigned to that agent. Only after all of the agent's inquiries go unanswered for some time does the agent (or system) realize that the customer has abandoned the queue without notifying the system and the agent (or system) closes the chat. We find that this situation creates two problems of **information uncertainty**: (a) _missing data_: the system may not be aware (even in retrospect) whether a customer silently abandoned the queue or was served. Most companies assume the latter, thereby biasing quality measurements (for a detailed definition of the concept of missing data see Little and Rubin, 2002); and (b) _censored data_: the system may be aware that the customer silently abandoned the queue but it does not know exactly when, thereby censoring the data on customer patience (for a discussion of censored data see Smith, 2002). In addition, Sab customers create two operational problems of **agent efficiency**: (a) _idleness_: the agent waits for inquiries from a customer that is no longer there; and (b) _wasted work_: the agent tries to solve problems that have already been solved by the customer herself or by another agent (such as when the customer writes an inquiry before entering the queue and then abandons the queue and uses a different channel of communication such as a phone call), thereby, creating confusion, frustration, and wasted effort. We note that silent abandonment is more likely to happen when the system is overloaded with customers and waits are long. During such periods a significant number of the agents are likely to be either idle or "busy" with abandoning customers, wasting critically needed capacity. Moreover, during these times, the Sab customers are taking the places of customers that want service and are actually waiting in the queue. Finally, we note that silent abandonment results in inaccurate measurements of queue length. Therefore, any algorithm that uses that information (e.g., for delay announcement; see Armony et al., 2009; Ibrahim and Whitt, 2009) would need to be adjusted to allow for silent abandonment. The context of this research is contact centers, which are an important part of the digital revolution the service industry is undergoing. Services are becoming ever more automatic and easy to use, as service companies branch into more accessible service channels such as mobile applications. Technology allows modern-day companies to replace traditional service encounters (face-to-face, telephone) with technology-mediated service encounters (Massad et al., 2006; van Dolen and de Ruyter, 2002), which allow customers and service employees to be in different locations and connect via a technological interface (Schumann et al., 2012; Froehle and Roth, 2004). Nowadays employees and customers can interact through social media (e.g., Twitter or Facebook), corporate websites (e.g., chats), or messaging applications (e.g., WhatsApp and WeChat). This enables customers to interact with agents through contact-center platforms similar to those that they use to contact their family and friends. 
Therefore, it should come as no surprise that contact centers are slowly substituting call centers as the preferred way for customers to communicate with companies. Indeed, a survey conducted by a cloud-based communications provider found that 78% of the customers preferred to text with the company rather than call their call center (RingCentral, 2012). Our paper uses data from two types of contact centers: chat and messaging service systems. In chat services the customers communicate with the company via a web browser while in messaging services the communication is typically through a mobile application. Even though they are very similar in the sense that customers communicate with agents via short text messages, there are important differences between the two that relate to the main challenges of this paper regarding information uncertainty. In Section 2 we describe in detail how the operations of these two systems differ and present the data we have from each one. It is worthy noting that the digital revolution provides the service industry with new opportunities to improve services (Rafaeli et al., 2017; Altman et al., 2019) but also with new operational challenges. Operating chat- and messaging-based contact centers is substantially different from operating call centers. For example, in chat and messaging service systems, unlike in call centers or face-to-face services, agents can provide service to multiple customers concurrently (Goes et al., 2018; Tezcan and Zhang, 2014; Luo and Zhang, 2013). We claim that the information uncertainty that results from the phenomenon of silent abandonment creates a need to redefine the basic methods of measuring quality and efficiency, as well as to develop methodologies to estimate customer patience. This is the focus of this paper. We also show that models that account for the silent-abandonment phenomenon and incorporate the methodologies we develop here fit the data of chat and messaging centers much more accurately than a regular Erlang-A model (Palm, 1957). Finally, we measure and discuss the implications of silent abandonment on system performance and managerial practices. The phenomenon of silent abandonment may appear also in healthcare systems. For example, in emergency departments (ED) a patient may abandon the queue but tell no one, leaving without being seen (LWBS) by a medical practitioner. ED abandonment increases the risk of a patient suffering an adverse outcome, increases the probability of the patient returning (in the study of Baker et al. 1991, 51% of the abandoning patients saw a physician within a week of leaving the system), and impacts hospital revenue (Batt and Terwiesch 2015). According to Medicare (2018), the national average of LWBS patients was 2% during 2018, for US EDs. Closely related to the phenomenon of silent abandonment in contact centers is the phenomenon of patients' no-shows to medical appointments. A no-show customer does not arrive to a scheduled appointment and fails to notify the system in advance. This creates censored data similar to silent abandonment, but not missing data, since in hindsight complete information is observed regarding patient service (or lack thereof). The scope of no-show customers can be as high as 23% to 34% (Liu 2016). Ho and Lau (1992) showed that no-shows strongly affect system performance because of loss of capacity and forced idleness of physicians. This is something that we claim happens also in contact centers. 
Several methodologies have been suggested to cope with no-show customers, such as overbooking (Vissers 1979) and reminders (Geraghty et al. 2008). However, in contact centers arrivals are not known in advance, and therefore other mechanisms are needed to cope with the phenomenon. Another difference between no-show customers and Sab customers relates to our points (a) and (b) regarding agent efficiency, presented above. In medical appointments it can be observed whether or not a patient shows up for her appointment, and this information is realized as soon as her service is supposed to start (without delay). Therefore, in the no-show case the agent can immediately start serving the next patient instead of waiting for the one who does not show up (assuming that there is a next patient at the clinic). But in contact centers, since the customer is not physically present in front of the agent, there is no indication that the customer has abandoned the queue, and this information is realized only after a few minutes of wasted agent effort. Therefore, overbooking can mitigate the efficiency loss of no-shows, whereas silent abandonment requires other solutions, as we suggest in Section 5. Another closely related phenomenon is service failure. For example, Carmeli et al. (2019) analyzed the impact of service failure (they called it abandonment during service) on the design of Interactive Voice Response (IVR) systems and websites, where a customer may or may not successfully complete a self-service. They show the impact of estimating the proportion of customers that had an unsuccessful service (17%) on system design. In the present paper, we only consider queue abandonment, and make a similar claim that silent abandonment has an impact on system design. Our research can also be related to research on queue inference, where queue statistics are deduced from limited information; e.g., Larson (1990). However, to the best of our knowledge no work has addressed the problem of missing data in our context.

### Research Goals

The present paper concentrates on the following goals:

_Estimate the scope of the silent-abandonment phenomenon_. We want to estimate how many customers silently abandon the queue in contact centers. This is similar to estimating the scope of queue abandonment in healthcare. In fact, our goal is to be more precise than prior studies, by analyzing silent abandonment at the level of the individual customer. Hence, we attempt to solve the missing data problem. In Section 3 we construct classification models that estimate the probability of silent abandonment by a specific customer. Using data on customer and agent behavior, we analyze, among other things, customer sojourn time as well as the text messages of the customer and agent. We find that around one-third of the abandoning customers in the chat system dataset, and around two-thirds in the messaging system dataset, are Sab customers.

_Create an algorithm to estimate customer patience in the presence of silent abandonment._ Gans et al. (2003) reviewed methods for estimating customer patience, based on call-center applications. As we mentioned, customer behavior in contact centers differs from customer behavior in call centers. To our knowledge no paper has attempted to estimate customer patience in contact centers, although finite patience has been considered in optimization models of contact centers (cf. general patience in Long et al. 2018 and exponential patience in Tezcan and Zhang 2014).
To estimate call-center patience, Mandelbaum and Zeltyn (2013) assumed that customer patience time, \(T\), and virtual wait time, \(W\), are exponentially distributed with rates \(\theta\) and \(\gamma\), respectively. Specifically, they developed a maximum likelihood estimator for estimating customer patience from right-censored data. Inspired by the LWBS phenomenon in EDs, Yefenof et al. (2018) extended their estimator to left-censored patience data, which is created by patients who do not announce their abandonment time. (To that end, they developed both parametric and non-parametric methods.) As we will demonstrate later, their estimators are suitable for estimating customer patience in chat systems, where the only type of information uncertainty that exists is censored data. However, in messaging systems, system design and silent abandonment create both of the above-mentioned types of uncertainty in the data (i.e., censored data and missing data). Therefore, we develop a new method for estimating customer patience that addresses the additional problem of missing data. In Section 4, we develop an expectation-maximization algorithm to estimate customer patience given both types of information uncertainty. It is important to estimate system parameters as accurately as possible, since performance measures of queueing systems are sensitive to inaccuracies in such estimations (Whitt, 2006). We show in Section 5 that, indeed, a more accurate estimation of customer patience, one that takes into account the phenomenon of silent abandonment, significantly improves the fit of the queueing model to the data. _Analyze the operational implications of silent abandonment_. In Section 5, we develop a queueing model that captures the dynamics of contact centers in the presence of silent abandonment. We estimate the amount of time companies waste due to the phenomenon of silent abandonment, and analyze the implications of such wasted time on system performance. We conclude with a discussion of how a bot or a classification model for identifying silent abandonment in real time may be used to reduce the impact of Sab customers on the system. ## 2 Data and Research Setting For the purposes of our research we have acquired and analyzed data from both of the aforementioned contact centers, namely, chat and messaging systems. The data was provided by LivePerson Inc., a company that builds computational infrastructures for the contact-center industry. As mentioned above, the differences in the way chat and messaging contact centers operate and in the way people use them have an impact on information uncertainty. Therefore, both environments will be used for this research. The description of the two systems and their data is given in the following two subsections. ### Chat Systems Chat systems are used for browser-based, one-time, short interactions (around 12 minutes on average) with service companies. The process of communication is as follows: a customer requests service by pressing a "contact us" button on the company website. Once the request for service arrives to the system, the system assigns the customer to a service agent. If no service agent is available, the customer enters a queue and waits for an assignment to an available agent. An agent can serve multiple customers concurrently: the maximal level of concurrency in the chat data is three customers per agent. The agent sends a greeting to indicate to the customer that she can proceed to write her inquiry. 
The full interaction contains several agent and customer messages that the two parties send one another. Due to concurrency, agent response time may also include short waits (if the agent is busy answering another customer). The data is extracted from 18,497 service interactions conducted in February 2017. The data includes general information on each chat as well as on each line written. Each chat is identified by chat ID, employee ID, date, the amount of time the customer waited in the queue before the chat started, whether the customer abandoned the queue by closing the chat window and at what time, the time an agent was assigned to that chat, the time the chat ended, the device used for the communication, type of service (e.g., sales or support), and more. Each chat line in the data contains the following information: a time-stamp of when the line was sent, a notation of who wrote that line (customer, agent, or system), and the number of words written in the line. The data also includes information on the work status of each service agent (online, offline, on break, or idle) during the workday. Each agent's load is estimated by analyzing the agent's activities with customers when the agent is online. The contact center is open 7 days a week from 8:00 to 22:00. The average number of arrivals per hour is 51.58 customers. The arrival rate varies with the hours of the day. The pattern of the hourly arrival rate is typical of service systems. The mean number of agents working per hour is 12.45. Within our data, the average customer length of stay (LOS), or chat duration, is 11.65 minutes (\(SD\!=\!9.98\)) (the average includes the LOS of the Sab customers). The average wait time in queue is 2.37 minutes (\(SD\!=\!3.88\)). ### Messaging Systems Messaging systems are typically used for interacting with known customers, i.e., as part of a long-term relationship the company has with that customer. The communication is usually conducted through smart-phone applications, such as Facebook, WeChat, or iMessage. Compared to chats, we observe that messaging interactions generally have a much longer duration: around 49.2 minutes. The interaction is more casual than in chat systems, and the messaging service reinforces this notion to customers by sending them an automatic message instructing them to address the service as if "talking to a friend." A very important difference between messaging and chat systems is that in messaging systems a customer initiates a service request by writing a detailed inquiry. As a result, explicit information regarding the customer's problem is known before she enters the queue. This small difference is the first source of information uncertainty we mentioned in the Introduction: missing data. Because operational data does not indicate whether a service was provided or not, we need to analyze the written text in order to gain such an indication. We will elaborate on this fact in more detail in Section 3. We acquired, from a messaging contact center, data on 337,224 service interactions conducted during the month of May 2017. It includes detailed information on all the conversations (exactly as with the chat data). The messaging system operates 24/7, and it has a higher load than the chat system. The average number of arrivals is 594.79 per hour. The arrival rate varies with the hours of the day in accord with a typical service system pattern. The mean number of online agents per hour is 134.69. 
The mean concurrency level of agents is around 5.4 customers per agent (\(SD\!=\!4\)). Average customer LOS (from entering the queue until the last message was written in that conversation) is 49.2 minutes (\(SD\!=\!64\)), including LOS of Sab customers. The average wait in the queue is 9.28 minutes (\(SD\!=\!20.4\)).

## 3 Estimating the Scope of Silent Abandonment as a Source of Information Uncertainty

In this section we build models that identify which conversations can be classified as silent abandonment with high probability. This will enable us to estimate the percentage of Sab customers. In addition, such information also enables us to estimate the time it takes for the service agents to realize that a Sab customer has abandoned the queue. We conduct separate analyses of the two types of contact centers we are working with (chat systems and messaging systems) due to the difference in their service process.

### Estimating the Scope of Silent Abandonment in Chat Systems

The company that provided us with the chat system dataset erroneously estimates the percentage of abandoning customers by counting only customers that left the system by closing the window of the interaction. Indeed, those customers provided a clear indicator that they abandoned the system. We refer to this type of customer abandonment as _known abandonment_ (Kab). The proportion of Kab customers in the chat data is 14%. We claim that this is an underestimation of the proportion of abandoning customers since it ignores the phenomenon of silent abandonment. That is, the chat company does not account for the customers that arrived at the system, got assigned to an agent, but did not communicate with that agent at all--customers who clearly abandoned the system during their wait. Since these customers gave no indication that they were leaving, the system was unaware of their abandonment and assigned them to an agent. Therefore, we can identify the conversations in which customers silently abandoned the queue by checking whether a conversation includes system and agent messages but no customer messages. Using this method, we found that Sab customers constitute 6% of all customers arriving to the chat system. Therefore, the correct estimation of the probability of abandonment is 20%, emphasizing our claim that the company is unaware of the actual service level it provides. Moreover, out of all the abandoning customers, 30% abandon the queue silently. We can use the silent-abandonment classification to estimate the time it takes for an agent to realize that the customer actually (silently) abandoned the queue. On average, this takes 4.32 minutes (\(SD=7.15\)). This is the time in which the agent keeps trying to communicate with the customer and gets no reply, i.e., the time the agent "wastes" on that customer. Therefore, if we exclude the silent-abandonment conversations from the conversation data, we see that the average _served_ customer LOS is 12.25 minutes (\(SD=10.683\)). Of those 12.25 minutes, 51% are customer response time and 49% are agent response time. From the agent's perspective, 7% of the chats she handles during the day are silent-abandonment chats and 93% are served customer chats. We can compute the percentage of time agents spend on silent-abandonment chats by dividing the time spent on Sab conversations by total work time. Hence, \[\text{Effort}=\frac{0.07*4.32}{0.07*4.32+0.93*12.25*0.49}=0.05,\] i.e., agents spend 5% of their work time engaging in Sab conversations.
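For readers who want to reproduce this back-of-the-envelope computation, the short Python sketch below recomputes the wasted-effort share from the quantities reported above. The function and argument names are ours and purely illustrative; the numbers are the chat-system figures quoted in the text.

```python
def sab_effort_share(p_sab, sab_handle_time, p_served, served_los, agent_share):
    """Share of agent work time spent on silently abandoned conversations.

    p_sab, p_served: shares of the chats an agent handles that are Sab / served;
    sab_handle_time: average minutes until a Sab chat is identified and closed;
    served_los: average LOS (minutes) of a served chat;
    agent_share: fraction of a served chat's LOS that is agent response time.
    """
    wasted = p_sab * sab_handle_time
    productive = p_served * served_los * agent_share
    return wasted / (wasted + productive)

# Chat-system figures from the text: 7% Sab chats handled for 4.32 minutes on
# average, 93% served chats with LOS 12.25 minutes, 49% of which is agent time.
print(round(sab_effort_share(0.07, 4.32, 0.93, 12.25, 0.49), 2))  # -> 0.05
```

The messaging-system calculation in the next subsection follows the same logic, with an additional productive term in the denominator for short-service conversations (see Equation (1)).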
We will show the impact of this effort in Section 5. ### Estimating the Scope of Silent Abandonment in Messaging Systems In the case of the messaging system, the company also underestimates the proportion of abandoning customers by taking into account only the known abandonments. The proportion of Kab customers in the messaging dataset is 7.2% of the customer population, much lower than in the chat dataset. Here the classification of silent abandonment is much more problematic due to the problem of missing data. As mentioned, in messaging systems, the customers usually write down their problems before entering the queue and, therefore, it is hard to distinguish between short conversations in which the customer was _shortly served_--those with at least one agent reply to the customer inquiry but no customer reaction--and conversations in which the customer _silently abandoned_ the queue before the agent replied--those with agent requests for further details from the customer but no customer reaction. In other words, a customer who is shortly served is a customer who writes an inquiry, the agent solves the problem (as is clear from the agent's reply), but the customer is impolite and does not even say "Thank you." By contrast, the Sab customer writes an inquiry, the agent replies but does not solve the problem (e.g., the agent asks for additional information), and the customer does not respond any further. With this description in mind, we can see that to know which customer silently abandons the queue, in messaging systems, we need to take a closer look at the conversation _text_. We refer to the whole group of uncertain conversations in messaging systems, which includes both the short-service and the silent-abandonment conversations, as _uncertain silent abandonment_ (uSab) conversations. In the messaging dataset, this group accounts for 26.2% of all the conversations. Figure 1 presents two examples of uncertain silent abandonment conversations. Figure 1(a) gives an example of a short-service conversation where the customer inquiry was solved, while Figure 1(b) gives an example of a silent-abandonment conversation where the customer abandoned the queue without indication. Next, we build an automated classification model to distinguish the conversations of customers who silently abandoned the queue. The dataset we use comprises a random sample of 550 uSab conversations. We manually tag those conversations into the two groups--short-service or silent abandonment--by reading the text of the whole service interaction. We compare the performance of several machine-learning classification methods: logistic regression (stepwise backward and with a ridge penalty), support-vector machines (SVM), k-nearest neighbors (k-NN), and classification tree (additionally, we pruned the tree). The classification models use textual features extracted from the conversation transcript as well as meta-data, such as wait time and system load, as described in Section 2.2. The data is randomly separated into training and test sets containing 75% and 25% of the conversations, respectively. We denote by \(\pi_{i}\) the probability that customer \(i\) silently abandoned the queue, given that this conversation is part of the uncertain silent abandonment group. Formally, \(\pi_{i}\triangleq Pr\left\{silent\:abandonment_{i}\:|\:uncertain\:silent\: abandonment_{i}\right\}\). Using the above methods, we are able to estimate \(\pi_{i}\), for each individual conversation \(i\). 
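To illustrate how such a classifier can be set up, the sketch below fits an SVM with probability outputs on a combination of bag-of-words text features and operational meta-data, using a 75/25 train/test split. The toy data frame, column names, and feature choices are hypothetical stand-ins for the 550 manually tagged conversations and the features described above; the sketch shows the general pipeline, not the exact model used in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Toy stand-in for the manually tagged uSab conversations (hypothetical rows).
rng = np.random.default_rng(0)
n = 60
is_sab = rng.binomial(1, 0.5, n)                       # 1 = silent abandonment
sab_texts = ["are you there", "hello anyone there", "still waiting for a reply"]
srv_texts = ["thanks that worked", "great thanks bye", "ok got it thanks"]
data = pd.DataFrame({
    "conversation_text": [str(rng.choice(sab_texts if s else srv_texts)) for s in is_sab],
    "wait_minutes": rng.exponential(9.0, n) + 6.0 * is_sab,
    "system_load": rng.uniform(0.3, 1.0, n),
    "is_sab": is_sab,
})

# Text features (bag of words) plus operational meta-data, fed to an SVM that
# outputs probability estimates pi_i = P(Sab | uncertain silent abandonment).
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "conversation_text"),
    ("meta", "passthrough", ["wait_minutes", "system_load"]),
])
clf = Pipeline([("features", features),
                ("svm", SVC(kernel="rbf", probability=True))])

X = data.drop(columns="is_sab")
X_train, X_test, y_train, y_test = train_test_split(
    X, data["is_sab"], test_size=0.25, random_state=0, stratify=data["is_sab"])
clf.fit(X_train, y_train)
pi_hat = clf.predict_proba(X_test)[:, 1]               # estimated pi_i per conversation
```

In practice one would fit the competing models (logistic regression, k-NN, classification trees, and so on) on the same split and compare them, as reported below.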
We compare the classification models using the Receiver Operating Characteristic (ROC) curve, presented in Figure 2. The ROC plots the True Positive Rate (TPR) against the False Positive Rate (FPR) for varying threshold levels. The ROC curve is a recognized method for comparing performance of different classification methods in a visualized way and for selecting the best threshold to work with (Fawcett, 2006). A standard characteristic in that regard is the area under the ROC curve (AUC), presented in Table 1. Using this criterion, we conclude that the best classification methods for our problem are the SVM model and the classification tree, for which the AUC is 0.85. Models with an AUC above 0.80 are considered "excellent" classification models (Hosmer and Lemeshow, 2002). Details about SVM and the classification tree can be found in Appendix A.

Figure 1: Examples of Uncertain Silent-Abandonment Messaging Conversations

The SVM model includes, among other things, the following features: specific words written in the conversation, customer experience (e.g., amount of time the customer waited in the queue), and agent's work time (e.g., amount of time the agent engaged with the customer). To select a specific threshold level for the SVM model, we find the threshold that maximizes the sensitivity (TPR) and specificity (1-FPR) proportions, i.e., maximizes the proportion of silent-abandonment and short-service conversations that are correctly identified. We find that the optimal threshold is 0.47 with a sensitivity proportion (TPR) of 85% and a specificity proportion (1-FPR) of 76%. With this information, we are able to obtain \(\hat{\pi_{i}}\) for every uSab conversation in our full messaging dataset and to state that out of the group of uSab conversations, which constituted 26.2% of all the conversations in the messaging data, 55% are silent-abandonment conversations and 45% are short-service conversations. This means that the actual proportion of abandoning customers in this dataset is 21.6%, far above the estimation of 7.2% abandonment that the company currently has. Moreover, out of all the abandoning customers, we find that 67% are Sab customers (14.4% of all arriving customers). This information highlights the importance of taking Sab customers into account in order to correctly evaluate performance levels in contact centers. To estimate the LOS of a customer that (silently) abandoned the queue and could not be served, we calculate the average conversation duration of the silent-abandonment conversations (using the above 0.47 threshold). We find that it takes, on average, 19.37 minutes (\(SD\!=\!26.68\)) for an agent to identify a silent-abandonment conversation. This is the average time that channel capacity is wasted while the agent tries to communicate with the departed customer (note that she might be serving other customers concurrently). Given the uncertain conversation classification, we can estimate that the average duration of short-service conversations is 55.63 minutes (\(SD\!=\!105.77\)). This finding reveals that shortly served customers have a longer LOS than Sab customers, which is not easily observed in the distribution of the LOS.
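The threshold analysis above can be reproduced along the following lines: compute the ROC curve and AUC on the held-out tagged conversations, choose the cutoff that maximizes sensitivity plus specificity (TPR + (1 - FPR), i.e., Youden's J statistic), and then combine the classifier's verdicts with the observed class shares. The label and score arrays below are synthetic placeholders standing in for the outputs of a fitted classifier such as the one sketched earlier.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
y_true = rng.binomial(1, 0.55, 140)                                 # placeholder manual tags
scores = np.clip(0.45 * y_true + rng.normal(0.3, 0.2, 140), 0, 1)   # placeholder pi_i scores

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
best = int(np.argmax(tpr + (1.0 - fpr)))                            # maximize TPR + (1 - FPR)
cutoff = thresholds[best]
print(f"AUC={auc:.2f} cutoff={cutoff:.2f} TPR={tpr[best]:.2f} 1-FPR={1 - fpr[best]:.2f}")

# Combining the classifier's verdict on the uSab class with the class shares
# reported in the text reproduces the aggregate abandonment estimate:
share_sab_in_usab = 0.55        # share of uSab conversations classified as Sab
p_usab, p_kab = 0.262, 0.072    # uSab and Kab shares of all conversations
print(p_kab + p_usab * share_sab_in_usab)                           # ~0.216 overall abandonment
```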
Measuring agent effort in treating Sab customers, we find that 15.5% of the inquiries that the agent answers are from Sab customers, 12.7% from short-service customers, and 71.8% are from served customers.

\begin{table} \begin{tabular}{l c} \hline Model & AUC \\ \hline SVM & 0.85 \\ Tree & 0.85 \\ Logistic Regression: Stepwise & 0.83 \\ Tree Pruned & 0.82 \\ Logistic Regression: Ridge & 0.71 \\ k-NN & 0.65 \\ \hline \end{tabular} \end{table} Table 1: Area under the ROC

Figure 2: ROC Curve

Dividing the time spent on Sab conversations by the total work time reveals that agents spent 11% of their work time dealing with Sab conversations: \[\text{Effort}=\frac{0.155*19.37}{0.155*19.37+0.127*55.63+0.718*49.2*0.4891}=0.11. \tag{1}\] The effort wasted on Sab customers in the messaging system is two times higher than in the chat system, showing how a small change in the service process in contact centers can drive system inefficiency. This finding highlights the importance of identifying silent abandonment as soon as possible, to improve system efficiency.

## 4 Estimating Customer Patience with Silent Abandonment

Our next problem is to estimate customer patience in contact centers. As mentioned in Section 1.1, data on customer patience is censored. When the customer abandons the queue and provides an indication of doing so--a known abandonment--she provides exact information regarding her patience. Indeed, her patience equals her wait time. However, how long the customer would be required to wait if she were to stay in the queue, i.e., her _virtual wait time_, is unknown. Therefore, patience acts as a lower bound for virtual wait time. When the customer is served, her wait time is actually a lower bound for her true patience. Therefore, the data is right-censored by the virtual wait time (itself uncensored). This type of right-censoring was studied by Mandelbaum and Zeltyn (2013) using call-center data; we refer to their estimator as _Method 1_. In contrast to call centers, in contact centers data on patience is also left-censored due to the silent-abandonment phenomenon. Indeed, when a customer abandons the queue without indicating that she has done so--a silent abandonment--her wait time equals her virtual wait time (itself uncensored); thus, the wait time is an upper bound for her real patience as her patience was clearly less than her wait time. Yefenof et al. (2018) addressed this situation, motivated by LWBS in EDs; we refer to their estimator as _Method 2_. As we mentioned, in chat systems we have complete data; therefore, Method 2 can be used to estimate customer patience since similar conditions exist. We apply Method 2 to chat data in Section 5. But messaging systems require a new methodology for patience estimation, because of the added complexity missing data brings to customer classification. Indeed, this situation requires a different approach, which will be the focus of this section. In Section 4.1 we develop our expectation-maximization (EM) algorithm for estimating customer patience in messaging systems, and in Section 4.2 we validate its accuracy, sensitivity, and robustness.

### The EM Algorithm: Model Assumption and Formulation

The problem of missing information on uSab customers stems from the fact that we do not know whether they received short service, in which case their patience would be right-censored, or whether they silently abandoned, in which case their patience would be left-censored.
Nonetheless, we know the length of time these customers waited in the queue, and hence their virtual wait time is uncensored. Following the formulation of Yefenof et al. (2018), let \(T\) be customer patience time (failure time) and assume that it has a cumulative distribution function (cdf) \(F\) and a probability distribution function (pdf) \(f\). Assume that \(T\sim exp(\theta)\). This assumption follows Brown et al. (2005), who showed, using call-center data on served and abandoning customers, that the patience distribution has an exponential tail. We also show, in Section 5, that queueing models with exponentially distributed patience fit contact-center data better than queueing models with generally distributed patience, providing further support for our assumption. Let \(W\) be the virtual wait time (censoring time), i.e., the time the customer is required to wait by the system, and assume that it has a cdf \(G\) and a pdf \(g\). We know from queueing theory that in overloaded systems, like the contact centers we are investigating, wait time is close to exponentially distributed (Kingman, 1962). In addition, Brown et al. (2005) showed that in call-center data with served and abandoning customers, virtual wait time is close to exponentially distributed. In our dataset we have served and abandoning customers; hence, we can make a realistic assumption that the virtual wait time is exponentially distributed. This assumption is confirmed by fitting an exponential distribution to the simulated virtual wait time distribution in the queueing model in Section 5. Formally, assume that \(W\sim exp(\gamma)\). Let \(\Delta\) be an indicator for the case where the customer lost patience before the agent replied, i.e., \(\Delta\triangleq 1_{\{T\leq W\}}\). Conversations in which information regarding \(\Delta_{i}\) is missing are assigned a null value. Let \(Y\) be a random variable indicating whether the customer will inform the system when abandoning. We assume that \(Y\sim Bernoulli(q)\), where \(q\) is the probability that the customer will inform the system when abandoning; formally, \(q\triangleq Pr\left\{Indicate\, abandonment\right\}\). Assume that \(W\) and \(T\) are independent, as is frequently done in right-censoring survival analysis (e.g., Smith, 2002; Mandelbaum and Zeltyn, 2013; Yefenof et al., 2018). Moreover, this is a natural assumption in contact centers since patience is decided by the individual customer while the virtual wait time is decided by the company. This is indeed the case in our contact centers, where no delay information is provided to the customer, such as her place in queue (which might remind her that she is waiting in the queue). Additionally, we assume that \(Y\) and \(W\) are independent. That is, the decision of a customer to indicate whether she is abandoning the queue is independent of her wait time. For example, a customer might tend to leave windows open on her computer even when she is not using them; this tendency would be independent of the wait time. We assume that \(Y\) and \(W\) are independent for tractability reasons; currently we do not have evidence to support this assumption and suggest that it be relaxed in future research. Finally, let \(U\) be the system's observed time. For each arriving customer \(i\) we observe the vector of data \((U_{i},Y_{i},\Delta_{i})\), \(i=1,...,n\). Summarizing, our model rests on the following assumption: **Assumption 1**: 1. _Customer patience time is_ \(T\sim exp(\theta)\)_._ 2. _Virtual wait time is_ \(W\sim exp(\gamma)\)_._ 3.
_Customer abandonment indicator is_ \(Y\sim Bernoulli(q)\)_._ 4. \(W\) _and_ \(T\) _are independent._ 5. \(Y\) _and_ \(W\) _are independent._ #### 4.1.1 Customer Classes with Complete Data In Table 2 we formally define three customer classes under the assumption of complete data on which customers abandoned. The table identifies each customer class by type, notation indicator, and formal definition (based on values \(\Delta\) and \(Y\), and observed time \(U\)). Remark: Note that in chat systems the data on which customers abandoned is complete; i.e., there are no missing values in \(\Delta\). Therefore, we can categorize the conversations into the above three classes with complete certainty; i.e., we know exactly to which class each customer belongs. #### 4.1.2 Customer Classes with Missing Data Due to the problem of missing data on the uSab conversations in the messaging system, we are not able to categorize all the conversations into just one of the classes we defined in Section 4.1.1. Therefore, we need to formulate additional class indicators. Let \(M\) denote the customer classes in a system in which there is missing data on which individual customers abandoned. These classes are defined in Table 3. Formally: \[M^{i}=1 \Longrightarrow C_{1}^{i}=1.\] \[M^{i}=2 \Longleftrightarrow C_{2}^{i}=1.\] \[M^{i}=0 \Longrightarrow C_{1}^{i}=1\text{ or }C_{3}^{i}=1.\] \begin{table} \begin{tabular}{l c c c c c} \hline Class Type & Notation Indicator & Formal Definition & \(\Delta\) & \(Y\) & \(U\) \\ \hline Service & \(C_{1}=1\) & \(1-\Delta\) & \(0\) & \(0\) & \(W\) \\ Known Abandonment & \(C_{2}=1\) & \(Y\Delta\) & \(1\) & \(1\) & \(T\) \\ Silent Abandonment & \(C_{3}=1\) & \((1-Y)\Delta\) & \(1\) & \(0\) & \(W\) \\ \hline \end{tabular} \end{table} Table 2: Classes of Customers: Complete Data #### 4.1.3 The EM Algorithm Formulation The EM algorithm estimates the following parameters simultaneously: the rate at which customers lose patience, \(\theta\), the probability of informing the system when abandoning, \(q\), and the rate of the virtual wait time distribution, \(\gamma\). The optimization problem is defined to maximize the likelihood function. The likelihood function measures the probability that the observations are given from the assumed distributions given the parameters \((\theta,q,\gamma)\). We write the likelihood of the observed data \(D\triangleq\{(U_{i},Y_{i},\Delta_{i}),\,i=1,...,n\}\) as follows: \[\begin{split} L(D;\theta,q,\gamma)=&\prod_{i=1}^{n }\left\{e^{-\theta U_{i}}\gamma e^{-\gamma U_{i}}\right\}^{C_{1}^{i}}\left\{q \theta e^{-\theta U_{i}}e^{-\gamma U_{i}}\right\}^{C_{2}^{i}}\left\{(1-q)(1-e ^{-\theta U_{i}})\gamma e^{-\gamma U_{i}}\right\}^{C_{3}^{i}}\\ =&\prod_{i=1}^{n}\left\{e^{-\theta U_{i}}\gamma e^{- \gamma U_{i}}\right\}^{1-\Delta_{i}}\left\{q\theta e^{-\theta U_{i}}e^{- \gamma U_{i}}\right\}^{\Delta_{i}Y_{i}}\left\{(1-q)(1-e^{-\theta U_{i}}) \gamma e^{-\gamma U_{i}}\right\}^{(1-Y_{i})\Delta_{i}}.\end{split} \tag{2}\] The function is formulated following Yefenof et al. (2018): the first part is for the the served customer (\(C_{1}^{i}=1\)), where we multiply the survival function of the customer patience \((1-F_{T}\left(u\right))\) by the pdf of the customer's wait time. The second part is for the known-abandonment customer (\(C_{2}^{i}=1\)), where we multiply the probability of informing when abandoning by the pdf of the customer patience and the survival function of the customer's wait time \((1-G_{W}\left(u\right))\). 
Finally, the third part is for the Sab customer (\(C_{3}^{i}=1\)), where we multiply the probability of not informing when abandoning by the cdf of the customer patience and the pdf of the customer's wait time. However, this likelihood function depends on knowing the complete data. Recall that some of the observations belong to the class \(M=0\) since they have missing data in \(\Delta\). Therefore, we cannot find the parameters by simply solving the maximization problem. Instead, we need to formulate an EM algorithm (see Algorithm 1), a well-known computing strategy for dealing with problems of missing data including censoring, since censoring is a special case of missing data (see Chapters 7 and 8 of Little and Rubin 2002). The algorithm estimates the parameters \((\theta,q,\,\gamma)\), using Theorems 1 and 2. Specifically, it estimates starting parameter values and subsequently iterates between the expectation step (E-step)--using Theorem 1--and the maximization step (M-step)--using Theorem 2--and updates these estimators until convergence. In the \(t\)th iteration, the E-step consists of finding a surrogate function (given in Equation (3)) that is a lower bound on the log-likelihood function (given in Equation (7)) and is tangent to the log-likelihood at \((\widehat{\theta^{(t)}},\widehat{q^{(t)}},\widehat{\gamma^{(t)}})\). In practice, it is enough to compute the expectation of the log-likelihood given the information of the previous iteration, which is presented in Equation (4) of Theorem 1. \[l(D,\theta,q,\gamma) = \sum_{i=1}^{n}\left\{\left(\widehat{C_{1,t}^{i}}\right)\left(\log \gamma-\gamma U_{i}-\theta U_{i}\right)\right\}\] \[+ \sum_{i=1}^{n}\left\{\left(\widehat{C_{2,t}^{i}}\right)\left[\log \theta-\theta U_{i}-\gamma U_{i}+\log q\right]\right\}\] \[+ \sum_{i=1}^{n}\left\{\left(\widehat{C_{3,t}^{i}}\right)\left[\log \left(1-q\right)+\log(1-e^{-\theta U_{i}})+\log\gamma-\gamma U_{i}\right] \right\}.\] ``` Result:\(\widehat{\theta^{(t+1)}}\), \(\widehat{q^{(t+1)}}\) and \(\widehat{\gamma^{(t+1)}}\). Initialization: For every customer \(i\), use Equation (4) to calculate \(\widehat{C_{1,0}^{i}}\) and \(\widehat{C_{2,0}^{i}}\) and \(\widehat{C_{3,0}^{i}}=\hat{\pi}_{i}1_{\{M^{i}=0\}}\), where \(\hat{\pi}_{i}\in[0,1]\) is chosen randomly. To obtain the starting parameters, \(\widehat{(\theta^{(1)},q^{(1)},\widehat{\gamma^{(1)}})}\), solve Equations (6) and (5), respectively. while\(|\:\widehat{\theta^{(t)}}-\widehat{\theta^{(t+1)}}\:|+|\:\widehat{q^{(t)}}- \widehat{q^{(t+1)}}\:|+|\:\widehat{\gamma^{(t)}}-\widehat{\gamma^{(t+1)}}\:| >\epsilon\)do E-step: Compute given the observed data \(D=\{(U_{i},Y_{i},\Delta_{i})\)\(i=1,...,n\}\) and the current estimations of the parameters \((\widehat{\theta^{(t)}},\widehat{q^{(t)}},\widehat{\gamma^{(t)}})\), \(\widehat{C_{j,t}^{i}}\), \(j=1,2,3\)\(\forall i=1,...,n\) using Equation (4). M-step: Maximize to obtain \((\widehat{\theta^{(t+1)}},\widehat{q^{(t+1)}},\widehat{\gamma^{(t+1)}})\). That is, update the estimations of the parameters using Equations (6) and (5), respectively. end while ``` **Algorithm 1**The EM Algorithm Theorem 1.: _Under Assumption 1, \(\widehat{C_{1,t}^{i}}\), \(\widehat{C_{2,t}^{i}}\) and \(\widehat{C_{3,t}^{i}}\) are given by_ \[\widehat{C_{1,t}^{i}} = (1-\widehat{C_{3,j}^{i}})1_{\{M^{i}=0\}}+1_{\{M^{i}=1\}};\] \[\widehat{C_{2,t}^{i}} = 1_{\{M^{i}=2\}}; \tag{4}\] \[\widehat{C_{3,t}^{i}} = 1_{\{M^{i}=0\}}\left(1-e^{-\widehat{\theta^{(t)}}U_{i}}\right).\] The proof is given in Appendix B.1. 
The notations \(\widehat{C_{1,t}^{i}}\), \(\widehat{C_{2,t}^{i}}\), and \(\widehat{C_{3,t}^{i}}\) represent the probabilities (weights) for the \(i\)th customer to belong to class \(C_{1},C_{2}\), or \(C_{3}\), respectively, given the parameters from the iteration \(t-1\), \((\widehat{\theta^{(t)}},\widehat{q^{(t)}},\widehat{\gamma^{(t)}})\), and the observed data. Note that the EM's update of the weights with missing data in the \(t-1\) iteration, \(\widehat{C_{j,t-1}^{i}}\)\(j=1,3\), is different for each observation \(i\) in the data class \(M^{i}=0\). That is, \(\widehat{C_{3,t-1}^{i}}\) need not to equal \(\widehat{C_{3,t-1}^{k}}\), given that \(M^{i}=M^{k}=0\). In the M-step of the \(t\)th iteration, \((\widehat{\theta^{(t+1)}},\widehat{q^{(t+1)}},\widehat{\gamma^{(t+1)}})\) are found (in Equations (6) and (5), respectively) to be the maximizers of the surrogate function Equation (3). **Theorem 2**: _Under Assumption 1, the parameters \(\widehat{q^{(t+1)}}\), \(\widehat{\gamma^{(t+1)}}\) are given by_ \[\widehat{q^{(t+1)}} =\left\{\sum_{i=1}^{n}\widehat{C_{2,t}^{i}}\right\}\left\{\sum_{i=1 }^{n}\left(1-\widehat{C_{1,t}^{i}}\right)\right\}^{-1}, \tag{5}\] \[\widehat{\gamma^{(t+1)}} =\left\{\sum_{i=1}^{n}\left(1-\widehat{C_{2,t}^{i}}\right) \right\}\left\{\sum_{i=1}^{n}U_{i}\right\}^{-1},\] _and the parameter \(\widehat{\theta^{(t+1)}}\) is given as a solution to the following equation:_ \[\widehat{\theta^{(t+1)}}\left\{\sum_{i=1}^{n}\left(\widehat{C_{3,t}^{i}}-1 \right)U_{i}\right\}+\sum_{i=1}^{n}\widehat{C_{2,t}^{i}}+\widehat{\theta^{(t+ 1)}}\left\{\sum_{i=1}^{n}\widehat{C_{3,t}^{i}}\frac{U_{i}e^{-\widehat{\theta^{ (t+1)}}U_{i}}}{1-e^{-\widehat{\theta^{(t+1)}}U_{i}}}\right\}=0. \tag{6}\] The proof is given in Appendix B.2. We repeat the E-step and the M-step until convergence for some predetermined \(\epsilon>0\). The procedure ends when we find a maximum of the likelihood function that yields estimators for the parameters \((\widehat{\theta^{(t+1)}},\widehat{q^{(t+1)}},\widehat{\gamma^{(t+1)}})\). More details on the EM algorithm are provided in Appendix B. ### Validation of the EM Algorithm We perform several performance evaluations to validate the use of our EM algorithm in practice. In Section 4.2.1, we compare the accuracy of the EM algorithm to previous methods of estimating customer patience. In Section 4.2.2, we examine the sensitivity of the algorithm under the initial conditions, and in Section 4.2.3 we validate the accuracy of the EM estimators using real data. (In all the tests throughout this paper we set \(\epsilon=10^{-6}\).) #### 4.2.1 Accuracy As a first examination, we want to evaluate the accuracy of the estimations provided by the EM algorithm, and to compare them with the accuracy of previous methods suggested in the literature, i.e., Mandelbaum and Zeltyn (2013) (Method 1) and Yefenof et al. (2018) (Method 2). For this purpose we simulate data for \(T\), \(W\), and \(Y\), with specific parameters, \(\theta\), \(q\), and \(\gamma\). We compute \(\Delta\) from the realization of \(T\) and \(W\) according to its definition (\(\Delta=1_{\{T\leq W\}}\)). We then estimate \(\widehat{\theta}\), \(\widehat{q}\), and \(\widehat{\gamma}\) using the EM algorithm to evaluate accuracy. Hence, in this validation strategy, all the assumptions of the EM algorithm hold. As mentioned, the EM algorithm can cope with the missing data, but the other two methods cannot. 
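To make Algorithm 1 concrete, the sketch below simulates data satisfying Assumption 1 and runs the E-step of Theorem 1 and the M-step of Theorem 2, solving Equation (6) numerically. It is our illustrative reading of the algorithm, not the implementation used for the experiments. In particular, the simulator ties the uncertain class \(M=0\) to the same indicator \(Y\) (a customer with \(Y=0\) neither announces abandonment nor replies after assignment); this is an extra assumption made only so that the toy data contains both short-service and silent-abandonment conversations in the uncertain class.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)

def simulate(n, theta, gamma, q):
    """Draw (U, M) under Assumption 1 (rates per hour, times in hours)."""
    T = rng.exponential(1 / theta, n)              # patience
    W = rng.exponential(1 / gamma, n)              # virtual wait
    Y = rng.binomial(1, q, n)                      # 1 = would announce abandonment
    delta = (T <= W).astype(int)                   # lost patience before assignment
    U = np.where((delta == 1) & (Y == 1), T, W)    # observed time, as in Table 2
    M = np.zeros(n, dtype=int)                     # 0 = uncertain (Sab or short service)
    M[(delta == 1) & (Y == 1)] = 2                 # 2 = known abandonment
    M[(delta == 0) & (Y == 1)] = 1                 # 1 = service observed with certainty
    return U, M

def em(U, M, tol=1e-6, max_iter=500):
    """EM iteration of Algorithm 1: returns estimates of (theta, q, gamma)."""
    pi0 = rng.random(len(U))                       # random starting pi_i, as in Algorithm 1
    C3 = pi0 * (M == 0)
    C1 = (1 - C3) * (M == 0) + (M == 1)
    C2 = (M == 2).astype(float)
    prev = None
    for _ in range(max_iter):
        # M-step (Theorem 2): update the parameters given the current weights.
        q_hat = C2.sum() / (1 - C1).sum()
        gamma_hat = (1 - C2).sum() / U.sum()
        def eq6(th):                               # left-hand side of Equation (6)
            r = np.exp(-th * U) / -np.expm1(-th * U)
            return th * ((C3 - 1) * U).sum() + C2.sum() + th * (C3 * U * r).sum()
        hi = 1.0
        while eq6(hi) > 0:                         # grow the bracket until a sign change
            hi *= 2
        theta_hat = brentq(eq6, 1e-12, hi)
        new = (theta_hat, q_hat, gamma_hat)
        if prev is not None and sum(abs(a - b) for a, b in zip(prev, new)) < tol:
            return new
        prev = new
        # E-step (Theorem 1): update the weights given the current parameters.
        C3 = (M == 0) * (1 - np.exp(-theta_hat * U))
        C1 = (1 - C3) * (M == 0) + (M == 1)
    return prev

U, M = simulate(n=2000, theta=4.0, gamma=10.0, q=0.5)
print(em(U, M))   # typically recovers values close to (4, 0.5, 10)
```

The benchmark estimators (Methods 1 and 2) have no analogue of the \(M=0\) weights and therefore require the uncertain conversations to be classified in advance.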
In order to use them for this comparison, therefore, we need to make certain assumptions to enable them to cope with the conversations in the uSab class (\(M=0\)). To apply Yefenof et al. (2018) we have three options of how to classify \(M=0\) conversations: either as served (Sr) customers (\(C_{1}=1\)), as silent-abandonment customers (\(C_{3}=1\)), or classify them using an SVM model as suggested in Section 3.2. Here, we simulate the last option by classifying correctly 85% of the silent-abandonment conversations and 76% of the short-service conversations--which are the same as the sensitivity and specificity proportions of the optimal cutoff of the SVM model (see Section 3). To apply the method of Mandelbaum and Zeltyn (2013), we have two options of how to classify \(M=0\) conversations: either as served customers (\(C_{1}=1\)) or as (known) abandonments (\(C_{2}=1\)), since this method cannot deal with left-censored conversations. We generate 200 samples of 2,000 customer conversations. For each sample we estimate the parameters using the six methods mentioned above. We use 100 repetitions of the estimation of the parameters with the six methods to create the boxplots (Figure 4). Figure 3 presents the accuracy results for estimating \(\theta\) on a logarithmic scale. Figure 3(a) presents the mean squared errors (MSE) for each model, while Figure 3(b) shows the ratio between the MSE of the specific model and the MSE of the EM algorithm (the EM is the baseline). The x-axis, in both figures, is the proportion of silent abandonments out of all arriving customers. Note that we do not report results for any proportion of silent abandonments greater than 45%, since we would not expect any company to find itself in such a position. Most of the parameters of these simulations are taken from Yefenof et al. (2018) (Chapter 6), namely, \(\theta=4\) and \(\gamma=10\) customers per hour (i.e., \(E[T]=15\) and \(E[W]=6\) minutes). We set \(q\) to be in the set \(\{1,0.9,...,0.1\}\), resulting in a proportion of silent abandonments between 0% and 26%. To create higher proportions of Sab customers, between 27% and 44%, we need to reduce \(\gamma\); we use \(\gamma\in\{9,7,5,4.1\}\) to achieve those abandonment rates. Note that the setting where \(\theta<\gamma\) is plausible, since Brown et al. (2005) found that in call centers, average customer patience is greater than average virtual wait time, \(E[T]>E[W]\). This result has been confirmed to hold in other service environments by several empirical studies, e.g., Yefenof et al. (2018), who obtained this result when analyzing data from an ED. All the parameter combinations we choose are designed to keep the simulation within the same \(\theta<\gamma\) setting.

Figure 3: Comparison of Accuracy of Customer Patience Estimations (Log Scale)

Figure 3(a) shows that the errors of the EM are quite small (less than \(0.2\%\)) in all of the parameter combinations. Figure 3(b) shows that both ways of implementing Method 1 (which accounts only for right-censored data) are very inaccurate. Specifically, estimating customer patience while ignoring the silent-abandonment phenomenon altogether results in an error rate that is \(O(10^{8})\) higher than the error rate of the EM baseline. A similar picture emerges when implementing Method 2 while treating all the uncertain conversations as served. Here too the error rate is \(O(10^{8})\) higher than the error rate of the EM.
If we take silent-abandonment conversations into account to the extent that we regard them as left-censored conversations but ignore the missing data, we obtain a (relatively) lower error rate. This is apparent when we look at the other two ways of implementing Method 2: either by considering all missing data to be silent-abandonment conversations or by completing the data with an SVM model. The problem with the latter approach is that the classification is treated as if it were correct, whereas a classification model is not completely accurate but has certain sensitivity and specificity proportions. However, both of the above-mentioned options yield less accurate results than the EM: the respective error rates are \(O(10^{5})\) and \(O(10^{7})\) greater than the error rate of the EM. To conclude, our algorithm outperforms all other methods for estimating customer patience. Note that when there is no silent abandonment in the system (\(0\%\) in Figure 3), all methods achieve the same performance level; this suggests that the EM algorithm can be used also in cases where the company does not have Sab customers or is unsure whether they exist. Accuracy results for \(q\) and \(\gamma\) are presented in Appendix C.1. Our algorithm provides an accurate estimation of \(\gamma\) and \(q\) too. Since \(q\) is a unique feature of our algorithm, we include there only an MSE graph without comparison to other methods. In order to analyze whether the estimations are biased or just have larger variance, we present the boxplots in Figure 4. Due to space constraints, we include boxplots only for three of the parameter combinations we simulated. The parameters were chosen to enable comparison across settings that result in low (2%), moderate (17%), and high (40%) levels of silent abandonment (the parameters are stated in each figure). We see that regardless of the level of silent abandonment, the EM algorithm produces the most accurate estimation of customer patience, followed by Method 2 taking uSab as Sab (M2-Sab), which overestimates \(\theta\) (underestimates average customer patience).

Figure 4: Accuracy of Customer Patience Estimations for Low, Moderate, and High Sab Proportions

#### 4.2.2 Sensitivity analysis

The next tests are designed to investigate the sensitivity of the EM algorithm to its initial conditions. In addition, we would like to know whether starting the algorithm under some sophisticated initial conditions, for example, by using a classification model, such as the one we developed in Section 3.2, helps the model to converge to a more accurate estimation. Accordingly, we first investigate the sensitivity of the EM to \(\hat{\pi}_{i}\). Note that by Algorithm 1, \(\hat{\pi}_{i}\) affects \(\widehat{C_{3,0}^{i}}\) and \(\widehat{C_{1,0}^{i}}\) only for the class of uSab customers, since the data for the classes of known-abandonment customers and served customers is complete. We generated 200 samples of 2,000 customer conversations, with the following parameters: \(\theta=4,\gamma=10\), and \(q=0.5\). For each sample we estimate the parameters \((\widehat{\theta},\widehat{q},\,\widehat{\gamma})\) using the EM algorithm (with 100 repetitions), and consider the average of those parameters as the final estimator for that sample. We present here four variants for the starting weights, for all the conversations for which \(M^{i}=0\).

_All Sab:_ Setting all \(M^{i}=0\) conversations to be silent-abandonment conversations with probability 1.
Formally, \(\widehat{C_{3,0}^{i}}=1\) and \(\widehat{C_{1,0}^{i}}=0\) for all conversations with \(M^{i}=0\).

_All Sr:_ Setting all \(M^{i}=0\) conversations to be short-service conversations with probability 1. Formally, \(\widehat{C_{3,0}^{i}}=0\) and \(\widehat{C_{1,0}^{i}}=1\) for all conversations with \(M^{i}=0\).

_50:50:_ Setting 50% of the conversations to be short-service conversations and 50% to be Sab conversations, i.e., for 50% of the conversations with \(M^{i}=0\) we set \(\widehat{C_{3,0}^{i}}=1\) and for the rest of \(M^{i}=0\) we set \(\widehat{C_{1,0}^{i}}=1\). We choose this option because within our data about 50% of the conversations are Sab and about 50% are short service (see Section 3).

_Best classifier:_ For conversations with \(M^{i}=0\), we simulate a classification with sensitivity and specificity proportions according to our best classification model from Section 3; therefore, 85% of the Sab conversations are classified correctly and 76% of the short-service conversations are classified correctly. That is, 85% of the actual \(C_{3}=1\) are identified as such and 76% of the actual \(C_{1}=1\) are identified as such, and for these conversations we set the correct values of \(\widehat{C_{3,0}^{i}}\) and \(\widehat{C_{1,0}^{i}}\). For the remainder of the conversations we set wrong values of \(\widehat{C_{3,0}^{i}}\) and \(\widehat{C_{1,0}^{i}}\); e.g., for an actual \(C_{3}=1\): \(\widehat{C_{3,0}^{i}}=0\), \(\widehat{C_{1,0}^{i}}=1\).

Figure 5 shows that the estimations of customer patience are stable and do not change when different initial values are inserted in the EM algorithm. This suggests that one may not need to use the output of the classification model we developed in Section 3 (or any model with similar sensitivity and specificity proportions) as starting probabilities in the EM algorithm. Appendix C.2 presents the same type of analysis of the sensitivity of the EM algorithm to its initial conditions when estimating \(q\) and \(\gamma\). We show there that these estimations are not sensitive to the starting weights either.

#### 4.2.3 Real messaging system data and robustness

All previous tests used simulated data that clearly adhered to our model assumptions. In this section we perform tests that rely on real data that may not adhere to those assumptions. This will provide us with greater confidence in applying the method we developed here in practice. Using the messaging contact center dataset described in Section 2, we compare the same six methods for estimating customer patience as in the previous accuracy test (Section 4.2.1). The results are presented in Table 4. The differences between the patience estimations are huge (13-188 minutes). Note that the estimations are consistent with previous tests, where Method 1 and Method 2 overestimate and underestimate customer patience depending on the variation of the method. The main challenge we are confronted with in this comparison is the lack of ground truth, because we do not know the true value of customer patience. We overcome this challenge by using the manually tagged data described in Section 3.2. Since this data is tagged, it contains complete information on which customers abandoned, allowing us to apply the method of Yefenof et al. (2018). The resulting estimation of customer patience, based on that data, is 81.9 minutes (row 1 of Table 4). This is very similar to the EM algorithm estimation of customer patience that is based on the monthly data: 81.11 minutes (row 6 of Table 4).
On the other hand, it is very far from the estimations obtained using the other methods. Therefore, we can conclude that the EM algorithm (Algorithm 1) is able to cope with the missing data and obtains an accurate estimation of customer patience. Going back to Table 4, we notice the large bias that missing data generates in the estimations. When we ignore both silent abandonment and missing data, by regarding all uncertain silent abandonment (\(M=0\)) as service (\(C_{1}=1\)) and by estimating customer patience using either Method 1 or Method 2, we overestimate patience by a factor of two or more (rows 2 and 3 of Table 4). Note that this is the current practice of many companies. They use Method 1 (row 2) while ignoring the concept of silent abandonment that creates left-censoring and missing data. A more advanced company may have a better understanding of its system and an awareness of silent abandonment. However, if it is still unaware of the existence of missing data, it will consider all of the conversations in class \(M=0\) to be silent-abandonment conversations (\(C_{3}=1\)) and apply either Method 1 (i.e., ignoring left-censoring) or Method 2 (i.e., not ignoring left-censoring). In both cases it underestimates customer willingness to wait (rows 4 and 5).

Figure 5: Sensitivity Analysis (Setting: \(\theta=4\))

One might comment on our finding that the customers in messaging systems are willing to wait for more than 1 hour (row 1 of Table 4). We think such enduring patience is reasonable for three reasons. (a) When reading the content of the conversations, we saw that in this particular contact center customers receive an automatic message instructing them to "go on with their daily activities" (while waiting for a reply) and to address the service as if "talking to a friend." These customers therefore expect longer waits and adjust their patience accordingly. (b) Messaging systems are used to support the continuance of the relationship between customers and companies. As a result, they have a high proportion of returning customers that are expected to have realistic expectations of the virtual wait time, which was found to be 8.77 minutes. The fact that customer patience outlasts the virtual wait time is consistent with similar results from call centers (Brown et al., 2005). (c) Mandelbaum and Zeltyn (2013) showed that customers are willing to wait around 2 (or more) times longer than their service requirement. Recall that here service time is 49.2 minutes, which fits our findings well. A potential problem with EM algorithms is that they might converge to a saddle point (Chapter 8 of Little and Rubin, 2002). To verify that this does not happen here, we started our EM algorithm with different weights. Specifically, we estimated the parameters by using the EM and taking the starting weights \(\widehat{C_{3,0}^{i}}\) for the conversations for which \(M^{i}=0\) to be 1, 0, 0.5, or \(\hat{\pi_{i}}\) from the SVM model, as in Section 4.2.2. Note that in the last case the classification model is not simulated; it is the real SVM presented in Section 3. In every case the obtained parameters (\(\widehat{\theta},\widehat{q}\), \(\widehat{\gamma}\)) were consistent, verifying that the algorithm did not converge to a saddle point when applied to the real messaging data. Finally, we performed several robustness checks, by dividing the dataset into 10-15 samples and estimating patience in each one of them, using the EM 100 times.
We performed these tests to make sure that the results we obtained from the monthly data (\(\hat{\theta}=0.739\), \(\hat{q}=0.58\), and \(\hat{\gamma}=6.78\)) are robust. We find that the estimations of \(\theta\) from subsamples of the dataset are consistent with the estimation based on the whole dataset (Figure 6). We present the corresponding results for the estimations of \(q\) and \(\gamma\), which are just as accurate, in Appendix C.3. In Appendix C.4 we provide a final validation in a more realistic setting, with the help of a simulated queueing model with parameters from real data. In this setting the assumptions of the EM algorithm (Assumption 1) may not necessarily be true. We find that in this setting too the EM estimation of \(\theta\) is the most accurate among the compared methods. We also find that the differences with the other methods are more pronounced than in the previous validations in the present section. Additionally, we find that the EM estimations of \(q\) and \(\gamma\) are also the most accurate among the compared methods.

\begin{table} \begin{tabular}{l l c} \hline \hline Row & Method & Average Patience (Minutes) \\ \hline 1 & Method 2—Using sample of labeled conversations & 81.90 \\ 2 & Method 1—Uncertain silent abandonment is service & 166.42 \\ 3 & Method 2—Uncertain silent abandonment is service & 188.07 \\ 4 & Method 1—Uncertain silent abandonment is abandonment & 28.27 \\ 5 & Method 2—Uncertain silent abandonment is silent abandonment & 13.17 \\ 6 & EM & 81.11 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of Estimations of Average Customer Patience: Messaging System Dataset (May 2017)

Figure 6: Estimations of the Parameter of Customer Patience (\(\theta\)) using EM Algorithm in Subsamples of the Messaging Dataset (May 2017). Horizontal Line Indicates Estimation Based on the Entire Dataset \(\hat{\theta}=0.739\)

## 5 Incorporating Silent Abandonment into a Queueing Model and Managerial Implications

In this section we analyze how the phenomenon of silent abandonment affects system efficiency and what decision-makers can do about it. As explained, silent abandonment affects system efficiency in two ways: (a) the Sab customer holds a service slot within the concurrency system, preventing other customers from entering service while idling the agent who is waiting for the customer's response; and (b) the agent may waste time on solving the no-longer-relevant problem of the Sab customer. Both forms of system inefficiency reduce the system's capacity in high-load moments, when available capacity is most crucial. In our chat and messaging system datasets, system capacity is reduced by 5% and 11%, respectively. According to queueing theory, such a reduction in agent availability should have a huge impact on system performance in overloaded systems (Koole and Mandelbaum 2002). The aims of the present section are as follows. First, we introduce a queueing model that takes the Sab phenomenon into account. Then we show that such a queueing model is able to predict contact-center performance measures better than models that neglect to account for silent abandonment. Finally, we use the model to analyze how much the loss of capacity due to Sab harms system performance, and discuss several ways one might avoid such a problem. We propose the queueing model presented in Figure 7 to capture the phenomenon of silent abandonment. We assume that arrivals follow a Poisson process with rate \(\lambda\).
Customers entering the queue have finite patience that is exponentially distributed at rate \(\theta\). The probability that an abandoning customer will indicate her abandonment is denoted by \(q\). Customers who do not provide that indication stay in the queue and are assigned to a service agent (when one becomes available). The queueing policy is first-come, first-served (FCFS). The company can provide service to \(n\) customers in parallel; i.e., there are \(n\) service slots of statistically identical agents. Service time is exponentially distributed with rate \(\mu_{Sr}\) for served customers (i.e., those who belong to class \(C_{1}\)) and rate \(\mu_{Sab}\) for Sab customers (i.e., those who belong to class \(C_{3}\)). This model is very similar to the Erlang-A (M/M/N+M) model, with the important difference that a customer that abandons the queue, but does not notify the system of her abandonment, is assumed by the system to be in the queue (e.g., the gray customer in Figure 7) and, when assigned to an agent, receives some service time, albeit at a different service rate. This enables us to capture the loss of capacity that results from Sab customers. To verify that this queueing model is of merit, we fit the model to the chat system dataset described in Section 2.1. The Erlang-A (M/M/N+M) model, which takes into account Kab customers (Palm 1957, Mandelbaum and Zeltyn 2007), is used as a baseline. We then test new models by gradually adding features of silent abandonment. We consider the following five variants of fitting a queueing model to the chat dataset:

* Model (1): An Erlang-A queueing model that ignores Sab both in the queue dynamics and in the parametric estimation of customer patience. Labeled as _"(1) Ignoring Sab."_
* Model (2): An Erlang-A queueing model that ignores Sab in the queue dynamics, but considers it in the parametric estimation of customer patience. Labeled as _"(2) Considering Sab as Kab."_
* Models (3) and (4): A queueing model with Sab, no loss of capacity due to Sab, and that considers Sab in the estimation of customer patience, i.e., the model in Figure 7 with \(\mu_{Sab}=\infty\). We check two versions of this model: one with a nonparametric estimation of customer patience (Model (3)) and the other with a parametric estimation of customer patience (Model (4)). Labeled as _"(3) Sab as left-censored, nonparametric"_ and _"(4) Sab as left-censored, parametric,"_ respectively.
* Model (5): A queueing model with Sab, loss of capacity due to Sab, and that considers Sab in the parametric estimation of customer patience, i.e., the model in Figure 7. Labeled as _"(5) Considering Sab as time-consuming."_

Figure 7: Queueing Model with Silent Abandonment

In Model (1) we estimate customer patience based on Mandelbaum and Zeltyn (2013) (Method 1), and the service rate is calculated by averaging the service time of all the customers that were assigned to an agent, regardless of whether they were served or silently abandoned the queue. In Model (2) we estimate customer patience based on Method 1 as well, and the service rate is estimated only for served customers (\(C_{1}=1\)), i.e., \(\mu=\mu_{Sr}\). In Models (3) and (4) we estimate customer patience based on Yefenof et al. (2018) (Method 2). In both models service time is calculated only for served customers. Finally, in Model (5) we estimate customer patience based on Yefenof et al. (2018) (Method 2). Service time is calculated separately for served customers and Sab customers.
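As an illustration of how the model in Figure 7 can be simulated, the sketch below implements a simple FCFS slot-assignment recursion: a Kab customer leaves the queue when her patience expires, whereas a Sab customer stays in the virtual queue and, once assigned, occupies a slot for an \(Exp(\mu_{Sab})\) handling time. This is a stylized, stationary toy (fixed arrival rate, rates, and a hypothetical staffing level), not the hour-by-hour simulation behind Table 6.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_sab_queue(lam, theta, q, mu_sr, mu_sab, n_slots, horizon):
    """Queueing model with silent abandonment (all rates per hour).

    lam: arrival rate, theta: patience rate, q: P(announce abandonment),
    mu_sr / mu_sab: service rates for served / Sab customers,
    n_slots: number of concurrent service slots, horizon: simulated hours.
    """
    free_at = np.zeros(n_slots)          # time at which each slot becomes free
    t, waits = 0.0, []
    arrivals = waited = abandoned = 0
    while True:
        t += rng.exponential(1 / lam)    # next Poisson arrival
        if t > horizon:
            break
        arrivals += 1
        patience = rng.exponential(1 / theta)
        k = int(np.argmin(free_at))
        offered_wait = max(0.0, free_at[k] - t)   # FCFS virtual wait
        waited += offered_wait > 0
        if patience <= offered_wait:              # the customer abandons the queue
            abandoned += 1
            if rng.random() < q:                  # Kab: leaves and takes no slot
                waits.append(patience)
                continue
            # Sab: stays in the virtual queue and wastes the slot once assigned.
            free_at[k] = t + offered_wait + rng.exponential(1 / mu_sab)
        else:                                     # served customer
            free_at[k] = t + offered_wait + rng.exponential(1 / mu_sr)
        waits.append(offered_wait)
    return {"P(Wait>0)": waited / arrivals,
            "P(Ab)": abandoned / arrivals,
            "E[Wait] (hours)": float(np.mean(waits))}

# Hypothetical single-hour scenario loosely based on the numbers in the text:
# 56 arrivals/hour, patience ~2 min, served LOS 12.3 min, Sab handling 4.32 min,
# and an assumed 12 concurrent slots.
print(simulate_sab_queue(lam=56, theta=30, q=0.7, mu_sr=60 / 12.3,
                         mu_sab=60 / 4.32, n_slots=12, horizon=5_000))
```

Setting `mu_sab` very large, so that Sab customers release their slot almost immediately, corresponds to Models (3) and (4), which ignore the capacity loss; keeping it finite corresponds to Model (5).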
(Note that using the EM in the case of Models (4) and (5) gives the same customer patience estimation.) Table 5 provides the estimations of customer patience from real chat data used in this section. This dataset has complete data, which is consistent with the assumption of both versions of Method 2 (rows 3 and 4). We will show next, with the help of our simulation experiments, which estimator works better. We compared the differences between the simulated performance measures of the five queueing models described above and the real performance measures calculated from the dataset (shown in Appendix D). Table 6 presents the differences using the root mean square error (RMSE) score. \begin{table} \begin{tabular}{l l c} \hline \hline Row & Method & Average Patience (Minutes) \\ \hline 1 & Method 1—Ignoring Sab & 33.9 \\ 2 & Method 1—Considering Sab as Kab & 17.1 \\ 3 & Method 2—Sab as left-censored, nonparametric & 7.8 \\ 4 & Method 2—Sab as left-censored, parametric & 2.0 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of Estimations of Average Customer Patience: Chat System Dataset (February 2017) The simulation parameters \(\lambda_{t}\), \(\mu_{t}\), and \(n_{t}\) were estimated for each hour over the month, while the parameters of the customer patience distribution were kept constant over time. The parameters \(\theta\) and \(\mu_{t}\) were estimated differently for each model according to the description above. Note that \(n_{t}\) is the number of available slots, i.e., the number of online agents times a fixed concurrency level of 3 customers per agent. (As mentioned in Section 2.1, the concurrency level is the maximal number of customers that can be served in parallel, such that if all the slots are occupied and an additional customer enters the system, she will need to wait in the queue.) Model (1) (in Table 6) was designed to provide a baseline of the fit between the model and the data when the phenomenon of silent abandonment is ignored altogether. We see that the fit of the queueing model to the data in this case is the worst among all the compared models. Model (2) is designed to represent a case where the company acknowledges the presence of Sab customers, but does not deal with them correctly when estimating customer patience. That is, instead of understanding that they represent left-censored data, they simply consider them as Kab occurring before the assignment time. We see that this strategy is too simplistic, and yields a poor fit of the queueing model to the data. In Models (3) and (4) the company understands that silent abandonment occurs and that the data is left-censored, but ignores the impact of Sab on the available capacity. By comparing the two versions, we note that the fit of the parametric model to the data is much better than that of the nonparametric model. This gives us higher confidence that the assumptions we made in Assumption 1 are actually very reasonable for the contact-center environment. Finally, we observe that Model (5), which considers Sab both in terms of patience estimation and in terms of efficiency loss, is the best fit, emphasizing the importance of taking the phenomenon of silent abandonment into account when modeling contact centers. Figure 8(a) presents a comparison of the estimation of E[Wait] for Models (4) and (5), and the real E[Wait] in the dataset of the chat system, as a function of the hour of the day (with 95% confidence intervals). We clearly see that Model (4) underestimates customers' wait time relative to Model (5).
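For completeness, the RMSE score reported in Table 6 is computed from hourly simulated and empirical performance measures; a short sketch is given below. The arrays in the example are placeholders, not the real data.

```python
import numpy as np

def rmse(simulated, empirical):
    """Root mean square error between hourly performance measures
    (e.g., E[Wait] per hour of day) of a fitted queueing model and the data."""
    simulated = np.asarray(simulated, dtype=float)
    empirical = np.asarray(empirical, dtype=float)
    return float(np.sqrt(np.mean((simulated - empirical) ** 2)))

# Placeholder example with made-up hourly E[Wait] values (in seconds):
print(rmse([70, 95, 120], [65, 100, 110]))
```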
Comparing Models (4) and (5) enables us to understand the impact of the capacity loss, caused by the Sab customers, on performance measures. If the company is able to eliminate all capacity loss (5% in our case), the expected wait time of all customers would be reduced by 1.6 \begin{table} \begin{tabular}{l c c c c c} \hline \hline Performance & (1) Ignoring & (2) Considering & (3) Sab as left-censored & (4) Sab as left-censored & (5) Considering Sab as time-consuming \\ Measure & Sab & Sab as Kab & nonparametric & parametric & time-consuming \\ \hline \(P\{\text{Wait}>0\}\) & 0.27 & **0.26** & 0.28 & 0.31 & 0.27 \\ \(P\{\text{Ab}\}\) & 0.12 & 0.11 & 0.09 & 0.08 & **0.07** \\ E[Queue] & 3.27 & 1.68 & 1.18 & 0.96 & **0.87** \\ E[Wait] & 169.56 & 85.13 & 75.69 & 70.29 & **63.44** \\ E[Wait][Served] & 198.29 & 103.04 & 83.63 & 63.44 & **62.47** \\ \hline \hline \end{tabular} \end{table} Table 6: RMSE between Queueing Models and Chat System Dataset (February 2017) minutes (67% in absolute percentage), the expected wait time of served customers would be reduced by 1.5 minutes (83%), the probability of waiting by 3% (8%), the probability of abandonment by 4% (16%), and the expected number of people waiting in queue--E[Queue]--by 0.16 (21%). Figure 8(b) illustrates what happens if this capacity loss is partially reduced. To create this figure we simulated Model (5) with various LOS of Sab customers (\(1/\mu_{Sab}\)); as \(1/\mu_{Sab}\) increases it takes more time to understand that the Sab customer indeed abandoned the queue. For this graph we use the parameters on a typical Monday (13:00-14:00), where \(\lambda=56\) customers per hour, LOS of served customers is 12.3 minutes, \(q=0.7\), and average patience is 2 minutes (\(\theta=30\) customers per hour). The case where LOS of Sab is 0 is in fact Model (4), and the highest LOS value of Sab (5.6 minutes) resembles Model (5). How can the company eliminate capacity loss? The first way is to design a bot that is able to identify cases of silent abandonment without the involvement of an agent. Such a bot can manage the beginning of the interaction automatically and transfer the conversation to the agent only after the customer reacts. In chat systems (where the customer does not write anything before joining the queue) such a bot can manage the initiation stages of the chat, namely, the introduction and an inquiry about the customer's problem. In messaging systems (in which the customer writes an inquiry before joining the queue) the bot can ask whether the customer's inquiry is still relevant. As some customers might find such a question annoying, the bot can be programmed to use that method only for _suspected_ Sab customers. To identify suspected Sab customers, the company can design a prediction model in the spirit of the classification model we presented in Section 3, where information about customers' wait time, class, and initial messages is used to identify Sab customers. As all classification models have some margin of error, even the best such system will assign some Sab customers to agents and waste agent time, but hopefully to a lesser extent than before. Figure 8: Estimations of E[Wait] Another way to reduce capacity loss is for the system, when computing agent concurrency levels, to consider suspected Sab customers as fractional customers (as opposed to full ones) until they write something. 
For example, as long as a customer writes nothing she will be considered a suspected Sab customer and be assigned a value of 0.5, but as soon as she writes something she will be assigned a value of 1. Therefore, an agent that has 2 suspected Sab customers and 2 in-service ones is equivalent to an agent that handles 3 customers. This will reduce the amount of blocking that Sab customers impose on the other customers in the queue. A final possible solution is to handle queue priorities according to existing information on suspected Sab customers. For example, the bot can send a suspected Sab customer to the end of the queue. Therefore, when the Sab customer's turn for service arrives, the agent will have reached his idle period, which means that the effect of the Sab customer on system performance would be diminished. We think that this solution is appropriate mostly for customers who enter the queue when the contact center is closed (e.g., at night) and who would be loading the agent's capacity at the beginning of the workday without actually being there, thereby delaying the new arrivals significantly. In such a scenario, the "cost" imposed by this unfair policy of requiring suspected Sab customers to wait for one extra busy period may be worth it. ## 6 Discussion In this article we identified and defined the phenomenon of silent abandonment as an important source of uncertainty in contact centers. Our work exposed and analyzed how a small difference in the service process of two environments of contact centers--chat systems and messaging systems--changes the way we estimate performance and patience for each one. Specifically, we showed that the timing of the submission of a customer's inquiry (i.e., before entering the queue or after being assigned to an agent) and the customer's management of her service window/application create uncertainty that affects a company's ability to know which customers have abandoned the queue and which have been served. We argued that although enabling/denying customer messages before entering the queue is a design decision of the company, the fact that not all people close their application or are impolite (i.e., abandoning without indication) is a behavioral phenomenon that the company cannot control, but needs to deal with. We further analyzed the impact of silent abandonment on estimations of customer patience and abandonment estimations. We showed that silent abandonment needs to be considered as left-censored observations of customer patience and as time-consuming tasks in order to obtain more accurate measures of performance in contact centers. We suggested a queueing model that takes Sab customers into account, and showed that it captures system dynamics well, whereas queueing models that ignore Sab customers do not fit the data. Using our queueing model we showed the impact of capacity loss, caused by customer behavior, on performance measures. We then made several suggestions for operational changes in concurrency management and prioritization to avoid that problem. We are in the process of analytically analyzing this queueing model. We believe that it can be used as a tool to evaluate further the operational implications of silent abandonment, as well as a tool to validate recommendations of new operational policies that will be able to cope better with those implications. When comparing customer patience in chat and messaging systems, we notice a huge difference. 
The EM algorithm estimated customer patience in the messaging system to be 81.1 minutes and customer patience in the chat system to be much shorter, only 2 minutes. The higher patience in the messaging system is consistent with previous literature that shows a connection between customer LOS and willingness to wait (e.g., Mandelbaum and Zeltyn 2013); indeed, customer LOS in the messaging system is much longer than in the chat system, 49.2 and 11.65 minutes, respectively. Even so, patience in chat systems seems short. We conjecture that the difference in customer patience between the two contact-center environments is related to the different nature of the service in those systems. In the messaging system, the communication is usually through a smartphone, which is always with us, whereas, in the chat system, the communication is usually through a desktop computer, which obligates us to remain stationary. This conjecture relates to the work of Westphal (2018), who shows that a customer waiting for an online service is more likely to abandon the queue when forced to focus on a waiting screen on a computer than when free to shift attention to other websites. When analyzing the total percentage of abandoning customers in both environments, we see that it is almost the same, around 20%. However, the percentage of Sab customers is higher in the messaging system, where the wait is longer too. This is somewhat similar to the increase in the no-show rate as the wait time from appointment booking to physician visit increases (Folkins et al., 1980; Galucci et al., 2005; Liu et al., 2010). The authors of the first of these articles claim that, in the setting of a mental health center, it may be the case that no-shows happen because customers who have to wait longer solve their problems on their own. We conjecture that this may also be true for textual services. This raises the question of whether there is a connection between \(q\) and wait time. We therefore think that future research on patience estimation can relax the assumptions we made for the EM algorithm on the independence between \(q\) and wait time. Another interesting comparison can be made between silent abandonment and no-shows vis-a-vis the scope of these phenomena and their operational implications. Our findings suggest that 6%-14.3% of the customers abandon queues of contact centers without notification, compared to 23%-34% of no-shows in medical appointments. In terms of operational implications, Moore et al. (2001) found that in a family medical practice no-shows are responsible for wasting 25.4% of scheduled time. Here, too, we showed that silent abandonment reduces system capacity, but at a lower magnitude of 5%-11%. However, here it translates into wasted tasks performed by the agent and occupied slots held by the silent-abandonment customers in the system. From a different perspective, we note that agents may use the silent-abandonment phenomenon to their advantage. If a Sab customer is assigned to an agent, the agent seems to be busy while in practice she may rest a little. Therefore, agents may lack the incentive to close suspected Sab conversations quickly. The company will want to prevent such strategic behavior by agents, but should proceed carefully in order to avoid situations where a long-waiting customer conversation is prematurely terminated. For example, it is possible that the customer did not notice that the agent finally answered.
Hence, finding technological answers to handling capacity loss, like the ones we suggested in Section 5, is important. Investigating the strategic behavior of agents may be interesting in its own right and a worthy topic of future research. To conclude, we believe that the phenomenon of silent abandonment has an impact beyond the framework discussed in this paper, and therefore calls for further mathematical and behavioral modeling in the context of chat- and messaging-based services.
2305.06101
Access-Redundancy Tradeoffs in Quantized Linear Computations
Linear real-valued computations over distributed datasets are common in many applications, most notably as part of machine learning inference. In particular, linear computations that are quantized, i.e., where the coefficients are restricted to a predetermined set of values (such as $\pm 1$), have gained increasing interest lately due to their role in efficient, robust, or private machine learning models. Given a dataset to store in a distributed system, we wish to encode it so that all such computations could be conducted by accessing a small number of servers, called the access parameter of the system. Doing so relieves the remaining servers to execute other tasks. Minimizing the access parameter gives rise to an access-redundancy tradeoff, where a smaller access parameter requires more redundancy in the system, and vice versa. In this paper, we study this tradeoff and provide several explicit low-access schemes for $\{\pm1\}$ quantized linear computations based on covering codes in a novel way. While the connection to covering codes has been observed in the past, our results strictly outperform the state-of-the-art for two-valued linear computations. We further show that the same storage scheme can be used to retrieve any linear combination with two distinct coefficients -- regardless of what those coefficients are -- with the same access parameter. This universality result is then extended to all possible quantizations with any number of values; while the storage remains identical, the access parameter increases according to a new additive-combinatorics property we call coefficient complexity. We then turn to study the coefficient complexity -- we characterize the complexity of small sets of coefficients, provide bounds, and identify coefficient sets having the highest and lowest complexity.
Vinayak Ramkumar, Netanel Raviv, Itzhak Tamo
2023-05-10T12:42:25Z
http://arxiv.org/abs/2305.06101v2
# Access-Redundancy Tradeoffs in ###### Abstract Linear real-valued computations over distributed datasets are common in many applications, most notably as part of machine learning inference. In particular, linear computations which are quantized, i.e., where the coefficients are restricted to a predetermined set of values (such as \(\pm 1\)), gained increasing interest lately due to their role in efficient, robust, or private machine learning models. Given a dataset to store in a distributed system, we wish to encode it so that all such computations could be conducted by accessing a small number of servers, called the _access parameter_ of the system. Doing so relieves the remaining servers to execute other tasks, and reduces the overall communication in the system. Minimizing the access parameter gives rise to an _access-redundancy_ tradeoff, where smaller access parameter requires more redundancy in the system, and vice versa. In this paper we study this tradeoff, and provide several explicit code constructions based on covering codes in a novel way. While the connection to covering codes has been observed in the past, our results strictly outperform the state-of-the-art, and extend the framework to new families of computations. Access-Redundancy, distributed systems, coded computation, covering codes. ## I Introduction Adding redundancy to stored data is a common and well-known practice, both in the theory of error correcting codes and in its applications in distributed systems. Traditionally, in general-purpose storage codes, redundancy is added in order to prevent data loss in cases of hardware failures. More recently, attention has been given to storage schemes that are specifically tailored for certain future uses of the data (e.g., computation of polynomials [14]). The role of redundancy in the latter is to expedite the computation by combating the effect of straggling nodes; a central server which orchestrates the computation contacts _all_ the nodes in the system with a computation query, and must be able to conclude that computation even if a certain number of servers fail to respond in a timely manner. In contrast, this paper addresses systems in which the central server ("master" or "user") is limited in the _number_ of storage servers ("nodes") it can contact for a given computation query. This strategy optimizes the fraction of the system which becomes occupied by the current query, and frees the remaining nodes to handle other tasks or serve other users. Formally, given a family \(\mathcal{F}\) of computations of interest, and an _access_ parameter \(\ell\), data is stored so that at any point in the future, any function \(f\in\mathcal{F}\) can be computed by accessing at most \(\ell\) nodes. Clearly, an _access/storage tradeoff_ arises--at one endpoint the data is stored without any redundancy, and computations require accessing all nodes (min. storage, max. access). At the other endpoint, every possible \(f\in\mathcal{F}\) is computed a priori and stored in a separate node; computation is then done by accessing that node (max. storage, min. access). By and large, the purpose of this area of study is to characterize all the intermediate points in this tradeoff. That is, one wishes to characterize the feasible region (or the Pareto optimal front) of the set of all feasible access/redundancy pairs; precise problem definition will be given in the sequel. 
In this paper we focus on linear computations over \(\mathbb{R}\), whose coefficients are quantized to a finite set of values (e.g., \(\{\pm 1\}\)). Such computations have gained increasing attention of late, mostly for applications relating to machine learning inference, in which they have proven beneficial in terms of robustness [7, 11, 13] and privacy [12]. The problem of quantized computations over \(\mathbb{R}\) has been studied in the past, mostly for applications in databases. When the set of coefficients is restricted to \(\{0,1\}\), Ref. [6] provided several nontrivial schemes based on _covering codes_; these are sets of \(\{0,1\}\) vectors such that every \(\{0,1\}\) vector lies within some bounded Hamming distance from a vector in the set. Our techniques also rely on covering codes, yet in a novel way, and require covering codes with certain closure properties. By constructing such covering codes using existing tools, our results outperform all of the schemes given in [6]. Furthermore, we show that all two-valued linear computations (i.e., where the vector of coefficients contains at most two different values) are equivalent in some sense, and provide a respective lower bound. ## II Preliminaries This section begins with an introduction to covering codes. Since subsequent schemes require covering codes with closure properties that have not been studied in the past, we construct several new families of covering codes using known tools1, and hence a brief introduction to these tools is given as well. The section then continues with a formal problem statement, and a summary of previous works. Footnote 1: The novelty of these codes is limited to their closure properties, and they do not improve upon the best known parameters. ### _Covering codes_ The covering radius (abbrv. radius) of a code is the minimum integer \(r\) such that Hamming balls of radius \(r\) that are centered in all codewords cover the entire space. Formally, for a code \(\mathcal{C}\) of length \(n\) over an alphabet \(\Sigma\), the covering radius \(r=r(\mathcal{C})\) is defined as \[r(\mathcal{C})=\min\{r^{\prime}|\cup_{e\in\mathcal{C}}B_{H}(\mathbf{c},r^{ \prime})=\Sigma^{n}\}\] where \(B_{H}(\mathbf{c},r^{\prime})\) is the Hamming ball centered at \(\mathbf{c}\) with radius \(r^{\prime}\). The covering radius is a fundamental topic in the study of error correcting codes, and many constructions and bounds are well-known [3]. We briefly review some known techniques for constructing covering codes; for full details the reader is referred to the respective references. The challenge in covering codes is constructing small codes with small radius. Clearly, the Cartesian product \(\mathcal{C}_{1}\times\mathcal{C}_{2}=\{(\mathbf{c}_{1},\mathbf{c}_{2})| \mathbf{c}_{1}\in\mathcal{C}_{1},\mathbf{c}_{2}\in\mathcal{C}_{2}\}\) of a code \(\mathcal{C}_{1}\) with radius \(r_{1}\) and length \(n_{1}\) and a code \(\mathcal{C}_{2}\) with radius \(r_{2}\) and length \(n_{2}\) has radius \(r_{1}+r_{2}\) and length \(n_{1}+n_{2}\). The following framework, called _the amalgamated direct sum_ and developed in [2, 5, 8], shows that under certain conditions, a code of radius \(r_{1}+r_{2}\) and length \(n_{1}+n_{2}-1\) can be constructed. This framework, described next, improves the rate of the Cartesian product (i.e., reduces its size-to-length ratio) without altering its radius, and yielded some of the best known covering codes. 
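Before turning to that framework, the covering-radius definition above can be checked directly by brute force for small codes; the following short sketch (our own illustration, exponential in the code length and hence only practical for small \(n\)) does exactly that.

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length tuples."""
    return sum(a != b for a, b in zip(u, v))

def covering_radius(code, n, alphabet=(0, 1)):
    """r(C) = max over all words w in alphabet^n of the distance
    from w to its nearest codeword (brute force)."""
    return max(min(hamming(w, c) for c in code) for w in product(alphabet, repeat=n))

# The length-3 binary repetition code covers F_2^3 with radius 1:
assert covering_radius([(0, 0, 0), (1, 1, 1)], 3) == 1
```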
The norm \(N^{(i)}\) of entry \(i\) of a code \(\mathcal{C}\) with radius \(r\) over \(\Sigma\) is defined as \[N^{(i)}\triangleq\max_{\mathbf{w}\in\Sigma^{n}}\Biggl{\{}\sum_{a\in\Sigma}d_{ H}(\mathbf{w},\mathcal{C}_{a}^{(i)})\Biggr{\}},\] where \(\mathcal{C}_{a}^{(i)}\) is the subset of \(\mathcal{C}\) containing all codewords whose \(i\)'th entry is \(a\), and \(d_{H}(\mathbf{w},\mathcal{C}_{a}^{(i)})\triangleq\min_{\mathbf{c}\in\mathcal{C }_{a}^{(i)}}d_{H}(\mathbf{w},\mathbf{c})\), where \(d_{H}\) denotes the Hamming distance. If for some \(i\) we have \(N^{(i)}\leq(r+1)|\Sigma|-1\) then the \(i\)'th coordinate of \(\mathcal{C}\) is called _acceptable_, and \(\mathcal{C}\) is called _normal_. These technical tools play a role in the simple proof of the following theorem. **Theorem 1** ([2, 5, 8]).: _Let \(\mathcal{A},\mathcal{B}\) be codes of length \(n_{A},n_{B}\) and radii \(r_{A},r_{B}\) over an alphabet \(\Sigma\). Further, the last coordinate of \(\mathcal{A}\) and the first coordinate of \(\mathcal{B}\) are acceptable, and \(\mathcal{A}_{a}^{(n_{A})},\mathcal{B}_{a}^{(1)}\neq\varnothing\) for all \(a\in\Sigma\). Then, the amalgamated direct sum_ \[\mathcal{A}\dot{+}\mathcal{B}\triangleq\bigcup_{a\in\Sigma}\{(\mathbf{v},a, \mathbf{w})|(\mathbf{v},a)\in\mathcal{A},(a,\mathbf{w})\in\mathcal{B}\}\] _is a code of length \(n_{A}+n_{B}-1\) and radius at most \(r_{A}+r_{B}\)._ When the codes \(\mathcal{A},\mathcal{B}\) are linear with generator matrices \(G_{A},G_{B}\), respectively, the generator matrix of \(\mathcal{A}\dot{+}\mathcal{B}\) is given by constructing a matrix which contains \(G_{A}\) and \(G_{B}\) on its diagonal (in block form), and \(G_{A}\) and \(G_{B}\) intersect on one element, i.e., the lower-right element of \(G_{A}\) coincides with the upper-left element of \(G_{B}\). Good covering radii are also obtained in _piecewise constant codes_[2, Sec. III.A]. In these codes the length \(n\) is partitioned as \(n=n_{1}+\ldots+n_{t}\), \(n_{i}>0\) for all \(i\), and each codeword is partitioned respectively \(\mathbf{c}=(\mathbf{c}_{1},\ldots,\mathbf{c}_{t})\). A code is piecewise constant if whenever it contains a word with \[w_{H}(\mathbf{c}_{1})=w_{1},\ldots,w_{H}(\mathbf{c}_{t})=w_{t}, \tag{1}\] for some nonnegative integers \(w_{1},\ldots,w_{t}\), where \(w_{H}\) denotes the Hamming weight, then it also contains _all_ such words. Constructing piecewise constant covering codes can be seen as covering a multi-dimensional array with Manhattan balls, as follows. Consider a \(t\)-dimensional array in which the \(i\)'th axis is indexed by \(0,1,\ldots,n_{i}\), and the \((w_{1},\ldots,w_{t})\) entry contains the number of words satisfying (1), i.e., \(\prod_{i=1}^{t}\binom{n_{i}}{w_{i}}\). It is an easy exercise to show that if one manages to cover this array using Manhattan balls of radius \(r\) centered at some \(m\) entries \(\{(w_{i,1},\ldots,w_{i,t})\}_{i=1}^{m}\), then the union of the \(m\) sets of words corresponding to these \(m\) entries is a piecewise constant code of radius \(r\). An example of such construction is given in Fig. 2 which follows. ### _Problem statement_ Given \(\mathbf{x}\in\mathbb{R}^{k}\), we wish to encode it as \(\mathbf{y}\in\mathbb{R}^{n}\) for \(n\geq k\), and store it in a distributed manner over \(n\) servers, one \(\mathbb{R}\)-element \(y_{i}\) in each server2. We focus on linear codes, in which every \(y_{i}\) is a linear combination of entries of \(\mathbf{x}\). 
For a given family of quantized linear computations \(\mathcal{F}\), we wish to devise an encoding mechanism such that any \(f\in\mathcal{F}\) can be computed by accessing some \(\ell\) servers, where \(\ell\) is the _access parameter_ of the system. While \(\mathcal{F}\) is known at the time of encoding, the specific \(f\in\mathcal{F}\) is not. Given the desired \(f\in\mathcal{F}\) to compute, the user accesses \(\ell\) servers, whose identity is uniquely determined by \(f\in\mathcal{F}\), downloads their content, and computes \(f(\mathbf{x})\). To provide lower bounds, it is further assumed that the user combines the \(\ell\) downloaded entries of \(\mathbf{y}\) linearly. Footnote 2: Typically, one would like to store multiple \(\mathbf{x}\)’s, e.g., data-points \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\) in a large dataset. In this paper we focus on \(N=1\), and a general \(N\) follows by encoding all \(\mathbf{x}_{i}\)’s in the same fashion. The main goal of this paper is to understand and optimize the correspondence between \(n\) and \(\ell\) in the asymptotic regime, for several quantized linear computation families \(\mathcal{F}\). Clearly, one would wish to minimize both \(n\) and \(\ell\) simultaneously, but a clear tradeoff exists. In one end of the tradeoff we have \(n=|\mathcal{F}|\) and \(\ell=1\) by pre-computing every \(f\in\mathcal{F}\); in the other we have \(n=\ell=k\), where the user downloads \(\mathbf{x}\) in its entirety and computes \(f(\mathbf{x})\) locally. We focus on the access ratio \(\ell/k\) and the redundancy ratio \(n/k\) in the asymptotic regime. Specifically, for \(\alpha,\beta\in\mathbb{R}\) we say that the pair \((\alpha,\beta)\) is \(\mathcal{F}\)-feasible if there exists an infinite family of linear codes with parameters \(\{(k_{i},n_{i},\ell_{i})\}_{i\geq 0}\) (with \(k_{i}\)'s strictly increasing) such that \(\lim_{i\to\infty}(n_{i}/k_{i},\ell_{i}/k_{i})=(\alpha,\beta)\). The goal is to understand the Pareto front of all \(\mathcal{F}\)-feasible pairs, where \(\mathcal{F}\) is one of several families of quantized linear functions whose coefficients are restricted to two real values, as will be explained shortly. A solution to this problem comes in the form of an encoding (i.e., storage) scheme, coupled with an algorithm that is given \(f\in\mathcal{F}\), and outputs the identities of \(\ell\) servers which need to be contacted, alongside a recipe for combining their responses. We collectively refer to these operations, namely, the storage, access, and computation, as an \(\mathcal{F}\)_-protocol_ (protocol, in short). ### _Previous work_ Variants of the above problem have been studied in many previous works. First, the \(\mathbb{F}_{q}\) variant of the problem, where \(\mathbf{x}\) is over some finite field \(\mathbb{F}_{q}\) and \(\mathcal{F}=\{f(\mathbf{x})=\mathbf{w}\mathbf{x}^{\intercal}|\mathbf{w}\in \mathbb{F}_{q}^{k}\}\) is a folklore result in the area. It can be shown [3] that a protocol with a given \(n,k,\ell\) as above exists if and only if there exists a linear code of length \(n\), dimension \(n-k\), and covering radius \(r=\ell\). However, this method does not work for linear computation over the reals, which is the focus of the current paper. When it comes to linear computation over \(\mathbb{R}\), the family \(\mathcal{F}_{0,1}=\{f(\mathbf{x})=\mathbf{w}\mathbf{x}^{\intercal}|\mathbf{w} \in\{0,1\}^{k}\}\) has been studied in [6], and covering codes are employed in a different way than the finite field case. 
In the following section it is shown that in fact, all two-valued linear computations are equivalent. Furthermore, our results strictly improve (i.e., both in terms of redundancy and in terms of access) all the results from [6]. It should be noted that even though [6] uses the term "partial sum" to describe \(\mathcal{F}_{0,1}\), an identical term is widely used [4, 9] for a different problem where \(\mathcal{F}=\{f_{i,j}(\mathbf{x})=\sum_{\ell=i}^{j}x_{\ell}|1\leq i\leq j\leq n\}\) (i.e., consecutive indices, which in some variations only begin with \(1\)), or its multidimensional variant [1]. The access aspect of these computations is often studied under the name "bit-probe" or "cell-probe," e.g., [10]. More broadly, our results are also related to recent trends in _coded computation_. This recently popularized area studies various distributed computation tasks under the _straggler_ effect--the user accesses _all_ servers, and completes the computation from the responses of the fastest ones. This work can be seen as a complementary study which addresses the _number_ of servers that should be accessed. In the straggler problem, the identity of the fastest servers is not known a priori, whereas, in the access problem under consideration here, the identity of the servers that are accessed is a deterministic function of the computation task, and these servers are not necessarily faster than others. Extending the results in this paper to other computation tasks of interest (e.g., polynomials) or to address the straggler effect, is left for future work. ## III Two-value equivalence As mentioned earlier, all linear computations which only require two real coefficients are equivalent in a sense that will be clarified shortly. To this end, for distinct \(a,b\in\mathbb{R}\) let \(\mathcal{F}_{a,b}=\{f(\mathbf{x})=\mathbf{w}\mathbf{x}^{\intercal}|\mathbf{w}\in\{a,b\}^{k}\}\), and recall the definition of an \(\mathcal{F}_{a,b}\)-feasible pair (Section II-B). For simplicity of notation, we will use \(\{a,b\}\)-feasible instead. **Proposition 1**.: _For any distinct \(a,b\in\mathbb{R}\), a pair \((\alpha,\beta)\) is \(\{\pm 1\}\)-feasible if and only if it is \(\{a,b\}\)-feasible. Furthermore, data stored using an \(\mathcal{F}_{\pm 1}\)-protocol can be used to retrieve \(\mathbf{w}\mathbf{x}^{\intercal}\) for any \(\mathbf{w}\in\{a,b\}^{k}\) and any distinct \(a,b\in\mathbb{R}\) by using at most one additional node._ Proof.: Given an \(\mathcal{F}_{a,b}\)-protocol, store \(\mathbf{x}\) according to it, with an additional node containing \(\mathbf{1}\mathbf{x}^{\intercal}\) (if not already present). Then, given any \(\mathbf{w}\in\{\pm 1\}^{k}\), let \(\mathbf{w}^{\prime}\in\{a,b\}^{k}\) be such that \(w^{\prime}_{i}=a\) if \(w_{i}=1\), and \(w^{\prime}_{i}=b\) if \(w_{i}=-1\).
By following the protocol for retrieving \(\mathbf{w}^{\prime}\mathbf{x}^{\intercal}\), and retrieving \(\mathbf{1}\mathbf{x}^{\intercal}\) from its designated node, the user can compute \[\frac{2}{a-b}\cdot\mathbf{w}^{\prime}\mathbf{x}^{\intercal}- \frac{a+b}{a-b}\cdot\mathbf{1}\mathbf{x}^{\intercal}\] \[= \sum_{j|w_{j}=1}\frac{2a}{a-b}x_{j}+\sum_{j|w_{j}=-1}\frac{2b}{a- b}x_{j}-\frac{a+b}{a-b}\mathbf{1}\mathbf{x}^{\intercal}\] \[= \sum_{j|w_{j}=1}\frac{2a-a-b}{a-b}x_{j}+\sum_{j|w_{j}=-1}\frac{2b- a-b}{a-b}x_{j}=\mathbf{w}\mathbf{x}^{\intercal}.\] Conversely, given an \(\mathcal{F}_{\pm 1}\)-protocol, store \(\mathbf{x}\) according to it, with an additional node containing \(\mathbf{1}\mathbf{x}^{\intercal}\) (if not already present). Then, given any \(\mathbf{w}\in\{a,b\}^{k}\), let \(\mathbf{w}^{\prime}\in\{\pm 1\}^{k}\) such that \(w^{\prime}_{i}=1\) if \(w_{i}=a\), and \(w^{\prime}_{i}=-1\) if \(w_{i}=b\). By following the protocol for retrieving \(\mathbf{w}^{\prime}\mathbf{x}^{\intercal}\), and retrieving \(\mathbf{1}\mathbf{x}^{\intercal}\) from its designated node, the user can compute \[\frac{a-b}{2}\mathbf{w}^{\prime}\mathbf{x}^{\intercal}+\frac{a+b}{2} \mathbf{1}\mathbf{x}^{\intercal}=\] \[=\frac{a-b}{2}\sum_{j|w_{j}=a}x_{i}-\frac{a-b}{2}\sum_{j|w_{j}=b }x_{j}+\frac{a+b}{2}\mathbf{1}\mathbf{x}^{\intercal}\] \[=\sum_{j|w_{j}=a}x_{i}(\frac{a-b}{2}+\frac{a+b}{2})+\sum_{j|w_{j}= b}x_{i}(\frac{b-a}{2}+\frac{b+a}{2})=\mathbf{w}\mathbf{x}^{\intercal}.\] In either directions, since only at most one additional node is used for storage, and at most one additional node is accessed during computation, it follows that the protocols give rise to identical feasible pairs due to the limit operation, and the claim follows. The "furthermore" part is clear from (2). By transitivity, Proposition 1 readily implies that all two-valued computations have the same set of feasible pairs, hence the same Pareto front in the access-redundancy regime. Moreover, one can devise a protocol exclusively for \(\mathcal{F}_{\pm 1}\) and then use it for \(\mathcal{F}_{a,b}\) for any distinct \(a,b\in\mathbb{R}\). Therefore, an \(\mathcal{F}_{\pm 1}\)-protocol can be easily extended to an \(\mathcal{F}_{2}\)-protocol, where \(\mathcal{F}_{2}\triangleq\bigcup_{a\neq b}\mathcal{F}_{a,b}\). In the sequel, we focus on \(\mathcal{F}_{\pm 1}\) due to favorable closure properties (described shortly), which by the above discussion extend to \(\mathcal{F}_{2}\). In particular, our techniques produce families of codes which not only improve upon existing ones (that were tailored exclusively for \(\mathcal{F}_{0,1}\)[6]) both in the access ratio and the redundancy ratio, but also extend them to all two-valued computations. From a practical perspective, it is preferable that the encoding is systematic, i.e., for each \(x_{i}\) there exists a node storing it in the uncoded form. The schemes in [6] and the schemes we present in Section IV (that outperform the results in [6]) employ systematic storage. However, if non-systematic encoding schemes are permitted, it is possible to do better than our Pareto optimal front under the systematic approach, as demonstrated in Section VI. ## IV Systematic Storage Approach As a motivating example (noted in [6]), it is readily seen that the pair \((1,0.5)\) is \(\{0,1\}\)-feasible (and hence \(\mathcal{F}_{2}\)-feasible) by simply storing an additional node containing \(\mathbb{1}\mathbf{x}^{\intercal}\), on top of a node for each \(x_{i}\) (i.e., \(n=k+1\)). 
Then, if \(w_{H}(\mathbf{w})\leq k/2\), one can compute \(\mathbf{w}\mathbf{x}^{\intercal}\) by accessing at most \(k/2\) entries from the systematic part. Otherwise, it is answered by accessing the node containing \(\mathbb{1}\mathbf{x}^{\intercal}\), all the nodes \(x_{i}\) such that \(w_{i}=0\), and subtracting; a respective protocol for any two-value computation then follows from Proposition 1. This example can be seen as a special case of the following framework, which provides a low-access protocol from any binary covering code, and is particularly useful when the underlying covering code is closed under complement. Intuitively, when employing binary covering codes in their \(\{\pm 1\}\)-representation, closure under complement translates to negation over the reals, and approximately half of the storage costs can be saved; this is the crux of the improvement over [6]. In the sequel several suitable covering code constructions are given, alongside a comparison to [6]. ### _A general framework_ As mentioned earlier, we focus on protocols for \(\mathcal{F}_{\pm 1}\), which by Proposition 1 imply protocols for all two-valued computations. To this end, we use \(\mathbb{F}_{2}\)-arithmetic, and refer to vectors over \(\mathbb{F}_{2}\) in their \(\{\pm 1\}\)-representation, i.e., use the real \(1\) instead of the Boolean \(0\) and the real \(-1\) instead of the Boolean \(1\). The framework is based on the following simple definition, in which \(\oplus\) denotes addition in \(\mathbb{F}_{2}\) (i.e., point-wise exclusive OR). **Definition 1**.: _For a code \(\mathcal{C}\) over \(\mathbb{F}_{2}\), let \(\hat{\mathcal{C}}\subseteq\mathcal{C}\) which contains exactly one of \(\{\mathbf{c},\mathbf{c}\oplus\mathbb{1}\}\), for every \(\mathbf{c}\in\mathcal{C}\). That is, if \(\{\mathbf{c},\mathbf{c}\oplus\mathbb{1}\}\subseteq\mathcal{C}\), then exactly one of \(\{\mathbf{c},\mathbf{c}\oplus\mathbb{1}\}\) is in \(\hat{\mathcal{C}}\). If \(\mathbf{c}\oplus\mathbb{1}\notin\mathcal{C}\) for some \(\mathbf{c}\in\mathcal{C}\), then \(\mathbf{c}\) is also in \(\hat{\mathcal{C}}\). When the code \(\mathcal{C}\) is clear from the context, we denote \(\hat{c}\triangleq|\hat{\mathcal{C}}|\)._ Clearly, there are many ways to generate \(\hat{\mathcal{C}}\) from \(\mathcal{C}\), all of which result in \(\hat{\mathcal{C}}\) of the same size. Since only the size of \(\hat{\mathcal{C}}\) matters in our context, we assume some unspecified canonical way of constructing a unique \(\hat{\mathcal{C}}\) from \(\mathcal{C}\). As an example, consider the following observation. **Observation 1**.: _A (not-necessarily linear) code \(\mathcal{C}\) that is closed under complement (i.e., \(\mathbf{c}\in\mathcal{C}\) if and only if \(\mathbb{1}\oplus\mathbf{c}\in\mathcal{C}\)) satisfies \(|\hat{\mathcal{C}}|=|\mathcal{C}|/2\). Notice that a linear code \(\mathcal{C}\) is closed under complement if and only if \(\mathbb{1}\in\mathcal{C}\)._ This gives rise to the following theorem. **Theorem 2**.: _If there exists a (not-necessarily linear) code \(\mathcal{C}\) of length \(p\) and covering radius \(r\) over \(\mathbb{F}_{2}\), then the pair \((\frac{p+\hat{c}}{p},\frac{r+1}{p})\) is \(\{\pm 1\}\)-feasible. Moreover, this pair can be obtained using systematic storage schemes._ Proof.: Construct the matrix \(M=(B|I)\in\{0,\pm 1\}^{p\times(p+\hat{c})}\), where \(I\) is the identity matrix and \(B\) contains all \(\hat{c}\) vectors of \(\hat{\mathcal{C}}\) in their \(\{\pm 1\}\)-representation as columns. 
For \(t\in\mathbb{N}\) consider \(k=tp\), and partition \(\mathbf{x}\in\mathbb{R}^{tp}\) to \(t\) parts \(\mathbf{x}_{1},\ldots,\mathbf{x}_{t}\) of size \(p\) each. To encode, let \(\mathbf{y}_{i}\triangleq\mathbf{x}_{i}M\), and let \(\mathbf{y}=(\mathbf{y}_{1},\ldots\mathbf{y}_{t})\in\mathbb{R}^{t(p+\hat{c})}\). It is easy to see that this encoding is systematic. The storage overhead of the resulting scheme is clearly \(n/k=\frac{p+\hat{c}}{p}\). To retrieve a product \(\mathbf{w}\mathbf{x}^{\intercal}\) for some \(\mathbf{w}\in\{\pm 1\}^{k}\), write \(\mathbf{w}=(\mathbf{w}_{1},\ldots,\mathbf{w}_{t})\) with \(\mathbf{w}_{i}\in\{\pm 1\}^{p}\) for all \(i\) and retrieve each \(\mathbf{w}_{i}\mathbf{x}_{i}^{\intercal}\) by accessing at most \(r+1\) entries from \(\mathbf{y}_{i}\), as follows. Since \(\mathcal{C}\) is a covering code of radius \(r\), there exists \(\mathbf{c}_{i}\in\mathcal{C}\) such that \(d_{H}(\mathbf{c}_{i},\mathbf{w}_{i})\leq r\). Consider two cases: 1. If \(\mathbf{c}_{i}\in\hat{\mathcal{C}}\), the user accesses the node which stores \(\mathbf{c}_{i}\mathbf{x}_{i}^{\intercal}\), and at most \(r\) which store \(x_{i,j}\) for indices \(j\in[p]\) on which \(\mathbf{c}_{i}\) and \(\mathbf{w}_{i}\) differ. The user computes \[\mathbf{c}_{i}\mathbf{x}_{i}^{\intercal}+2\sum_{j|c_{i,j}=-w_{i, j}}w_{i,j}x_{i,j}=\] \[\sum_{j|c_{i,j}=w_{i,j}}w_{i,j}x_{i,j}-\sum_{j|c_{i,j}=-w_{i,j}}w _{i,j}x_{i,j}+\] \[2\cdot\sum_{j|c_{i,j}=-w_{i,j}}w_{i,j}x_{i,j}=\mathbf{w}_{i} \mathbf{x}_{i}^{\intercal}.\] 2. If \(\mathbf{c}_{i}\notin\hat{\mathcal{C}}\) it follows that the vector \(\mathbf{c}_{i}^{\prime}\triangleq\mathbf{c}_{i}\oplus\mathbb{1}\) is in \(\hat{\mathcal{C}}\) (see Definition 1). The user accesses the node which stores \(\mathbf{c}_{i}^{\prime}\mathbf{x}_{i}^{\intercal}\), and at most \(r\) nodes which store \(x_{i,j}\) for indices \(j\in[p]\) on which \(\mathbf{c}_{i}\) and \(\mathbf{w}_{i}\) differ. The user computes \[-\mathbf{c}_{i}^{\prime}\mathbf{x}_{i}^{\intercal}+2\sum_{j|c_{i,j}=-w_{i,j}}w_{i,j}x_{i,j}\] \[=(\mathbf{c}_{i}^{\prime}\oplus\mathbb{1})\mathbf{x}_{i}^{\intercal }+2\sum_{j|c_{i,j}=-w_{i,j}}w_{i,j}x_{i,j}\] \[=\mathbf{c}_{i}\mathbf{x}_{i}^{\intercal}+2\sum_{j|c_{i,j}=-w_{i, j}}w_{i,j}x_{i,j}=\mathbf{w}_{i}\mathbf{x}_{i}^{\intercal}.\] Clearly, in both cases the user accesses at most \(r+1\) nodes, hence \(\ell/k=(r+1)t/pt\), which concludes the proof. In the remaining subsections we present corollaries of the above theorem using linear and nonlinear covering codes. The resulting Pareto optimal front and a comparison to [6] are summarized in Fig. 1. ### _Hamming codes_ It is well-known that the \([7,4]_{2}\) Hamming code is a perfect code of minimum distance \(3\), and hence also a covering code of radius \(1\). Furthermore, by the amalgamated construction of [5, Thm. 20.i] (Section II-A), the \([7,4]_{2}\) Hamming code can be extended any number \(i\geq 0\) of times to a linear code of the same dimension \(4\), length \(7+2i\) and covering radius \(1+i\). 
This is done using an amalgamated direct sum with the repetition code of length \(2i+1\) (whose covering radius is \(i\)), or equivalently, by extending the bottom row of the generator matrix by two \(1\)'s at a time, and extending the top three rows by two \(0\)'s at a time: \[\begin{pmatrix}1&0&0&0&1&1&0\\ 0&1&0&0&1&0&1\\ 0&0&1&0&0&1&1\\ 0&0&0&1&1&1&1\end{pmatrix}\mapsto\begin{pmatrix}1&0&0&0&1&1&0&0^{2i}\\ 0&1&0&0&1&0&1&0^{2i}\\ 0&0&1&0&0&1&1&0^{2i}\\ 0&0&0&1&1&1&1&1^{2i}\end{pmatrix}\] The resulting extended codes are linear by definition, and clearly contain \(\mathbb{1}\), and hence are closed under complement (see Observation 1). Therefore, they can be used in Theorem 2 with \(\hat{c}=|\mathcal{C}|/2=8\), \(r=1+i\), and \(p=7+2i\). The resulting feasible pairs are \(\{(\frac{15+2i}{7+2i},\frac{2+i}{7+2i})\}_{i\geq 0}\), the \(i\)'th of which is referred to as HamAmal\({}_{i}\). Additionally, for any covering code \(\mathcal{C}\) of length \(p\), covering radius \(r\), and size \(s\), the code \(\mathcal{D}_{i}\triangleq\mathcal{C}\times\mathbb{F}_{2}^{i}\) is a covering code of length \(p+i\), size \(2^{i}s\), and identical covering radius \(r\). This follows easily by using a (not-amalgamated) direct sum of covering codes, and the fact that the covering radius of \(\mathbb{F}_{2}^{i}\) is zero. Moreover, it is an easy exercise to show that \(\hat{d}=\hat{c}2^{i}\). Applying this method with \(\mathcal{C}\) being the \([7,4]_{2}\) Hamming code results in covering codes \(\mathcal{D}_{i}\) of length \(7+i\) and covering radius \(1\), for which \(\hat{d}_{i}=2^{i+3}\). In turn, the codes \(\{\mathcal{D}_{i}\}_{i\geq 0}\) give rise to the feasible pairs \(\{(\frac{2^{i+3}+i+7}{i+7},\frac{2}{i+7})\}_{i\geq 0}\) by Theorem 2, the \(i\)'th of which is referred to as HamExp\({}_{i}\). ### _Half space_ The entire space \(\mathcal{C}=\mathbb{F}_{2}^{p}\) is a covering code of size \(2^{p}\) and covering radius \(r=0\). Clearly, \(\mathcal{C}\) is closed under complement and \(\hat{c}=2^{p-1}\). Therefore, by Theorem 2 the pairs \(\{(\frac{i+2^{i-1}}{i},\frac{1}{i})\}_{i\geq 1}\) are feasible, the \(i\)'th of which is referred to as HalfSpace\({}_{i}\). ### _Known nonlinear covering codes_ Ref. [2] further extended the amalgamated construction technique of [5] to nonlinear codes. Specifically, a certain \(12\)-word nonlinear code of length \(6\) and covering radius \(1\) can be extended any number \(i\geq 0\) of times (by amalgamating it with the repetition code of length \(2i+1\)) to get a \(12\)-word code of length \(6+2i\) and covering radius \(1+i\), as shown below. \[\begin{array}{lcccccc}\text{Word }1:&0&0&0&1&0&0&0^{2i}\\ \text{Word }2:&0&0&0&0&1&0&0^{2i}\\ \text{Word }3:&0&0&0&0&0&1&1^{2i}\\ \text{Word }4:&1&0&0&1&1&1&1^{2i}\\ \text{Word }5:&0&1&0&1&1&1&1^{2i}\\ \text{Word }6:&0&0&1&1&1&1&1^{2i}\\ \text{Word }7:&1&1&1&0&1&1&1^{2i}\\ \text{Word }8:&1&1&1&1&0&1&1^{2i}\\ \text{Word }9:&1&1&1&1&1&0&0^{2i}\\ \text{Word }10:&0&1&1&0&0&0&0^{2i}\\ \text{Word }11:&1&0&1&0&0&0&0^{2i}\\ \text{Word }12:&1&1&0&0&0&0&0^{2i}\end{array}\] It is readily verified that all the extensions are closed under complement; row \(j\) is the complement of row \(6+j\) for every \(j\in[6]\). Therefore, Theorem 2 can be used with \(\hat{c}=6\), \(r=1+i\), and \(p=6+2i\) to obtain that the pairs \(\{(\frac{12+2i}{6+2i},\frac{2+i}{6+2i})\}_{i\geq 0}\) are feasible, the \(i\)'th of which is referred to as NonLinAmal\({}_{i}\).
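To make Theorem 2 concrete, the following self-contained sketch (our own illustration, not code from the paper) instantiates the protocol with the \([7,4]_2\) Hamming code, i.e., the HamAmal\(_0\) point: \(p=7\), \(r=1\), \(\hat{c}=8\), so \(n/k=15/7\) and at most \(r+1=2\) of the 15 stored values are read per query.

```python
import numpy as np
from itertools import product

# Generator matrix of the [7,4]_2 Hamming code (the left-hand matrix above).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
code = {tuple(np.array(m) @ G % 2) for m in product([0, 1], repeat=4)}  # 16 words, radius 1

# C-hat of Definition 1: one representative of every complementary pair {c, c xor 1}.
c_hat = []
for c in sorted(code):
    if tuple(1 - b for b in c) not in c_hat:
        c_hat.append(c)
assert len(c_hat) == 8                      # |C|/2, since the all-ones word is a codeword

to_pm = lambda c: np.array([1 if b == 0 else -1 for b in c])   # {0,1} -> {+1,-1} form
B = np.stack([to_pm(c) for c in c_hat], axis=1)                # 7 x 8
M = np.concatenate([B, np.eye(7, dtype=int)], axis=1)          # storage matrix (B | I)

rng = np.random.default_rng(0)
x = rng.normal(size=7)                      # one block of the data (p = 7)
y = x @ M                                   # n = p + c_hat = 15 stored values

def retrieve(w):
    """Compute w @ x for w in {+-1}^7 by reading at most r + 1 = 2 entries of y."""
    w_bits = tuple(0 if wi == 1 else 1 for wi in w)
    c = min(code, key=lambda cw: sum(a != b for a, b in zip(cw, w_bits)))  # nearest codeword
    diff = [j for j in range(7) if c[j] != w_bits[j]]                      # at most r indices
    if c in c_hat:
        total = y[c_hat.index(c)]                         # node storing c x^T (case 1)
    else:
        total = -y[c_hat.index(tuple(1 - b for b in c))]  # its complement is stored (case 2)
    for j in diff:
        total += 2 * w[j] * y[8 + j]                      # systematic reads
    return total

for _ in range(100):                                      # sanity-check the protocol
    w = rng.choice([1, -1], size=7)
    assert np.isclose(retrieve(w), w @ x)
```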
### _New nonlinear covering codes_ We show that additional pairs are feasible by constructing a new covering code that is closed under complement, using the tools in [2]. We then verify the code's normality with a simple computer program, and extend it using the amalgamation technique. Specifically, using the two-dimensional array in Figure 2, we build an \(8\)-word code of length \(5\) and covering radius \(1\), and then extend it any number \(i\geq 0\) of times to get Fig. 1: (a) The Pareto optimal front of the suggested systematic solutions in Section IV-B, Section IV-C, Section IV-D, and Section IV-E, truncated to two decimal points, when varying \(i\) from \(0\) to \(9\) and omitting solutions whose redundancy factor is not feasible (\(\nu=n/k>10\)). (b) Graphical depiction of these solutions, and comparison to [6]. In particular: The red points are the Pareto optimal front of all systematic solutions. The blue points are the results from [6, Table 9]. An \(\times\) mark represents a point which is dominated by another. The black points are the non-systematic solutions in Section VI. The black line is a numeric evaluation of the bound in (4). an \(8\)-word code of length \(5+2i\) and covering radius \(1+i\), as follows. \[\begin{array}{l}\text{Word }1:\ \ 0\ \ 0\ \ 1\ \ \ 0\ \ 0\ \ 0^{2i}\\ \text{Word }2:\ \ 0\ \ 0\ \ 0\ \ 1\ \ 0\ \ 0^{2i}\\ \text{Word }3:\ \ 0\ \ 0\ \ 0\ \ 0\ \ 1\ \ 1^{2i}\\ \text{Word }4:\ \ 0\ \ 0\ \ 1\ \ 1\ \ 1\ \ 1^{2i}\\ \text{Word }5:\ \ 1\ \ 1\ \ 0\ \ 1\ \ 1\ \ 1^{2i}\\ \text{Word }6:\ \ 1\ \ 1\ \ 1\ \ 0\ \ 1\ \ 1^{2i}\\ \text{Word }7:\ \ 1\ \ 1\ \ 1\ \ 1\ \ 0\ \ 0^{2i}\\ \text{Word }8:\ \ 1\ \ 1\ \ 0\ \ 0\ \ 0\ \ 0^{2i}\end{array}\] It is readily verified that all the extensions are closed under complement; row \(j\) is the complement of row \(4+j\) for every \(j\in[4]\). Therefore, Theorem 2 can be used with \(\hat{c}=4\), \(r=1+i\), and \(p=5+2i\) to obtain that the pairs \(\{(\frac{9+2i}{5+2i},\frac{2+i}{5+2i})\}_{i\geq 0}\) are feasible, the \(i\)'th of which is referred to as PiecewiseAmal\({}_{i}\). ## V A simple lower bounds for two-valued computations It is well-known that the maximum number of \(\{\pm 1\}\) vectors that belong to any given \(\ell\)-dimensional \(\mathbb{R}\)-subspace is \(2^{\ell}\) (e.g., [12, Lemma 7]). Assuming that the user linearly combines the data which is downloaded from the nodes, this gives rise to the following bound. **Theorem 3**.: _An \(\mathcal{F}_{2}\)-protocol with a given \(n\), \(k\), \(\ell\), and linear decoding must satisfy that \(\binom{n}{\ell}2^{\ell}\geq 2^{k}\)._ Proof.: According to the discussion in Section III, there exists a protocol for \(\mathcal{F}_{2}\) if and only if there exists a protocol for \(\mathcal{F}_{\pm 1}\) with identical parameters, and hence we may focus on protocols for \(\mathcal{F}_{\pm 1}\). By the definition of a protocol (Section II-B), each of the \(2^{k}\) vectors \(\mathbf{w}\in\{\pm 1\}^{k}\) has a corresponding set of \(\ell\) nodes which must be accessed to retrieve \(\mathbf{w}\mathbf{x}^{\intercal}\). Clearly, there are at most \(\binom{n}{\ell}\) such sets, each of which contains at most \(2^{\ell}\)\(\{\pm 1\}\)-vectors in its span. We proceed to evaluate the above bound with respect to the constructions in Section IV. Theorem 3 implies that \[\frac{\ell}{k}+\frac{\log\binom{n}{\ell}}{k}\geq 1. 
\tag{3}\] Denote \(\nu\triangleq\frac{n}{k}\) and \(\lambda\triangleq\frac{\ell}{k}\); in light of the results in Section IV we restrict our attention to \(1\leq\nu\leq 10\) and \(0<\lambda\leq 1/2\) (see Fig. 1). A known bound asserts that \[\nu kH(\lambda/\nu)-\log(\nu k+1)\leq\log\binom{\nu k}{\lambda k}\leq\nu kH(\lambda/\nu),\] where \(H\) is the binary entropy function, and hence in the regime where \(k\) is large we may use the approximation \[\frac{\log\binom{n}{\ell}}{k}=\frac{\log\binom{\nu k}{\lambda k}}{k}\approx \nu H(\lambda/\nu).\] Plugging this approximation into (3) implies that \[H(\lambda/\nu)\geq\frac{1-\lambda}{\nu}. \tag{4}\] A numeric evaluation of this bound is given in Fig. 1. It is apparent that even under the assumption of linear decoding a substantial gap exists between our constructions and bounds. Under the systematicity assumption, we can improve the bound in Theorem 3 to \(\binom{n}{\ell}-\binom{k}{\ell}\geq 2^{k-\ell}\); however, it results in negligible improvements when \(k\) is large. ## VI A Non-systematic Approach If the systematic storage requirement is relaxed, then it is possible to obtain feasible pairs that dominate most of the Pareto optimal pairs under the systematic approach (see Fig. 1). The protocol corresponding to Theorem 2 does not require access to systematic symbols if the covering radius of \(\mathcal{C}\) is zero. Therefore, if non-systematic encoding schemes are allowed, one can remove systematic nodes from HalfSpace\({}_{i}\) to obtain feasible pairs \(\{(\frac{2^{i-1}}{i},\frac{1}{i})\}_{i\geq 1}\). These non-systematic solutions are summarized in Fig. 1.
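As a quick numeric illustration (our own, mirroring the evaluation shown in Fig. 1), the sketch below lists a few of the feasible pairs constructed in Section IV and checks them against the lower bound (4); every construction must satisfy \(H(\lambda/\nu)\geq(1-\lambda)/\nu\).

```python
import math

def H(p):
    """Binary entropy function (in bits)."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bound_ok(nu, lam):
    """Check the asymptotic lower bound (4): H(lam/nu) >= (1 - lam)/nu."""
    return H(lam / nu) >= (1 - lam) / nu

families = {                                   # (redundancy ratio nu, access ratio lambda)
    "HamAmal":       lambda i: ((15 + 2 * i) / (7 + 2 * i), (2 + i) / (7 + 2 * i)),
    "HamExp":        lambda i: ((2 ** (i + 3) + i + 7) / (i + 7), 2 / (i + 7)),
    "HalfSpace":     lambda i: ((i + 2 ** (i - 1)) / i, 1 / i),
    "NonLinAmal":    lambda i: ((12 + 2 * i) / (6 + 2 * i), (2 + i) / (6 + 2 * i)),
    "PiecewiseAmal": lambda i: ((9 + 2 * i) / (5 + 2 * i), (2 + i) / (5 + 2 * i)),
}

for name, pair in families.items():
    for i in (1, 2, 3):
        nu, lam = pair(i)
        assert bound_ok(nu, lam), (name, i)
        print(f"{name}_{i}: nu = {nu:.2f}, lambda = {lam:.2f}")
```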
2303.14853
Homochiral antiferromagnetic merons, antimerons and bimerons realized in synthetic antiferromagnets
The ever-growing demand for device miniaturization and energy efficiency in data storage and computing technology has prompted a shift towards antiferromagnetic (AFM) topological spin textures as information carriers, owing to their negligible stray fields, leading to possible high device density and potentially ultrafast dynamics. We realize, in this work, such chiral in-plane (IP) topological antiferromagnetic spin textures, namely merons, antimerons, and bimerons in synthetic antiferromagnets by concurrently engineering the effective perpendicular magnetic anisotropy, the interlayer exchange coupling, and the magnetic compensation ratio. We demonstrate by three-dimensional vector imaging of the N\'eel order parameter, the topology of those spin textures and reveal globally a well-defined chirality, which is a crucial requirement for controlled current-induced dynamics. Our analysis reveals that the interplay between interlayer exchange and interlayer magnetic dipolar interactions plays a key role in significantly reducing the critical strength of the Dzyaloshinskii-Moriya interaction required to stabilize topological spin textures, such as AFM merons, making synthetic antiferromagnets a promising platform for next-generation spintronics applications.
Mona Bhukta, Takaaki Dohi, Venkata Krishna Bharadwaj, Ricardo Zarzuela, Maria-Andromachi Syskaki, Michael Foerster, Miguel Angel Niño, Jairo Sinova, Robert Frömter, Mathias Kläui
2023-03-26T23:34:04Z
http://arxiv.org/abs/2303.14853v1
# Homochiral antiferromagnetic merons, antimerons and bimerons realized in synthetic antiferromagnets ###### Abstract The ever-growing demand for device miniaturization and energy efficiency in data storage and computing technology has prompted a shift towards antiferromagnetic (AFM) topological spin textures as information carriers, owing to their negligible stray fields, leading to possible high device density and potentially ultrafast dynamics. We realize, in this work, such chiral in-plane (IP) topological antiferromagnetic spin textures, namely merons, antimerons, and bimerons in synthetic antiferromagnets by concurrently engineering the effective perpendicular magnetic anisotropy, the interlayer exchange coupling, and the magnetic compensation ratio. We demonstrate by three-dimensional vector imaging of the Neel order parameter, the topology of those spin textures and reveal globally a well-defined chirality, which is a crucial requirement for controlled current-induced dynamics. Our analysis reveals that the interplay between interlayer exchange and interlayer magnetic dipolar interactions plays a key role in significantly reducing the critical strength of the Dzyaloshinskii-Moriya interaction required to stabilize topological spin textures, such as AFM merons, making synthetic antiferromagnets a promising platform for next-generation spintronics applications. ## Introduction The recent years have witnessed an increasing interest in chiral magnetic topological spin textures stabilized by the Dzyaloshinskii-Moriya interaction (DMI), such as skyrmions[1, 2], biskyrmions[3, 4, 5], hopfions[6, 7], chiral bobbers[8, 9], and skyrmionic cocoons[10] due to their potential use as information carriers for high-density data storage and (un)conventional computing [11, 12, 13, 14, 15]. For instance, skyrmions exhibit significant topological robustness [16] and are amenable to electrical control [17, 18, 19, 20], but also entail disadvantages such as skyrmion Hall effect [18, 19, 20]. The growing demand for high-speed, low-power technologies has therefore boosted the search for more complex topological spin textures beyond the skyrmion paradigm. Topological spin textures in in-plane magnets, such as merons and bimerons [21, 22, 23] are recently being explored, by virtue of their richer current-induced dynamics compared to skyrmions [24] and the stackability property that allows for denser quasi-one-dimensional racetracks in three dimensions, resulting in higher storage density. Bimerons are robust topological textures that are homeomorphic to skyrmions and offer more topological states than conventional skyrmions, which makes them an important focus in fundamental quasi-particle research as well as topology-based computing approaches. Despite the advantage of stabilizing pure homochiral spin textures, ferromagnetic (FM) topological spin textures suffer from limitations in scalability with respect to sufficient thermal stability [25], stackability due to long-range magnetic dipolar interactions [26] and, gyrotropic forces resulting from their net intrinsic spin angular momentum. Antiferromagnetic (AFM) systems can naturally overcome these inherent limitations of FM textures, due to their compensated spin angular momentum and negligible stray fields [27, 28, 29, 30, 31, 32, 33]. While one could envisage using single-crystalline AFMs, the inherent technological advantages are challenged by the difficulty of stabilizing pure homochiral spin textures. 
These challenges stem from the absence of significant Lifshitz invariants, resulting in the observed spin structures having random chirality [34, 35, 36, 37]. So far, the observation of in-plane (IP) topological spin textures such as bimerons in antiferromagnets has been limited to observing their helicity, in spite of recent advances in their creation via sophisticated protocols [34, 35, 36, 37]. For the dynamics, the anticipated motion of topological spin textures in the presence of spin-orbit torques is heavily influenced by their helicity, leading to Bloch-type and Neel-type structures moving perpendicular to and along the direction of the spin-orbit torque (SOT), respectively [20]. However, the lack of homochirality of spin textures in native single-crystalline antiferromagnets limits their use for controlled dynamics of spin textures and thus prevents their utilization in future spintronics devices. An ideal system to explore and manipulate both structural and dynamical properties of IP topological spin textures is a synthetic antiferromagnetic (SyAFM) platform [28, 29, 30, 38], consisting of a multi-layered heterostructure made of FM thin films separated by nonmagnetic metallic spacers and antiferromagnetically coupled via the interlayer exchange interaction [39, 40]. By tailoring the amount of compensation, such platforms can exhibit an arbitrarily small magnetic moment and, therefore, combine the most interesting features of both FM and AFM scenarios: minimal stray fields, the ability to stabilize homochiral spin textures, and the potential for ultrafast spin dynamics; all in a device-compatible, easy-to-fabricate polycrystalline multilayer setting. A precondition to assess the topological nature of such spin textures is to be able to measure their chirality. In this regard, SyAFMs offer the advantage of employing the advanced surface- or element-sensitive imaging methods available for FMs. Nonetheless, while such AFM IP topological spin textures can be formed locally during the magnetization reversal process [41], spontaneously stable homochiral spin textures on a global scale have hitherto not been observed. In this article, we employ three-dimensional (3D) vector imaging of the staggered magnetization to demonstrate the successful stabilization of all members of the class of IP AFM topological spin textures emerging in a newly designed layered SyAFM, namely merons, antimerons, and bimerons, at zero magnetic field. Our experiments combine magnetic force microscopy (MFM), scanning electron microscopy with polarization analysis (SEMPA), and element-specific photoemission electron microscopy using X-ray magnetic circular dichroism (XMCD-PEEM), which enable us to identify spin textures possessing enhanced stability, classified by integer topological invariants, as well as those that are topologically trivial. We find that in the vicinity of the spin-reorientation transition (SRT), where the effective anisotropy vanishes, a SyAFM platform can host homochiral AFM merons, as determined from their helicity and core polarity. Furthermore, their helicity can be easily tailored by the degree of magnetic compensation of the SyAFM, thus indicating that interlayer dipolar interactions play a significant role in the stabilization of these spin textures. Our micromagnetic and analytical calculations can fully explain the experimental observations, elucidate the mechanism for the stabilization of AFM topological textures in synthetic antiferromagnets, and describe the corresponding phase diagram.
Our findings provide crucial insights into the formation and stability of homochiral IP AFM topological textures, which pave the way towards more scalable soliton-based technologies beyond the skyrmion paradigm.

### Antiferromagnetic merons/antimerons in SyAFM platforms

The magnetic order of the SyAFM platform can be described phenomenologically by two parameters that reflect both its AFM and FM character, depending on the chosen ratio of magnetic compensation: the Neel order parameter \(\mathbf{L}=\mathbf{M}_{\rm t}-\mathbf{M}_{\rm b}\) and the macroscopic spin density \(\mathbf{M}=\mathbf{M}_{\rm t}+\mathbf{M}_{\rm b}\), where \(\mathbf{M}_{\rm t}\) and \(\mathbf{M}_{\rm b}\) denote the magnetization fields of the top and bottom FM layers in each double layer, respectively. As illustrated in Figure 1, the topological spin textures in the SyAFM can be classified by their winding number \(w\), topological charge \(Q\), and helicity \(\gamma\), which are defined as follows: 1) \(w\) provides a measure of the wrapping of the Neel order around the unit sphere; it reads \(w=\pm 1\) for skyrmions/bimerons, whereas it becomes \(w=\frac{1}{2}\) for a meron and \(w=-\frac{1}{2}\) for an antimeron. 2) \(Q\) can be cast as the product of the winding number and the core polarity, namely \(Q=w\cdot L_{z}|_{\rm core}\), the core polarity being defined as the \(z\) component of the Neel order at the texture core. 3) \(\gamma\) is given, akin to skyrmions[11], by the angle between the IP projection of the Neel order and the radial direction. The helicity of these (anti)meron composites can be obtained from the IP rotation of the top-layer magnetization, since the latter determines the direction of the Neel order. Neel-type merons are characterized by \(\gamma=0\) or \(\pi\), depending on the sign of the stabilizing DMI. Their spin structure is sketched in panels a) and c) of Figure 1. Bloch-type merons are characterized by \(\gamma=\frac{\pi}{2}\) or \(\frac{3\pi}{2}\), as sketched in panels b) and d) of Figure 1. Note that spin textures in adjacent FM layers exhibit identical winding numbers but opposite core polarities, so their helicities differ by \(\pi\).

### Tuning the magnetic properties to stabilize AFM (anti)merons

Choosing FM materials with low pinning, negligible perpendicular magnetic anisotropy (PMA) and finite DMI, as well as a strong AFM coupling between adjacent FM layers, is a key requirement for the stabilization of (anti)merons and bimerons in SyAFM platforms, as discussed later in the discussion section. We have therefore optimized a Pt/CoFeB/Ir-based multilayer SyAFM (see Methods for details). Each FM film consists of the bilayer Fe\({}_{0.6}\)Co\({}_{0.2}\)B\({}_{0.2}\)(FCB)/Co\({}_{0.6}\)Fe\({}_{0.2}\)B\({}_{0.2}\)(CFB) and is sandwiched between nonmagnetic spacers made of a heavy-metal bilayer (Pt and Ir). The latter breaks the mirror symmetry of the heterostructure and thus provides a finite DMI \(D\). The top and bottom FM layers are denoted by FM\({}_{\rm t}\) and FM\({}_{\rm b}\), respectively, and their saturation magnetization by \(M_{\rm s,t}\) and \(M_{\rm s,b}\). The CFB layer induces PMA (\(K_{\rm u}\)) at the interface with the heavy metal, whereas the thickness ratio to the FCB layer is used to control the magnetic dipolar anisotropy \(K_{\rm d}=-\frac{1}{2}\mu_{0}M_{\rm s}^{2}\).
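The classification introduced above (winding number \(w\), topological charge \(Q=w\cdot L_{z}|_{\rm core}\), helicity \(\gamma\)) can be estimated numerically from a discretized Neel-order field such as the one reconstructed later from the combined SEMPA/MFM data. The following Python sketch is illustrative only (it is not the analysis code used in this work); the finite-difference scheme, the array-axis conventions and the function names are our own assumptions.

```python
import numpy as np

def topological_charge_density(l):
    """Lattice approximation of q = (1/4pi) l . (d_x l x d_y l) for a unit-norm
    Neel-order field l of shape (rows, cols, 3); returns charge per plaquette.
    Summing over a window enclosing a single (anti)meron core, with the texture
    lying in-plane at the window edge, gives a total Q close to +/- 1/2."""
    dx = l[:-1, 1:, :] - l[:-1, :-1, :]   # finite difference along columns (x)
    dy = l[1:, :-1, :] - l[:-1, :-1, :]   # finite difference along rows (y)
    return np.einsum('ijk,ijk->ij', l[:-1, :-1, :], np.cross(dx, dy)) / (4.0 * np.pi)

def helicity(l, core_rc, r_min=2, r_max=8):
    """Helicity gamma: circular mean of the angle between the in-plane projection
    of l and the radial direction, evaluated on an annulus around the core pixel.
    Assumes l[..., 0] and l[..., 1] are the in-plane (x, y) components."""
    rows, cols = np.indices(l.shape[:2])
    ry, rx = rows - core_rc[0], cols - core_rc[1]
    dist = np.hypot(rx, ry)
    mask = (dist >= r_min) & (dist <= r_max)
    phi_spin = np.arctan2(l[..., 1], l[..., 0])
    phi_rad = np.arctan2(ry, rx)
    return float(np.angle(np.mean(np.exp(1j * (phi_spin - phi_rad))[mask])))
```

With these conventions, a Neel-type meron returns a helicity clustering near 0 or \(\pi\), whereas a Bloch-type meron clusters near \(\pm\pi/2\) (i.e. \(\pi/2\) or \(3\pi/2\)).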
Panels 2(b)-(d) depict the \(M(H)\) loops of the stacks #1, #2a and #3, respectively, where the red (blue) curve represents the magnetic hysteresis loop for an IP (OOP) configuration of the external magnetic field. The stack #1 has a small positive effective anisotropy. In Fig. 2(c), the red and blue hysteresis loops coincide, indicating a very small effective anisotropy \(K_{\rm eff}=-0.04\) MJ\(\cdot\)m\({}^{-3}\). Furthermore, the zero remanence makes the stack #2a a potential candidate for hosting (bi)merons, as the formation of a multi-domain magnetic ground state is expected for this stack at zero field. The effect of the magnetic compensation in synthetic antiferromagnets on the formation of meron structures has been studied via the stacks #2b and #3: the FM layers of the former have a small (normalized) uncompensated magnetization \(m_{\rm uncom}=\frac{|M_{\rm s,t}-M_{\rm s,b}|}{M_{\rm s,t}+M_{\rm s,b}}=0.05\), which enables the detection of the OOP spin components of the meron textures via MFM imaging (see supplementary material (SM) section 4 for the \(M(H)\) curve of stack #2b). Figure 2(d) shows the hysteresis loops of the stack #3, which indicate a negative value of \(K_{\rm eff}\).

Figure 1: **Spin configuration of AFM merons and antimerons in a SyAFM platform.** **(a)–(d)** AFM merons with helicities \(\gamma=0,\frac{\pi}{2},\pi\), and \(\frac{3\pi}{2}\), respectively. **(e)** AFM antimeron. Black and white arrows represent upward and downward core polarities, respectively, while the IP component of the moments is given by the color map in the top left corner.

Figure 2: **Material stack and magnetic properties of the SyAFM.** **(a)** Multilayer structure of the SyAFM, where FM\({}_{\rm t}\) and FM\({}_{\rm b}\) denote the AFM-coupled top and bottom FM layers, respectively. The units in parentheses are nm. **(b)–(d)** OOP (blue curve) and IP (red curve) hysteresis loops measured by means of SQUID magnetometry for the stacks **(b)** #1 (fully compensated OOP), **(c)** #2a (fully compensated IP) and **(d)** #3 (uncompensated IP). The fully compensated stack #2a shows a complete overlap between the IP and OOP \(M(H)\) curves, which indicates that the multilayer is in the vicinity of the SRT.

## 3D-vector imaging of merons/antimerons in the SyAFM

Next, we reveal the topological spin textures that are present; the full vector reconstruction of the Neel order requires imaging of both the IP and OOP spin components of the meron spin structures. Since the stack #2a has full magnetic compensation, its OOP spin components are almost impossible to visualize via MFM. However, since the stacks #2b and #3 produce small stray fields, by using the combination of SEMPA and MFM imaging techniques at the same spot we can reconstruct the Neel order describing the topological textures in these SyAFM platforms.

Figure 3: **Imaging the Néel order parameter of (anti)merons and bimerons in synthetic antiferromagnets.** **(a)** SEMPA image showing the IP spin components of the meron texture in the stack #2b. **(b)** MFM image depicting the OOP spin component of the meron structure in the same area. Dark brown and white MFM contrasts indicate the upward and the downward direction, respectively. The color map for the SEMPA image is shown on the left side. Black dotted circles represent merons of helicity \(\gamma=\pi\), whereas double black dotted circles indicate antimerons with \(Q=-\frac{1}{2}\). White circles represent merons having arbitrary helicity with \(Q=\frac{1}{2}\) and white double circles denote merons of helicity \(\gamma=0\) with \(Q=\frac{1}{2}\).
Two adjacent black circles (single and double) are identified as bimerons with net topological charge \(Q=-1\) and are additionally highlighted by ellipses. **(c)** Histogram of the next-neighbour separation between merons, antimerons and meron-antimeron composites obtained from SEMPA images. **(d)** Histogram of the next-neighbour separation between up-up, down-down, and up-down core polarities from the MFM contrast.

Panels 3(a) and 3(b) show the SEMPA image (IP spin components) and the MFM image (OOP spin component) taken in the same area of the stack #2b at room temperature and zero magnetic field. The observed topological textures are created by applying a damped oscillating OOP magnetic field prior to imaging. White and dark brown contrasts in Figure 3(b) show the downward and upward core polarity, respectively. Furthermore, black and white circles in both images correspond to \(Q=-\frac{1}{2}\) and \(Q=\frac{1}{2}\), respectively. We note that \(M_{\text{s,t}}>M_{\text{s,b}}\), so the stray field detected in the MFM measurement is dominated by that of the topmost layer. The analysis of both images yields the presence of different topological spin textures: black dotted circles represent Neel-type merons with core polarity pointing downward (\(L_{z}=-1\)), namely the \(\gamma=\pi\) merons from Fig. 1(c). White double circles correspond to merons of helicity \(\gamma=0\) and upward core polarity, as described in panel 1(a). We note that the helicities \(\gamma=0,\pi\) largely correspond to the core polarities \(L_{z}=1\) and \(-1\), respectively, which indicates the presence of homochiral merons in the system. Double black dotted circles mark antimerons with topological charge \(Q=-\frac{1}{2}\), see Fig. 1(e). Changes in \(\gamma\) do not affect the topological structure of the antimeron except for a geometrical rotation of its IP spin components, so we consider all of them to be topologically equivalent. The combination of a single black circle adjacent to a double one is identified as a bimeron with \(Q=-1\) and marked by an ellipse. The AFM coupling between the meronic spin textures present in the adjacent FM layers has been confirmed by means of XMCD-PEEM layer-resolved imaging (see SM section 2). From the arrangement shown in 3(a), it is evident that merons and antimerons are positioned in close proximity to each other. To gain a more comprehensive understanding of the range of their interaction, we conducted a statistical analysis of the distance between meronic textures over a larger sample area. Panel 3(c) shows the histogram of separations between the constituents of various meron composites as seen in the SEMPA images. Meron-antimeron pairs average at a separation of (490 \(\pm\) 32) nm, closer than that between two antimerons or two merons, suggesting a different interaction potential with an energy minimum at a smaller meron-antimeron distance due to the presence of DMI. Based on the core polarities, meron-antimeron composites can have non-zero topological charge or be trivial spin textures with \(Q\) = 0. In panel 3(d), histograms of the next-neighbour distances between core polarities as seen in the MFM images are presented.
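Such nearest-neighbour statistics can be assembled with a few lines of code once the core positions of merons and antimerons have been extracted from the SEMPA/MFM images. The sketch below is a minimal illustration only; the extraction step, the pixel size and the variable names are placeholders and not the procedure used in this work.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_separations(points_a, points_b, same_set=False, pixel_size_nm=1.0):
    """Distance (in nm) from each core in points_a to its nearest neighbour in
    points_b; with same_set=True the trivial zero-distance self-match is skipped."""
    tree = cKDTree(np.asarray(points_b, float))
    d, _ = tree.query(np.asarray(points_a, float), k=2 if same_set else 1)
    d = d[:, 1] if same_set else d
    return d * pixel_size_nm

# merons, antimerons: (N, 2) arrays of extracted core coordinates (hypothetical input).
# d_ma = nn_separations(merons, antimerons)              # meron-antimeron pairs
# d_mm = nn_separations(merons, merons, same_set=True)   # meron-meron pairs
# counts, edges = np.histogram(d_ma, bins=20)
# print(d_ma.mean(), d_ma.std(ddof=1))                   # mean +/- spread of separations
```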
The average separation between up-down core polarities, (425 \(\pm\) 40) nm, coincides within the experimental uncertainty with the average separation between merons and antimerons in the SEMPA images and is smaller than the separation between up-up or down-down polarities. This supports the existence of non-zero topological charges in the meron-antimeron pairs and the dominance of bimerons in the sample, as shown in Panels 3(a) and 3(b).

### Tailoring the helicity of (anti)merons in synthetic antiferromagnets

The measured helicity values are essential in identifying the mechanism of stabilization and in engineering the SOT-induced dynamics of meron structures. In this section, we demonstrate the control of helicity in synthetic antiferromagnets through variation of the magnetic compensation ratio. Figure 4(a) shows a SEMPA image of the topmost-layer magnetization, and hence the direction of the Neel order, for the stack #2a in the absence of a magnetic field. We have marked all merons with white circles and all antimerons with black circles, and find an almost equal proportion of both types of topological spin textures. We elucidate the relevance of the helicity by analyzing its values for the emergent merons through a histogram, as shown in panel 4(b). This histogram has been generated by considering additional SEMPA images of the stack #2a obtained under comparable conditions. Values of \(\gamma=0,\pi\) are significantly favoured in this SyAFM platform, which corresponds to the Neel-type rotation and therefore confirms that the stabilization mechanism for merons in the compensated case originates from the DMI [42, 43, 24]. Similarly, Neel bimerons are energetically favourable in the same range of DMI [24]. The more uncompensated case has been studied in stack #3, which presents an uncompensated magnetization of \(m_{\rm uncom}=0.20\) and thus a small but significant interlayer dipolar field. A SEMPA image of its topmost-layer magnetization is shown in Figure 4(c), together with the direction of the net IP magnetization. We again observe a nearly equal number of merons and antimerons (same white/black color convention as before). The analysis of the distribution of meron helicities, see panel 4(d), yields a prevalence of values around \(\gamma=\frac{\pi}{2},\frac{3\pi}{2}\), which indicates a Bloch-type rotation. We conclude that the presence of a small uncompensated magnetization in the SyAFM promotes the stabilization of Bloch-meron textures. Thus, by tuning the compensation ratio, we can effectively manipulate the helicity and, consequently, the resulting SOT-induced dynamics.

Figure 4: **Manipulating the helicity of (anti)merons in SyAFM.** **(a)** SEMPA image showing the IP spin components of the meronic spin texture in the stack #2a, indicating the IP orientation of the staggered magnetization. Black and white circles denote antimerons and merons, respectively. **(b)** Distribution of helicities of the merons present in the SyAFM. The abundance of helicities at 0 and \(\pi\) indicates homochiral Néel merons in the stack. **(c)** SEMPA image showing the IP spin components of the meronic spin texture in the (uncompensated) case of stack #3. **(d)** The distribution shows the dominance of Bloch merons having helicity \(\pi/2\) and \(3\pi/2\) in the stack.

### Theoretical model and Discussion

In the preceding sections, we have experimentally demonstrated the occurrence of AFM merons, antimerons, and bimerons in the SyAFM platform.
The stabilization of meronic spin textures in SyAFMs results from the subtle interplay between the interlayer exchange, the interlayer magnetic dipolar interactions (IMD), and the interfacial DMI, as well as the effective anisotropies of the FM layers. This mechanism of bimeron stabilization in a SyAFM platform is analyzed and explained in this section, supported by theoretical models and micromagnetic simulations. We start with a SyAFM that can be effectively described, irrespective of its magnetic compensation ratio, as a ferrimagnetic platform with magnetic sublattices given by the top and bottom FM layers, respectively. In the compensated case, the minimal model for the SyAFM contains exchange, DMI, and anisotropy contributions, and thus its total free energy reads \[\mathcal{E}[\mathbf{L}]=\int_{\mathcal{S}}d^{2}\mathbf{r}\left[\tfrac{A}{2}(\nabla\mathbf{L})^{2}+D\mathbf{L}\cdot(\tilde{\nabla}\times\mathbf{L})-KL_{z}^{2}\right], \tag{1}\] where \(A\) is the AFM spin stiffness constant and \(\mathcal{S}\) denotes the SyAFM surface, see SM. The effective (easy-axis) anisotropy constant for the Neel order, \(K=K_{\rm eff}-\tfrac{H_{\rm d}^{2}}{2\lambda\mathbf{L}^{2}}\), has an additional contribution originating in the competition between the interlayer exchange and IMD interactions. Here, \(H_{\rm d}\) and \(\lambda\) denote the interlayer stray field and half of the interlayer exchange constant, respectively. In the vicinity of the SRT point, \(K_{\rm eff}\sim\frac{H_{\rm d}^{2}}{2\lambda L^{2}}\ll K_{\rm u}\), and therefore the IMD field can induce the reorientation (from OOP to IP) of the staggered magnetization describing the SyAFM, which in turn favours the stability of in-plane meron textures. We note that FM systems can be tuned close to zero effective anisotropy but lack the IMD field, whereas AFM platforms have a spin-flop contribution to the effective anisotropy but their SRT is usually driven by temperature [34]. Moreover, owing to its antiferromagnetic ordering, a native AFM typically lacks significant Lifshitz invariants and therefore does not fulfil the criteria necessary to stabilize homochiral spin structures. The interplay between the tunable (near-zero) effective FM-layer anisotropy and the IMD field significantly lowers the DMI needed to stabilize IP AFM textures, which makes synthetic antiferromagnets optimal platforms to explore the physics of these AFM spin textures. Furthermore, in the uncompensated scenario, a Zeeman-like interaction \(-\frac{\Theta}{\mathbf{L}^{2}}H_{\rm d}L_{z}\) contributes to the energetics of the SyAFM, which favours the OOP orientation of the Neel order and, therefore, the stability of Bloch-type IP merons at low DMI. Here, \(\Theta=M_{s,t}^{2}-M_{s,b}^{2}\) parametrizes the absence of magnetic compensation in the SyAFM. Panel 5(a) depicts the dependence of the critical DMI (\(D_{\rm c}\)), marking the onset of the meron instability (towards the helical phase), on the interlayer exchange constant \(2\lambda\).

Figure 5: **Critical point and micromagnetics of bimerons in synthetic antiferromagnets.** **(a)** Dependence of the critical DMI \(D_{\rm c}\) on the interlayer exchange constant \(2\lambda\). The inset shows the phase diagram corresponding to the point A along the curve \(D_{\rm c}(\lambda)\). The ground-state phases coalesce at the triple point \((\lambda,D_{\rm c})\) in the \(D-K_{\rm eff}\) phase diagram, marked with a red star.
Dark green, yellow and bronze colors illustrate the uniform IP, the uniform OOP and the helical (in the \(rz\) plane) ground-state phases, respectively, with \(r\) being any radial direction. The light green color depicts the region where AFM merons are stabilized in micromagnetic simulations. The magenta zigzag line illustrates the lower bound of the optimal AFM interlayer exchange (energies) accessible in our stacks. The phase diagrams corresponding to the points B and C along the critical curve, which illustrate a displacement of the triple point to the right, are discussed in the SM section 5. **(b)** Illustration of an AFM bimeron corresponding to a system with parameters at point A. Black and grey arrows depict the antiparallel alignment of the spins of the top and bottom FM layers, respectively. Green and red colors show the OOP projections of the localized spins.

For a strongly AFM-coupled SyAFM (i.e., large \(\lambda\)) in the vicinity of the SRT point (namely, \(K_{\rm eff}\simeq 0\)), only a very small DMI is needed to induce the phase transition from the uniform IP configuration to a helical phase (denoted by the green and bronze domains in the inset). We note that the expression for the DMI at the triple point can always be obtained from the condition \(K_{\rm eff}=0\), which yields \(D_{\rm c}=\frac{4}{\pi}\sqrt{A\left[\frac{H_{\rm d}^{2}}{2\lambda L^{2}}\right]}\) (see Methods). The low \(D_{\rm c}\) stems from the fact that the only contribution to the effective anisotropy at the SRT point comes from the IMD field. As the SyAFM is tuned away from the SRT point towards an IP configuration (i.e., \(K<0\)), the critical value \(D_{\rm c}\) increases, since one needs to overcome a larger anisotropy barrier to induce the OOP tilting of the staggered magnetization. The phase boundary between the uniform IP and helical phases has been calculated analytically (see Methods) and is described parametrically by the curve \(D_{\rm c}^{\rm IP}=\frac{4}{\pi}\sqrt{A\left[\frac{H_{\rm d}^{2}}{2\lambda L^{2}}-K_{\rm eff}\right]}\). Furthermore, the ground state is the uniform OOP configuration when \(K>0\) (depicted as a yellow domain in the inset) and, as the DMI increases above \(D_{\rm c}\), a phase transition towards the helical state is induced, which is well known in magnetic films with PMA. We conclude the discussion by noting that, as seen in the inset C of SM figure s9, attaining the critical value \(D_{\rm c}\propto\frac{1}{\sqrt{\lambda}}\) for weak AFM interlayer couplings can be very difficult experimentally; consequently, systems with strong AFM interlayer coupling offer a more viable route for stabilizing bimerons.

### Outlook

In conclusion, we demonstrate the presence of chiral merons, antimerons, and topologically stabilized bimerons in synthetic antiferromagnets at zero magnetic field. The direction of the net magnetization and the emergent field created by the topology in bimerons are mutually orthogonal, a key difference to their out-of-plane counterparts, skyrmions. Thus, meronic spin textures offer an approach to directly explore and tune topological Hall physics. Their Hall signal will be directly sensitive to the topology, enabling the electrical readout of the topological winding number and leading to new possibilities for the design of magnetic topology-based technologies in which the topology encodes the information.
Our findings show that these AFM textures can be detected, with accurate helicity and topological charge, through a combination of surface-sensitive SEMPA imaging and MFM imaging. The fully compensated synthetic antiferromagnets host homochiral Neel bimerons that are stable at room temperature. In SyAFMs, bimerons exhibit an important advantage over previously demonstrated AFM bimerons due to the presence of homochiral spin textures, which makes them amenable to controlled manipulation using spin-orbit torques and in turn opens up new possibilities for designing spintronic devices in SyAFMs. This combines the advantages of FMs, such as easy detection, with those of AFMs, such as the absence of long-range stray fields.

## Acknowledgments

The authors thank A. Bose, A. Rajan and E. Galindez Ruales for their participation in additional experiments included in the Supplementary Material. This work has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 860060 "Magnetism and the effect of Electric Field" (MagnEFi) as well as from Synergy Grant No. 856538, project "3D-MAGiC". It has also been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - TRR 173 - 268565370 (project A03), the Grant Agency of the Czech Republic grant no. 19-28375X, and the Dynamics and Topology Centre TopDyn funded by the State of Rhineland Palatinate.

## Methods

### Material deposition

The thin-film material stacks were deposited on thermally oxidized Si/SiO2 substrates employing the Singulus Rotaris magnetron sputtering tool, which provides reproducibility and sub-Angstrom thickness accuracy. DC-magnetron sputtering at a base pressure of 4 \(\times\) 10\({}^{-8}\) mbar was employed for the growth of the metallic layers Ta, Pt, Ir, Co\({}_{0.6}\)Fe\({}_{0.2}\)B\({}_{0.2}\)(CFB), Fe\({}_{0.6}\)Co\({}_{0.2}\)B\({}_{0.2}\)(FCB), and Co\({}_{0.8}\)B\({}_{0.2}\) at room temperature. The respective deposition rates were determined with X-ray reflectivity to be 0.54, 0.91, 0.56, 0.51, 0.66, and 0.37 Å s\({}^{-1}\), with pure Ar flow used as the sputtering gas. The top Pt layer serves as a capping layer for the material stack to prevent oxidation over time and during post-processing (patterning). The thickness \(d_{\rm Ir}=0.4\) nm of the Ir layer is chosen so as to maximize the AFM interlayer exchange between the FM layers. We have kept the FM layers as thin as possible to maximize the interlayer exchange coupling (\(d_{\rm FM}=0.9\) nm) and have optimized the ratio between the FCB (\(x\) nm) and CFB (\(0.9-x\) nm) thicknesses to be in the vicinity of the SRT (namely, to obtain a vanishing effective anisotropy \(K_{\rm eff}=K_{\rm u}+K_{\rm d}\)). The value of \(x\) is 0.2 in stack #1, whereas it is 0.7 in #2a. In #3, the FM layers correspond to different values of the parameter \(x\) (\(x_{\rm FM_{t}}=0.7\) and \(x_{\rm FM_{b}}=0.2\)). As depicted in Figure 2(a), a heterostructure consisting of multiple repetitions (14 times) of the single bilayer SyAFM has been prepared to 1) reduce the thermal diffusion of (anti)merons across the low-pinning SyAFM, and 2) obtain a high saturation field in the hysteresis loops. In stack #3, the bottom and top FM layers are made of Co\({}_{0.8}\)B\({}_{0.2}\) and FCB, respectively, and have thicknesses of \(d_{\rm FM_{b}}=1.305\) nm and \(d_{\rm FM_{t}}=0.9\) nm, which yields the value \(m_{\rm uncom}=0.20\) for the uncompensated magnetization.
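For orientation, the quoted compensation ratio can be related to the layer parameters through the areal moments \(m_{i}=M_{{\rm s},i}\,d_{i}\). Comparing areal moments for unequal layer thicknesses is a natural generalization of the definition used earlier, but it is our assumption here; the saturation-magnetization values in the sketch below are illustrative placeholders (they are not given in the text) chosen only so that the ratio lands near the quoted 0.20.

```python
def m_uncom(ms_top, d_top, ms_bottom, d_bottom):
    """Normalized uncompensated magnetization from areal moments m_i = Ms_i * d_i."""
    mt, mb = ms_top * d_top, ms_bottom * d_bottom
    return abs(mt - mb) / (mt + mb)

# Stack #3-like geometry: d_t = 0.9 nm (FCB), d_b = 1.305 nm (CoB).
# The Ms values (in MA/m) below are assumed for illustration only.
print(round(m_uncom(ms_top=1.3, d_top=0.9, ms_bottom=0.6, d_bottom=1.305), 2))  # ~0.20
```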
### SEMPA imaging

For imaging the in-plane components of the magnetic spin texture we used a surface-sensitive technique, the scanning electron microscope with polarization analysis (SEMPA)[44]. SEMPA is a powerful in-house imaging technique that uses the spin-polarized secondary electrons emitted from a magnetic material and gives a two-dimensional (2D) vector map of the IP magnetization. The sensitivity of SEMPA is limited to a depth of \(1-2\) nm from the surface, which enables us to image the topological spin textures present only in the topmost magnetic layer. This unique feature of SEMPA is especially effective on synthetic antiferromagnets, enabling us to investigate the formation of topological spin textures even in a fully compensated composition. SEMPA color-coded images enable us to determine the winding number of the topological spin textures and classify them accordingly. Also, the sense of the in-plane rotation gives information about the exact helicity of these meronic spin structures. We note that SEMPA images do not resolve the OOP component of the magnetization; hence merons and antimerons of equal helicity (e.g., 1 (a) and 1 (f)) give similar SEMPA color contrast, as shown in figure 1(k). This prohibits determining \(Q_{T}\) solely from the IP contrast. Similarly, Figures 1 (l)-(n) denote SEMPA images with \(w=\frac{1}{2}\) having \(\gamma\) values \(\frac{\pi}{2}\), \(\pi\), and \(\frac{3\pi}{2}\).

### Micromagnetic approach

#### Analytical expression for the phase boundaries

We explore the possible ground states of the model (1) along the lines of Ref.[24]. We consider the most generic ansatz for a helix in real space, which can be parametrized by the normal \(\vec{\mathbf{n}}\) to the plane of the helix and the helical pitch vector \(\vec{q}\). The Neel order can be cast in terms of this parametrization as \[\mathbf{l}(\vec{r})=\cos(\vec{q}\cdot\vec{r})\mathbf{\hat{e}}_{1}+\sin(\vec{q}\cdot\vec{r})\mathbf{\hat{e}}_{2}+m_{0}\mathbf{\hat{n}} \tag{2}\] where \(\mathbf{l}(\vec{r})=\mathbf{L}(\vec{r})/|\mathbf{L}|\) and \(\{\mathbf{\hat{e}}_{1},\mathbf{\hat{e}}_{2},\mathbf{\vec{n}}\}\) defines a local frame of reference in the spin space. Upon substituting this expression into Eq. (1), we obtain the following identity for the energy density functional: \[\epsilon\left[\mathbf{l}(\vec{r})\right]=\tfrac{1}{1+m_{0}^{2}}\Big{\{}\tfrac{J}{2}\vec{q}^{\,2}+D\left(q_{x}\sin\theta\sin\phi-q_{y}\sin\theta\cos\phi\right)+K\big{(}\left[\tfrac{1}{2}+m_{0}^{2}\right]+\left[\tfrac{1}{2}-m_{0}^{2}\right]\cos^{2}\theta\big{)}\Big{\}}. \tag{3}\] This functional is minimized with respect to the variables \(\{\theta,\phi,\vec{q},m_{0}\}\) and the different possible extrema are found. The lowest-energy configuration for a given set of parameters determines the ground state. The phase boundaries separating any two possible ground states are determined by equating their corresponding energies, from which the expression of the DMI \(D\) as a function of \(K_{\rm eff}\) is obtained. For instance, the phase boundary for the OOP-helical transition is parametrized by the curve \(D_{\rm c}^{\rm OOP}(\lambda)=\frac{4}{\pi}\sqrt{A\left[K_{\rm eff}-\frac{H_{\rm d}^{2}}{2\lambda\mathbf{L}^{2}}\right]}\).

### Micromagnetic simulations

Micromagnetic simulations were performed using the Mumax3 software [45]. The following setup was implemented in the simulations leading to Fig. 5.
A bilayer square geometry of lateral size 256 nm and thickness 1 nm for each of the FM layers was considered, and the dipolar interaction was included. The system was discretized with a mesh size of \(1\times 1\times 1\) nm\({}^{3}\), and periodic boundary conditions along the \(x\) and \(y\) directions were imposed, with a period equal to 16 repetitions. The material parameters are \(A=1\times 10^{-11}\) Jm\({}^{-1}\) for the exchange constant, \(M_{s}\) = 0.145 MAm\({}^{-1}\) for the saturation magnetization and \(\alpha=0.01\) for the Gilbert damping. The strength of the interlayer exchange coupling was chosen to be \(\lambda=0.44\times 10^{-3}\) Jm\({}^{-2}\), which corresponds to the value obtained from SQUID measurements. We note that, in Mumax3, interlayer exchange interactions are properly accounted for by rescaling the material parameters by the thickness of the spacer (see the ext_scaleExchange function) [45]. To explore the \(D-K_{\rm eff}\) phase diagram, the effective out-of-plane uniaxial anisotropy \(K_{\rm eff}\) and the DMI \(D\) were varied in the ranges \(\left[-3\times 10^{5},3\times 10^{5}\right]\) Jm\({}^{-3}\) and \(\left[0,2\times 10^{-3}\right]\) Jm\({}^{-2}\), respectively. An initial meron configuration is chosen in the simulations, which is then relaxed to find the equilibrium configuration. The parameter space of \((D,K_{\rm eff})\) was swept to obtain the light green shaded region in Fig. 5. The stability of the bimerons was confirmed via micromagnetic simulations performed with Mumax3. Their size and shape were analyzed in the domain \(D<D_{\rm c}^{\rm IP}\) of the parameter space \(\lambda-D\). Panel 5(b) depicts the magnetization profile of a bimeron stabilized for the material parameters summarised in the Methods section. To gain deeper insight into the impact of various magnetic parameters on the properties of AFM bimerons, we conduct micromagnetic simulations (see SM section 6). Our simulations demonstrate that both the DMI and \(K_{z}\) jointly assist in the formation of larger bimerons in the SyAFM, while a stronger interlayer exchange stabilizes smaller bimerons. These findings show that by tuning these properties we can design and optimize AFM bimeron-based devices.
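As a cross-check on the swept \((D,K_{\rm eff})\) region, the analytical boundary \(D_{\rm c}^{\rm IP}=\frac{4}{\pi}\sqrt{A\left[\frac{H_{\rm d}^{2}}{2\lambda L^{2}}-K_{\rm eff}\right]}\) quoted above can be evaluated directly. The sketch below uses the exchange stiffness from the simulations; since \(H_{\rm d}\) and \(L\) are not quoted numerically here, the IMD-induced anisotropy shift is lumped into a single assumed parameter.

```python
import numpy as np

def critical_dmi_ip(A, K_eff, K_imd):
    """D_c^IP = (4/pi) * sqrt(A * (K_imd - K_eff)), with K_imd = H_d^2 / (2*lambda*L^2).
    A in J/m, anisotropies in J/m^3, result in J/m^2; clipped to zero where the
    uniform IP state is not destabilized by this mechanism."""
    return (4.0 / np.pi) * np.sqrt(np.clip(A * (K_imd - K_eff), 0.0, None))

A = 1.0e-11      # exchange stiffness used in the simulations (J/m)
K_imd = 2.0e4    # assumed IMD anisotropy shift (J/m^3), placeholder value
for K_eff in (-3.0e5, -1.0e5, 0.0):
    print(f"K_eff = {K_eff:+.1e} J/m^3  ->  D_c ~ {critical_dmi_ip(A, K_eff, K_imd):.2e} J/m^2")
```

At the SRT point (\(K_{\rm eff}=0\)) only the assumed IMD term remains, reproducing the low critical DMI discussed in the main text.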
2301.02848
Boundedness of the Fifth Derivative for the One-Particle Coulombic Density Matrix at the Diagonal
Boundedness is demonstrated for the fifth derivative of the one-particle reduced density matrix for non-relativistic Coulombic wavefunctions in the vicinity of the diagonal. To prove this result, strong pointwise bounds are obtained for cluster derivatives of wavefunctions involving multiple clusters.
Peter Hearnshaw
2023-01-07T13:21:53Z
http://arxiv.org/abs/2301.02848v2
# Boundedness of the fifth off-diagonal derivative for the one-particle Coulombic density matrix ###### Abstract. Boundedness is demonstrated for the fifth derivative of the one-particle reduced density matrix for non-relativistic Coulombic wavefunctions in the vicinity of the diagonal. Key words and phrases:Multi-particle quantum system, Coulombic wavefunction, Schrodinger equation, one-particle density matrix 2010 Mathematics Subject Classification: Primary 35B65; Secondary 35J10, 81V55 ## 1. Introduction and results We consider the non-relativistic quantum systems of \(N\geq 2\) electrons among \(N_{0}\) nuclei which represents the system of an atom or molecule. For simplicity we restrict ourselves to the case of an atom (\(N_{0}=1\)), although all results readily generalise to the molecular case. The electrons have coordinates \(\mathbf{x}=(x_{1},\ldots,x_{N}),x_{k}\in\mathbb{R}^{3}\), \(k=1,\ldots,N\), and the nucleus has charge \(Z>0\) and its position fixed at the origin. The corresponding Schrodinger operator is \[H=-\Delta+V \tag{1.1}\] where \(\Delta=\sum_{k=1}^{N}\Delta_{x_{k}}\) is the Laplacian in \(\mathbb{R}^{3N}\), i.e. \(\Delta_{x_{k}}\) refers to the Laplacian applied to the variable \(x_{k}\), and \(V\) is the Coulomb potential given by \[V(\mathbf{x})=-\sum_{k=1}^{N}\frac{Z}{|x_{k}|}+\sum_{1\leq j<k\leq N}\frac{1}{ |x_{j}-x_{k}|} \tag{1.2}\] for \(\mathbf{x}\in\mathbb{R}^{3N}\). This operator acts in \(L^{2}(\mathbb{R}^{3N})\) and is self-adjoint on \(H^{2}(\mathbb{R}^{3N})\), see for example [1, Theorem X.16]. We consider solutions to the eigenvalue problem for \(H\) in the operator sense, namely \[H\psi=E\psi \tag{1.3}\] for \(\psi\in H^{2}(\mathbb{R}^{3N})\) and \(E\in\mathbb{R}\). For each \(j=1,\ldots,N\), we represent \[\hat{\mathbf{x}}_{j}=(x_{1},\ldots,x_{j-1},x_{j+1},\ldots,x_{N}),\quad(x,\hat {\mathbf{x}}_{j})=(x_{1},\ldots,x_{j-1},x,x_{j+1},\ldots,x_{N})\] for \(x\in\mathbb{R}^{3}\). We define the _one-particle reduced density matrix_, or simply _density matrix_, by \[\gamma(x,y)=\int_{\mathbb{R}^{3N-3}}\psi(x,\hat{\mathbf{x}})\overline{\psi(y, \hat{\mathbf{x}})}\,d\hat{\mathbf{x}},\quad\ \hat{\mathbf{x}}=\hat{\mathbf{x}}_{1}. \tag{1.4}\] for \(x,y\in\mathbb{R}^{3}\). More commonly, the one-particle reduced density matrix is defined as the function \[g(x,y)=\sum_{j=1}^{N}\int\limits_{\mathbb{R}^{3N-3}}\psi(x,\hat{\mathbf{x}}_{j} )\overline{\psi(y,\hat{\mathbf{x}}_{j})}\,d\hat{\mathbf{x}}_{j},\quad x,y\in \mathbb{R}^{3}. \tag{1.5}\] However, since we are interested only in regularity properties we need only study one term of (1.5), hence the definition (1.4). In fact we have \(g(x,y)=N\gamma(x,y)\) whenever \(\psi\) is totally antisymmetric. An important related function is the _one-particle density_, or simply _density_, which is defined here as \[\rho(x)=\gamma(x,x)=\int_{\mathbb{R}^{3N-3}}|\psi(x,\hat{\mathbf{x}})|^{2}\,d \hat{\mathbf{x}},\quad x\in\mathbb{R}^{3}. \tag{1.6}\] Now suppose \(\psi\) is any eigenfunction obeying (1.3) with \(\left\|\psi\right\|_{L^{2}(\mathbb{R}^{3N})}=1\), and let \(\rho\) and \(\gamma\) be the corresponding functions as defined in (1.6) and (1.4) respectively. In [2], real analyticity is proven for \(\gamma(x,y)\) as a function of two variables in the set \[\mathcal{D}=\{(x,y)\in\mathbb{R}^{3}\times\mathbb{R}^{3}:x\neq 0,y\neq 0,x\neq y\}.\] In particular, real analyticity was not shown across the diagonal, that is where \(x=y\). 
Previously, real analyticity was shown to hold for the density \(\rho(x)\) on the set \(\mathbb{R}^{3}\backslash\{0\}\) in [3], see also [4]. In other words, this shows real analyticity of \(\gamma\) in one variable along the diagonal \(x=y\), excluding the point \(x=y=0\). Despite this, \(\gamma\) is not smooth across the diagonal, as discussed in [5]. Pointwise derivative estimates for \(\gamma\) on the set \(\mathcal{D}\) were given in that paper, namely in [5, Theorem 1.1], and are restated as follows. For \(b\geq 0\), \(t>0\) define \[h_{b}(t)=\begin{cases}t^{\min\{0,5-b\}}&\text{if }b\neq 5\\ \log(t^{-1}+2)&\text{if }b=5.\end{cases}\] We denote \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). For all \(R>0\) and \(\alpha,\beta\in\mathbb{N}_{0}^{3}\) with \(|\alpha|,|\beta|\geq 1\) there exists \(C\) such that \[|\partial_{x}^{\alpha}\partial_{y}^{\beta}\gamma(x,y)|\leq C\big{(}1+|x|^{2-|\alpha|-|\beta|}+|y|^{2-|\alpha|-|\beta|}+h_{|\alpha|+|\beta|}(|x-y|)\big{)}\left\|\rho\right\|_{L^{1}(B(x,R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(y,R))}^{1/2} \tag{1.7}\] and for all \(|\alpha|\geq 1\) there exists \(C\) such that \[|\partial_{x}^{\alpha}\gamma(x,y)|+|\partial_{y}^{\alpha}\gamma(x,y)|\leq C\big{(}1+|x|^{1-|\alpha|}+|y|^{1-|\alpha|}+h_{|\alpha|}(|x-y|)\big{)}\left\|\rho\right\|_{L^{1}(B(x,R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(y,R))}^{1/2} \tag{1.8}\] for all \(x,y\in\mathbb{R}^{3}\) with \(x\neq 0\), \(y\neq 0\) and \(x\neq y\). The notation \(\partial_{x}^{\alpha}\) refers to the \(\alpha\)-partial derivative in the \(x\) variable. The constant \(C\) depends on \(\alpha,\beta,R,N\) and \(Z\). The right-hand side is finite because \(\psi\) is normalised and hence \(\rho\in L^{1}(\mathbb{R}^{3})\). The bounded first derivative at the nucleus reflects local Lipschitz continuity of \(\gamma\) on \(\mathbb{R}^{6}\). In addition, there is local boundedness of up to four derivatives at the diagonal, with at worst a logarithmic singularity for the fifth derivative. The purpose of the current paper is to show local boundedness of the fifth derivative at the diagonal. The existence of the fifth-order cusp at the diagonal was previously demonstrated in [6] (see also [7]). In [6], quantum chemistry calculations show that for \(x,r\in\mathbb{R}^{3}\), \(x\neq 0\) and small \(r\) we have \[\operatorname{Re}[\gamma(x+r,x-r)]=\gamma(x,x)+C(x)|r|^{5}+R(x,r) \tag{1.9}\] for some functions \(C(x)\) and \(R(x,r)\), the latter having no contribution from \(|r|^{k}\) for \(k=0,1,3,5\) in the small \(|r|\) expansion at \(r=0\). Our main result is as follows. **Theorem 1.1**.: _Let \(\psi\) be an eigenfunction of (1.3). Define \(m(x,y)=\min\{1,|x|,|y|\}\). Then for all \(|\alpha|+|\beta|=5\) and \(R>0\) there exists \(C\), depending on \(R\) and also on \(Z,N\) and \(E\), such that_ \[|\partial_{x}^{\alpha}\partial_{y}^{\beta}\gamma(x,y)|\leq Cm(x,y)^{-4}\,\|\rho\|_{L^{1}(B(x,R))}^{1/2}\,\|\rho\|_{L^{1}(B(y,R))}^{1/2} \tag{1.10}\] _for all \(x,y\in\mathbb{R}^{3}\) obeying \(0<|x-y|\leq(2N)^{-1}m(x,y)\)._ **Remark 1.2**.: 1. Theorem 1.1 naturally extends to the case of a molecule with several nuclei whose positions are fixed. The modifications are straightforward. 2. The bound (1.10) naturally complements (1.7) and (1.8) for \(|\alpha|+|\beta|\neq 5\). 3. As a consequence of [5, Proposition 7.1], the inequality (1.10) shows that \(\gamma\in C^{4,1}_{loc}\big{(}(\mathbb{R}^{3}\backslash\{0\})\times(\mathbb{R}^{3}\backslash\{0\})\big{)}\).
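To make the objects in (1.4) and (1.6) concrete, the following sketch evaluates \(\gamma(x,y)\) by Monte Carlo for a toy two-electron product wavefunction built from hydrogenic orbitals. This toy function is not an eigenfunction of (1.3) with the interacting Coulomb potential and is not normalised; the sketch only illustrates the definitions, not the regularity results, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi_toy(x, xhat, Z=2.0):
    """Toy real-valued product wavefunction psi(x, xhat) = phi(x) * phi(xhat),
    with phi a (non-normalised) hydrogenic 1s orbital."""
    phi = lambda u: np.exp(-Z * np.linalg.norm(u, axis=-1))
    return phi(x) * phi(xhat)

def gamma_toy(x, y, n_samples=200_000, box=4.0):
    """Monte Carlo estimate of gamma(x, y) = int psi(x, xhat) psi(y, xhat) dxhat,
    sampling the remaining electron coordinate uniformly in [-box, box]^3."""
    xhat = rng.uniform(-box, box, size=(n_samples, 3))
    volume = (2.0 * box) ** 3
    return volume * np.mean(psi_toy(np.asarray(x), xhat) * psi_toy(np.asarray(y), xhat))

x = np.array([0.5, 0.0, 0.0])
print(gamma_toy(x, x))                         # equals rho(x) for this toy psi, up to normalisation
print(gamma_toy(x, x + np.array([0.1, 0, 0])))  # an off-diagonal value gamma(x, y)
```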
**Notation.** As mentioned earlier, we use standard notation whereby \(\mathbf{x}=(x_{1},\ldots,x_{N})\in\mathbb{R}^{3N}\), \(x_{j}\in\mathbb{R}^{3}\), j=1,..., N, and where \(N\) is the number of electrons. In addition, define for \(1\leq j,k\leq N\), \(j\neq k\), \[\mathbf{\hat{x}}_{j} =(x_{1},\ldots,x_{j-1},x_{j+1},\ldots,x_{N}) \tag{1.12}\] \[\mathbf{\hat{x}}_{jk} =(x_{1},\ldots,x_{j-1},x_{j+1},\ldots,x_{k-1},x_{k+1},\ldots,x_{N}) \tag{1.11}\] with obvious modifications if either \(j,k\) equals \(1\) or \(N\), and if \(k<j\). We define \(\mathbf{\hat{x}}=\mathbf{\hat{x}}_{1}\), which will be used throughout. Variables placed before \(\mathbf{\hat{x}}_{j}\) and \(\mathbf{\hat{x}}_{jk}\) will be placed in the removed slots as follows, for any \(x,y\in\mathbb{R}^{3}\) we have \[(x,\mathbf{\hat{x}}_{j}) =(x_{1},\ldots,x_{j-1},x,x_{j+1},\ldots,x_{N}), \tag{1.14}\] \[(x,y,\mathbf{\hat{x}}_{jk}) =(x_{1},\ldots,x_{j-1},x,x_{j+1},\ldots,x_{k-1},y,x_{k+1},\ldots,x_{N}). \tag{1.13}\] In this way, \(\mathbf{x}=(x_{j},\mathbf{\hat{x}}_{j})=(x_{j},x_{k},\mathbf{\hat{x}}_{jk})\). We define a _cluster_ to be any subset \(P\subset\{1,\ldots,N\}\). Denote \(P^{c}=\{1,\ldots,N\}\backslash P\), \(P^{*}=P\backslash\{1\}\). We will also need _cluster sets_, \(\mathbf{P}=(P_{1},\ldots,P_{M})\), where \(M\geq 1\) and \(P_{1},\ldots,P_{M}\) are clusters. First-order _cluster derivatives_ are defined, for a non-empty cluster \(P\), by \[D_{P}^{\alpha}=\sum_{j\in P}\partial_{x_{j}}^{\alpha}\quad\text{for $\alpha\in \mathbb{N}_{0}^{3}$, $|\alpha|=1$.} \tag{1.15}\] For \(P=\emptyset\), \(D_{P}^{\alpha}\) is defined as the identity. Higher order cluster derivatives, for \(\alpha=(\alpha^{\prime},\alpha^{\prime\prime},\alpha^{\prime\prime\prime}) \in\mathbb{N}_{0}^{3}\) with \(|\alpha|\geq 2\), are defined by successive application of first-order cluster derivatives as follows, \[D_{P}^{\alpha}=(D_{P}^{e_{1}})^{\alpha^{\prime}}(D_{P}^{e_{2}})^{\alpha^{ \prime\prime}}(D_{P}^{e_{3}})^{\alpha^{\prime\prime\prime}} \tag{1.16}\] where \(e_{1},e_{2},e_{3}\) are the standard unit basis vectors of \(\mathbb{R}^{3}\). Let \(\mathbf{P}=(P_{1},\ldots,P_{M})\) and \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{M})\), \(\alpha_{j}\in\mathbb{N}_{0}^{3}\), \(1\leq j\leq M\), then we define the _multicluster derivative_ (often simply referred to as cluster derivative) by \[D_{\mathbf{P}}^{\boldsymbol{\alpha}}=D_{P_{1}}^{\alpha_{1}}\ldots D_{P_{M}}^{ \alpha_{M}}. \tag{1.17}\] It can readily be seen that cluster derivatives obey the Leibniz rule. Throughout, the letter \(C\) refers to a positive constant whose value is unimportant but may depend on \(Z\), \(N\) and the eigenvalue \(E\). **Distance function notation and elementary results.** For non-empty cluster \(P\), define \[\Sigma_{P}=\Big{\{}\mathbf{x}\in\mathbb{R}^{3N}:\prod_{j\in P}|x_{j}|\prod_{ \begin{subarray}{c}l\in P\\ m\in P^{c}\end{subarray}}|x_{l}-x_{m}|=0\Big{\}}. \tag{1.18}\] For \(P=\emptyset\) we set \(\Sigma_{P}:=\emptyset\). Denote \(\Sigma_{P}^{c}=\mathbb{R}^{3N}\backslash\Sigma_{P}\). For each \(P\) we have \(\Sigma_{P}\subset\Sigma\) where \[\Sigma=\Big{\{}\mathbf{x}\in\mathbb{R}^{3N}:\prod_{1\leq j\leq N}|x_{j}|\prod_ {1\leq l<m\leq N}|x_{l}-x_{m}|=0\Big{\}} \tag{1.19}\] is the set of singularities of the Coulomb potential \(V\), (1.2). 
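As a small illustration of the cluster-derivative notation (1.15)-(1.17), the following sympy sketch (for \(N=2\); the function and variable names are ours) applies a first-order cluster derivative to the inter-electron distance and shows that it is annihilated when both indices lie in the cluster, the translation-invariance property exploited repeatedly below.

```python
import sympy as sp

# Coordinates of N = 2 electrons in R^3.
x1 = sp.symbols('x1_1 x1_2 x1_3')
x2 = sp.symbols('x2_1 x2_2 x2_3')
coords = {1: x1, 2: x2}

def cluster_derivative(expr, cluster, component):
    """First-order cluster derivative D_P^{e_component}: the sum of the partial
    derivatives with respect to x_j (fixed component) over all j in the cluster P."""
    return sp.simplify(sum(sp.diff(expr, coords[j][component]) for j in cluster))

r12 = sp.sqrt(sum((a - b)**2 for a, b in zip(x1, x2)))   # |x_1 - x_2|

print(cluster_derivative(r12, {1, 2}, 0))   # 0: the pair term is annihilated
print(cluster_derivative(r12, {1}, 0))      # (x1_1 - x2_1)/|x_1 - x_2|: the term survives
```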
For any cluster \(P\) we can define the following two distances \[d_{P}(\mathbf{x}):=\operatorname{dist}(\mathbf{x},\Sigma_{P})=\min\big{\{}|x_{j}|,\,2^{-1/2}|x_{j}-x_{k}|:j\in P,k\in P^{c}\big{\}} \tag{1.20}\] \[\lambda_{P}(\mathbf{x}):=\min\{1,\,d_{P}(\mathbf{x})\} \tag{1.21}\] for all \(\mathbf{x}\in\mathbb{R}^{3N}\). Using the formula (1.20), see for example [5, Lemma 4.2], it can be shown that both \(d_{P}\) and \(\lambda_{P}\) are Lipschitz and obey \[|d_{P}(\mathbf{x})-d_{P}(\mathbf{y})|,|\lambda_{P}(\mathbf{x})-\lambda_{P}(\mathbf{y})|\leq|\mathbf{x}-\mathbf{y}| \tag{1.22}\] for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{3N}\). Let \(\mathbf{P}=(P_{1},\ldots,P_{M})\) be a cluster set and \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{M})\in\mathbb{N}_{0}^{3M}\) be a multiindex. Define \[\Sigma_{\boldsymbol{\alpha}}=\bigcup_{j\,:\,\alpha_{j}\neq 0}\Sigma_{P_{j}} \tag{1.23}\] for non-zero \(\boldsymbol{\alpha}\) and when \(\boldsymbol{\alpha}=0\) we set \(\Sigma_{\boldsymbol{\alpha}}=\emptyset\). Denote \(\Sigma_{\boldsymbol{\alpha}}^{c}=\mathbb{R}^{3N}\backslash\Sigma_{\boldsymbol{\alpha}}\). For non-zero \(\boldsymbol{\alpha}\) we can also define the two distances \[d_{\boldsymbol{\alpha}}(\mathbf{x}) =\min\{d_{P_{j}}(\mathbf{x}):\alpha_{j}\neq 0,\,j=1,\ldots,M\}, \tag{1.24}\] \[\lambda_{\boldsymbol{\alpha}}(\mathbf{x}) =\min\{\lambda_{P_{j}}(\mathbf{x}):\alpha_{j}\neq 0,\,j=1,\ldots,M\} \tag{1.25}\] for all \(\mathbf{x}\in\mathbb{R}^{3N}\). Notice we also have the identity \[d_{\boldsymbol{\alpha}}(\mathbf{x})=\operatorname{dist}(\mathbf{x},\Sigma_{\boldsymbol{\alpha}}).\] Indeed, since \(\Sigma_{P_{j}}\subset\Sigma_{\boldsymbol{\alpha}}\) whenever \(j\) is such that \(\alpha_{j}\neq 0\), we have \(\operatorname{dist}(\mathbf{x},\Sigma_{\boldsymbol{\alpha}})\leq d_{P_{j}}(\mathbf{x})\) and hence \(\operatorname{dist}(\mathbf{x},\Sigma_{\boldsymbol{\alpha}})\leq d_{\boldsymbol{\alpha}}(\mathbf{x})\). Conversely, for each \(\boldsymbol{\xi}\in\Sigma_{\boldsymbol{\alpha}}\) we have \(\boldsymbol{\xi}\in\Sigma_{P_{j}}\) for some \(j\) with \(\alpha_{j}\neq 0\), and hence \(|\mathbf{x}-\boldsymbol{\xi}|\geq d_{P_{j}}(\mathbf{x})\). Therefore, \(|\mathbf{x}-\boldsymbol{\xi}|\geq d_{\boldsymbol{\alpha}}(\mathbf{x})\). Taking the infimum over \(\boldsymbol{\xi}\in\Sigma_{\boldsymbol{\alpha}}\) we obtain the reverse inequality. We will also use the following related quantities involving maxima of the relevant distances for non-zero \(\boldsymbol{\alpha}\), \[q_{\boldsymbol{\alpha}}(\mathbf{x}) =\max\{d_{P_{j}}(\mathbf{x}):\alpha_{j}\neq 0,\,j=1,\ldots,M\} \tag{1.26}\] \[\mu_{\boldsymbol{\alpha}}(\mathbf{x}) =\max\{\lambda_{P_{j}}(\mathbf{x}):\alpha_{j}\neq 0,\,j=1,\ldots,M\}. \tag{1.27}\] Finally, for \(\boldsymbol{\alpha}=0\) we set \(d_{\boldsymbol{\alpha}},q_{\boldsymbol{\alpha}}\equiv 0\) and \(\lambda_{\boldsymbol{\alpha}},\mu_{\boldsymbol{\alpha}}\equiv 1\). In order to state the results we first define the following for an arbitrary function \(u\) and any \(r>0\), \[f_{\infty}(\mathbf{x};r;u):=\left\|\nabla u\right\|_{L^{\infty}(B(\mathbf{x},r))}+\left\|u\right\|_{L^{\infty}(B(\mathbf{x},r))} \tag{1.28}\] for \(\mathbf{x}\in\mathbb{R}^{3N}\). The ball \(B(\mathbf{x},r)\) is considered in \(\mathbb{R}^{3N}\). Largely, this notation will be used for \(u=\psi\) and in this case we have the notation, \[f_{\infty}(\mathbf{x};r):=f_{\infty}(\mathbf{x};r;\psi).
\tag{1.29}\] ### A pointwise cluster derivative bound To prove Theorem 1.1 we will state and prove a new pointwise bound to cluster derivatives of eigenfunctions \(\psi\), which may itself be of independent interest. It will be shown by elliptic regularity that for all \(\boldsymbol{\alpha}\) the weak cluster derivatives \(D_{\mathbf{P}}^{\boldsymbol{\alpha}}\psi\) exist in the set \(\Sigma_{\boldsymbol{\alpha}}^{c}\). It is therefore interesting to consider how such cluster derivatives behave as the set \(\Sigma_{\boldsymbol{\alpha}}\) is approached. Previously, S. Fournais and T. O. Sorensen have given bounds to local \(L^{p}\)-norms of cluster derivatives of \(\psi\) for a single cluster \(P\). Indeed, in [8, Proposition 1.10] it is shown that for any multiindex \(0\neq\alpha\in\mathbb{N}_{0}^{3}\), \(p\in(1,\infty]\) and any \(0<r<R<1\) there exists \(C\), depending on \(r,R,p\) and \(\alpha\), such that \[\left\|D_{P}^{\alpha}\psi\right\|_{L^{p}(B(\mathbf{x},r\lambda_{P}(\mathbf{x}) ))}\leq C\lambda_{P}(\mathbf{x})^{1-|\alpha|}\big{(}\left\|\nabla\psi\right\|_{ L^{p}(B(\mathbf{x},R\lambda_{P}(\mathbf{x})))}+\left\|\psi\right\|_{L^{p}(B( \mathbf{x},R\lambda_{P}(\mathbf{x})))}\big{)} \tag{1.30}\] for all \(\mathbf{x}\in\Sigma_{P}^{c}\). Notice that for every \(\mathbf{x}\in\Sigma_{P}^{c}\), we have \(B(\mathbf{x},r\lambda_{P}(\mathbf{x}))\subset\Sigma_{P}^{c}\) by the definition of \(\lambda_{P}(\mathbf{x})\). Therefore, the set \(\Sigma_{P}\) is avoided when evaluating \(D_{P}^{\alpha}\psi\) in the \(L^{p}\)-norms. The objective of the following theorem is to extend the bounds (1.30) in the case of \(p=\infty\) and for cluster derivatives for cluster sets \(\mathbf{P}\). In particular, estimates are obtained which depend on the order of derivative for each of the respective clusters in \(\mathbf{P}\). In the following, \(\nabla\) denotes the gradient operator in \(\mathbb{R}^{3N}\). **Theorem 1.3**.: _For every cluster set \(\mathbf{P}=(P_{1},\ldots,P_{M})\), multiindex \(\boldsymbol{\alpha}\in\mathbb{N}_{0}^{3M}\) and any \(0<r<R<1\) there exists \(C\), depending on \(\boldsymbol{\alpha},r\) and \(R\), such that for \(k=0,1\),_ \[\big{\|}D_{\mathbf{P}}^{\boldsymbol{\alpha}}\nabla^{k}\psi\big{\|}_{L^{ \infty}(B(\mathbf{x},r\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\leq C \lambda_{\boldsymbol{\alpha}}(\mathbf{x})^{1-k}\lambda_{P_{1}}(\mathbf{x})^{- |\alpha_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\alpha_{M}|}f_{\infty}( \mathbf{x};R) \tag{1.31}\] _for all \(\mathbf{x}\in\Sigma_{\boldsymbol{\alpha}}^{c}\)._ _Furthermore, for each \(|\boldsymbol{\alpha}|\geq 1\) there exists a function \(G_{\mathbf{P}}^{\boldsymbol{\alpha}}:\Sigma_{\boldsymbol{\alpha}}^{c}\to \mathbb{C}^{3N}\) such that_ \[D_{\mathbf{P}}^{\boldsymbol{\alpha}}\nabla\psi=G_{\mathbf{P}}^{\boldsymbol{ \alpha}}+\psi D_{\mathbf{P}}^{\boldsymbol{\alpha}}\nabla F_{c} \tag{1.32}\] _and for every \(b\in[0,1)\) there exists \(C\), depending on \(\boldsymbol{\alpha},r,R\) and \(b\), such that_ \[\|G_{\mathbf{P}}^{\boldsymbol{\alpha}}\|_{L^{\infty}(B(\mathbf{x},r\lambda_{ \boldsymbol{\alpha}}(\mathbf{x})))}\,\leq C\mu_{\boldsymbol{\alpha}}(\mathbf{x })^{b}\lambda_{P_{1}}(\mathbf{x})^{-|\alpha_{1}|}\ldots\lambda_{P_{M}}( \mathbf{x})^{-|\alpha_{M}|}f_{\infty}(\mathbf{x};R) \tag{1.33}\] _for all \(\mathbf{x}\in\Sigma_{\boldsymbol{\alpha}}^{c}\)._ **Remark 1.4**.: 1. 
In the case of a single cluster and \(k=0\), the bound (1.31) reestablishes (1.30) in the case of \(p=\infty\), albeit with a slightly larger radius in the \(L^{\infty}\)-norms on the right-hand side. 2. When \(k=0\) and \(\boldsymbol{\alpha}\neq 0\), the presence of a single power of \(\lambda_{\boldsymbol{\alpha}}(\mathbf{x})\) in the bound will cancel a single negative power of \(\lambda_{P_{j}}(\mathbf{x})\) for \(j\) such that \(\lambda_{P_{j}}(\mathbf{x})\leq\lambda_{P_{i}}(\mathbf{x})\) for all \(i=1,\ldots,M\) with \(\alpha_{i}\neq 0\). Notice that the appropriate \(j\) will depend on \(\mathbf{x}\). 3. The bound in (1.33) is stronger than that of (1.31) with \(k=1\). This is because a positive power of \(\mu_{\boldsymbol{\alpha}}(\mathbf{x})\) will partially cancel a single negative power of \(\lambda_{P_{j}}(\mathbf{x})\) for \(j\) such that \(\lambda_{P_{j}}(\mathbf{x})\geq\lambda_{P_{i}}(\mathbf{x})\) for all \(i=1,\ldots,M\) with \(\alpha_{i}\neq 0\). The proof of Theorem 1.3 will follow a similar strategy to that of [8, Proposition 1.10]. An additional result will be required to prove (1.33). This result, [9], shows that \(\psi\) can be made \(C^{1,1}(\mathbb{R}^{3N})\) upon multiplication by a factor, universal in the sense that the factor depends only on \(N\) and \(Z\). We will require an elliptic regularity result, stated below, which will be used in the proofs. Beforehand, we clarify the precise form of the definitions which we will be using. Let \(\Omega\) be open, \(\theta\in(0,1]\) and \(k\in\mathbb{N}_{0}\). We formally define the \(\theta\)-Holder seminorms for a function \(f\) by \[[f]_{\theta,\Omega}=\sup_{\begin{subarray}{c}x,y\in\Omega\\ x\neq y\end{subarray}}\frac{|f(x)-f(y)|}{|x-y|^{\theta}},\] \[[\nabla^{k}f]_{\theta,\Omega}=\sup_{|\alpha|=k}[\partial^{\alpha}f]_{\theta,\Omega}.\] The space \(C^{k,\theta}(\Omega)\) is defined as all \(f\in C^{k}(\Omega)\) where \([\nabla^{k}f]_{\theta,\Omega^{\prime}}\) is finite for each \(\Omega^{\prime}\) compactly contained in \(\Omega\). In addition, the space \(C^{k,\theta}(\overline{\Omega})\) is defined as all \(f\in C^{k}(\overline{\Omega})\) where \([\nabla^{k}f]_{\theta,\Omega}\) is finite. This space has a norm given by \[\|f\|_{C^{k,\theta}(\overline{\Omega})}=\|f\|_{C^{k}(\overline{\Omega})}+[\nabla^{k}f]_{\theta,\Omega}.\] For open \(\Omega\subset\mathbb{R}^{n}\) we can consider the following elliptic equation, \[Lu:=-\Delta u+\mathbf{c}\cdot\nabla u+du=g \tag{1.34}\] for some \(\mathbf{c}:\Omega\to\mathbb{C}^{n}\) and \(d,g:\Omega\to\mathbb{C}\). The corresponding bilinear form for the operator \(L\) is defined formally as \[\mathcal{L}(u,\chi)=\int_{\Omega}\left(\nabla u\cdot\nabla\chi+(\mathbf{c}\cdot\nabla u)\chi+du\chi\right)dx\] for all \(u\in H^{1}_{loc}(\Omega)\) and \(\chi\in C^{\infty}_{c}(\Omega)\). We say that a function \(u\in H^{1}_{loc}(\Omega)\) is a _weak solution_ to the equation (1.34) in \(\Omega\) if \(\mathcal{L}(u,\chi)=\int_{\Omega}g\chi\,dx\) for every \(\chi\in C^{\infty}_{c}(\Omega)\). The following theorem is a restatement of [5, Proposition 3.1] ([8, Proposition A.2] is similar), with additional Holder regularity which follows from the proof.
**Theorem 1.5**.: _Let \(x_{0}\in\mathbb{R}^{n}\), \(R>0\) and \(\mathbf{c},d,g\in L^{\infty}(B(x_{0},R))\), and let \(u\in H^{1}(B(x_{0},R))\) be a weak solution to (1.34). Then for each \(\theta\in[0,1)\) we have \(u\in C^{1,\theta}(B(x_{0},R))\cap H^{2}_{loc}(B(x_{0},R))\), and for any \(r\in(0,R)\) we have_ \[\|u\|_{C^{1,\theta}(\overline{B(x_{0},r)})}\leq C(\|u\|_{L^{2}(B(x_{0},R))}+\|g\|_{L^{\infty}(B(x_{0},R))}) \tag{1.35}\] _for \(C=C(n,K,r,R,\theta)\) where_ \[\|\mathbf{c}\|_{L^{\infty}(B(x_{0},R))}+\|d\|_{L^{\infty}(B(x_{0},R))}\leq K.\]

## 2. Proof of Theorem 1.3

Our strategy for the proof will be to choose a suitable function \(F=F(\mathbf{x})\), dependent only on \(N\) and \(Z\), such that the function \(e^{-F}\psi\) has greater regularity than \(\psi\) itself. Such a multiplicative factor is frequently called a _Jastrow factor_ in the mathematical literature, and this strategy has been used successfully in, for example, [10], [11] to elucidate regularity properties of \(\psi\). The function \(e^{-F}\psi\) will solve an elliptic equation with bounded coefficients which behave suitably well under the action of cluster derivatives. Elliptic regularity will then produce bounds to the cluster derivatives of \(e^{-F}\psi\). Such bounds can then be used to obtain bounds to the cluster derivatives of \(\psi\) itself.

### Jastrow factors

We begin by defining the function \[F(\mathbf{x})=F_{c}(\mathbf{x})-F_{s}(\mathbf{x}), \tag{2.1}\] for \(\mathbf{x}\in\mathbb{R}^{3N}\), where \[F_{c}(\mathbf{x}) =-\frac{Z}{2}\sum_{1\leq j\leq N}|x_{j}|+\frac{1}{4}\sum_{1\leq l<k\leq N}|x_{l}-x_{k}|, \tag{2.2}\] \[F_{s}(\mathbf{x}) =-\frac{Z}{2}\sum_{1\leq j\leq N}\sqrt{|x_{j}|^{2}+1}+\frac{1}{4}\sum_{1\leq l<k\leq N}\sqrt{|x_{l}-x_{k}|^{2}+1}. \tag{2.3}\] The function \(F\) was used in [8]. We now detail some basic facts regarding these functions. Firstly, \[\Delta F_{c}=V \tag{2.4}\] where \(V\) is the Coulomb potential, (1.2). The function \(F_{s}\) has the same behaviour as \(F_{c}\) at infinity. However, \(F_{s}\in C^{\infty}(\mathbb{R}^{3N})\) and \[\partial^{\alpha}F_{s}\in L^{\infty}(\mathbb{R}^{3N})\quad\text{for all}\;\;\alpha\in\mathbb{N}_{0}^{3N},|\alpha|\geq 1. \tag{2.5}\] Finally, the function \(F\) obeys \[F,\nabla F\in L^{\infty}(\mathbb{R}^{3N}). \tag{2.6}\] The function \(F\) is used to define the following object, which will be used throughout the proof: \[\phi=e^{-F}\psi. \tag{2.7}\] Using (1.3), the following elliptic equation can be shown to hold, with \(\phi\) a weak solution, \[-\Delta\phi-2\nabla F\cdot\nabla\phi+(\Delta F_{s}-|\nabla F|^{2}-E)\phi=0. \tag{2.8}\] Since all coefficients are bounded in \(\mathbb{R}^{3N}\) we can see from Theorem 1.5 that \(\phi\in C^{1,\theta}(\mathbb{R}^{3N})\) for each \(\theta\in[0,1)\). Furthermore, by the same theorem, for any pair \(0<r<R\) we can find constants \(C,C^{\prime}\), dependent only on \(N,Z,E,r,R\) and \(\theta\), such that \[\left\|\phi\right\|_{C^{1,\theta}(\overline{B(\mathbf{x},r)})}\leq C\left\|\phi\right\|_{L^{2}(B(\mathbf{x},R))}\leq C^{\prime}\left\|\phi\right\|_{L^{\infty}(B(\mathbf{x},R))} \tag{2.9}\] for all \(\mathbf{x}\in\mathbb{R}^{3N}\).
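The identity (2.4), \(\Delta F_{c}=V\), reduces to two elementary three-dimensional facts, \(\Delta|x|=2/|x|\) and \((\Delta_{x}+\Delta_{y})|x-y|=4/|x-y|\), which combine with the prefactors \(-Z/2\) and \(1/4\) in (2.2). A quick symbolic check of these facts (our own sketch, not part of the proof):

```python
import sympy as sp

x = sp.symbols('a1 a2 a3')
y = sp.symbols('b1 b2 b3')

def laplacian(f, variables):
    """Sum of second derivatives with respect to the given variables."""
    return sum(sp.diff(f, v, 2) for v in variables)

r = sp.sqrt(sum(c**2 for c in x))                     # |x|
s = sp.sqrt(sum((c - d)**2 for c, d in zip(x, y)))    # |x - y|

print(sp.simplify(laplacian(r, x)))                   # 2/|x|
print(sp.simplify(laplacian(s, x) + laplacian(s, y))) # 4/|x - y|
```

Multiplying these by \(-Z/2\) and \(1/4\), respectively, recovers the attraction and repulsion terms of (1.2).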
From the definition of \(\phi\), (2.7), along with (2.6) there exists new constants \(C<C^{\prime}\), dependent only on \(Z\) and \(N\) (hence independent of \(R>0\)), such that \[C\left\|\phi\right\|_{L^{\infty}(B(\mathbf{x},R))}\leq\left\|\psi\right\|_{L^ {\infty}(B(\mathbf{x},R))}\leq C^{\prime}\left\|\phi\right\|_{L^{\infty}(B( \mathbf{x},R))} \tag{2.10}\] for all \(\mathbf{x}\in\mathbb{R}^{3N}\). ### Derivatives of F Informally, our objective is to take cluster derivatives of the elliptic equation (2.8) and apply elliptic regularity. To do so, we require bounds to the cluster derivatives of the coefficients present in this equation. This is the objective of the current section. To begin, we state and prove the following preparatory lemma involving the distances introduced in (1.21), (1.25) and (1.27). **Lemma 2.1**.: _For any \(\boldsymbol{\sigma}=(\sigma_{1},\ldots,\sigma_{M})\in\mathbb{N}_{0}^{3M}\) we have for \(k=0,1\),_ \[\mu_{\boldsymbol{\sigma}}(\mathbf{y})^{k-|\boldsymbol{\sigma}|}\leq\lambda_{ \boldsymbol{\sigma}}(\mathbf{y})^{k}\lambda_{P_{1}}(\mathbf{y})^{-|\sigma_{1} |}\ldots\lambda_{P_{M}}(\mathbf{y})^{-|\sigma_{M}|} \tag{2.11}\] _for all \(\mathbf{y}\in\mathbb{R}^{3N}\). Furthermore, let \(\boldsymbol{\beta}^{(1)},\ldots,\boldsymbol{\beta}^{(n)}\) be an arbitrary collection of multiindices in \(\mathbb{N}_{0}^{3M}\) such that \(\boldsymbol{\beta}^{(1)}+\cdots+\boldsymbol{\beta}^{(n)}=\boldsymbol{\sigma}\) then_ \[\prod_{j=1}^{n}\lambda_{\boldsymbol{\beta}^{(j)}}(\mathbf{y})\leq\lambda_{ \boldsymbol{\sigma}}(\mathbf{y}) \tag{2.12}\] _for all \(\mathbf{y}\in\mathbb{R}^{3N}\)._ Proof.: The results are trivial in the case of \(\boldsymbol{\sigma}=0\) therefore we assume in the following that \(\boldsymbol{\sigma}\) is non-zero. First, observe for all \(j=1,\ldots,M\), \[\mu_{\boldsymbol{\sigma}}(\mathbf{y})^{-|\sigma_{j}|} \leq\lambda_{P_{j}}(\mathbf{y})^{-|\sigma_{j}|}\] \[\mu_{\boldsymbol{\sigma}}(\mathbf{y})^{1-|\sigma_{j}|} \leq\lambda_{P_{j}}(\mathbf{y})^{1-|\sigma_{j}|}\quad\text{ if }\sigma_{j}\neq 0\] by the definition of \(\mu_{\boldsymbol{\sigma}}\). Now perform the following trivial expansion of the product, \[\mu_{\boldsymbol{\sigma}}(\mathbf{y})^{-|\boldsymbol{\sigma}|} =\mu_{\boldsymbol{\sigma}}(\mathbf{y})^{-|\sigma_{1}|}\ldots\mu_{ \boldsymbol{\sigma}}(\mathbf{y})^{-|\sigma_{M}|}\] \[\leq\lambda_{P_{1}}(\mathbf{y})^{-|\sigma_{1}|}\ldots\lambda_{P_ {M}}(\mathbf{y})^{-|\sigma_{M}|}\] which proves (2.11) for \(k=0\). For \(k=1\), consider that for each \(\mathbf{y}\) we can find \(l=1,\ldots,M\) such that \(\lambda_{\boldsymbol{\sigma}}(\mathbf{y})=\lambda_{P_{l}}(\mathbf{y})\) and \(\sigma_{l}\neq 0\). Then, \[\mu_{\boldsymbol{\sigma}}(\mathbf{y})^{1-|\boldsymbol{\sigma}|} =\mu_{\boldsymbol{\sigma}}(\mathbf{y})^{-|\sigma_{1}|}\ldots\mu_{ \boldsymbol{\sigma}}(\mathbf{y})^{1-|\sigma_{l}|}\ldots\mu_{\boldsymbol{ \sigma}}(\mathbf{y})^{-|\sigma_{M}|}\] \[\leq\lambda_{P_{1}}(\mathbf{y})^{-|\sigma_{1}|}\ldots\lambda_{P_ {l}}(\mathbf{y})^{1-|\sigma_{l}|}\ldots\lambda_{P_{M}}(\mathbf{y})^{-|\sigma_ {M}|}\] \[=\lambda_{\boldsymbol{\sigma}}(\mathbf{y})\lambda_{P_{1}}(\mathbf{ y})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{y})^{-|\sigma_{M}|}\] as required. Finally we prove (2.12). As above, take arbitrary \(\mathbf{y}\) and a corresponding \(l\) such that \(\lambda_{\boldsymbol{\sigma}}(\mathbf{y})=\lambda_{P_{l}}(\mathbf{y})\) with \(\sigma_{l}\neq 0\). 
For each \(1\leq j\leq n\) we denote the \(\mathbb{N}_{0}^{3}\)-components of \(\boldsymbol{\beta}^{(j)}\) as \(\boldsymbol{\beta}^{(j)}=(\beta_{1}^{(j)},\ldots,\beta_{M}^{(j)})\). We know \(\boldsymbol{\beta}^{(1)}+\cdots+\boldsymbol{\beta}^{(n)}=\boldsymbol{\sigma}\) so in particular, \(\beta_{l}^{(1)}+\cdots+\beta_{l}^{(n)}=\sigma_{l}\). Since \(\sigma_{l}\neq 0\) there exists at least one \(1\leq r\leq n\) such that \(\beta_{l}^{(r)}\neq 0\). Hence by the definition of \(\lambda_{\boldsymbol{\beta}^{(r)}}\), \[\lambda_{\boldsymbol{\beta}^{(r)}}(\mathbf{y})=\min\{\lambda_{P_{s}}(\mathbf{ y}):\beta_{s}^{(r)}\neq 0,\,s=1,\ldots,M\}\leq\lambda_{P_{l}}(\mathbf{y})= \lambda_{\boldsymbol{\sigma}}(\mathbf{y}).\] The remaining factors \(\lambda_{\boldsymbol{\beta}^{(j)}}(\mathbf{y})\), for \(j\neq r\), can each be bounded above by one. The following lemma will be useful in proving results about taking cluster derivatives of \(F\), as defined in (2.1). Later, we will apply it using \(f\) as the function \(|x|\) for \(x\in\mathbb{R}^{3}\), or derivatives thereof. **Lemma 2.2**.: _Let \(f\in C^{\infty}(\mathbb{R}^{3}\backslash\{0\})\) and \(k\in\mathbb{N}_{0}\) be such that for each \(\sigma\in\mathbb{N}_{0}^{3}\) there exists \(C\) such that_ \[|\partial^{\sigma}f(x)|\leq C|x|^{k-|\sigma|}\text{ for all }x\neq 0. \tag{2.13}\] _Then for any \(\boldsymbol{\alpha}\neq 0\) with \(|\boldsymbol{\alpha}|\geq k\) there exists some new \(C\) such that for any \(l,m=1,\ldots,N,\) the weak derivatives \(D^{\boldsymbol{\alpha}}_{\mathbf{P}}(f(x_{l}))\) and \(D^{\boldsymbol{\alpha}}_{\mathbf{P}}(f(x_{l}-x_{m}))\) are both smooth in \(\Sigma^{c}_{\boldsymbol{\alpha}}\) and obey_ \[|D^{\boldsymbol{\alpha}}_{\mathbf{P}}(f(x_{l}))|,\,|D^{\boldsymbol{\alpha}}_{ \mathbf{P}}(f(x_{l}-x_{m}))|\leq Cq_{\boldsymbol{\alpha}}(\mathbf{x})^{k-| \boldsymbol{\alpha}|}\] _for all \(\mathbf{x}\in\Sigma^{c}_{\boldsymbol{\alpha}}\)._ Proof.: Take any \(j=1,\ldots,M\) with \(\alpha_{j}\neq 0\), then we have \(D^{\alpha_{j}}_{P_{j}}(f(x_{l}))\equiv 0\) for each \(l\in P^{c}_{j}\). Therefore, for \(D^{\boldsymbol{\alpha}}_{\mathbf{P}}(f(x_{l}))\) to not be identically zero we require that \(l\in P_{j}\) for each \(j\) with \(\alpha_{j}\neq 0\). For such \(l\) we have \(x_{l}\neq 0\) since \(\mathbf{x}\in\Sigma^{c}_{\boldsymbol{\alpha}}\), and \[|x_{l}|\geq d_{P_{j}}(\mathbf{x})\] for each \(j\) with \(\alpha_{j}\neq 0\) by (1.20). Therefore, for constant \(C\) in (2.13), we have \[|D^{\boldsymbol{\alpha}}_{\mathbf{P}}(f(x_{l}))|=|\partial^{\alpha_{1}+\cdots +\alpha_{M}}f(x_{l})|\leq C|x_{l}|^{k-|\boldsymbol{\alpha}|}\leq Cq_{ \boldsymbol{\alpha}}(\mathbf{x})^{k-|\boldsymbol{\alpha}|}\] because \(|\boldsymbol{\alpha}|\geq k\). Similarly, for each \(j=1,\ldots,M\), with \(\alpha_{j}\neq 0\) we have \(D^{\alpha_{j}}_{P_{j}}(f(x_{l}-x_{m}))\equiv 0\) if either \(l,m\in P_{j}\) or \(l,m\in P^{c}_{j}\). Therefore, for \(D^{\boldsymbol{\alpha}}_{\mathbf{P}}(f(x_{l}-x_{m}))\) to not be identically zero we require that \[(l,m)\in\bigcap_{j\,:\,\alpha_{j}\neq 0}\big{(}(P_{j}\times P^{c}_{j})\cup(P^{c }_{j}\times P_{j})\big{)}.\] For such \((l,m)\) we have \(x_{l}\neq x_{m}\) since \(\mathbf{x}\in\Sigma^{c}_{\boldsymbol{\alpha}}\) and \[|x_{l}-x_{m}|\geq\sqrt{2}\,d_{P_{j}}(\mathbf{x})\] for each \(j\) with \(\alpha_{j}\neq 0\) by (1.20). 
Therefore, for some constant \(C^{\prime}\), \[|D^{\boldsymbol{\alpha}}_{\mathbf{P}}(f(x_{l}-x_{m}))|=|\partial^{\alpha_{1}+ \cdots+\alpha_{M}}f(x_{l}-x_{m})|\leq C|x_{l}-x_{m}|^{k-|\boldsymbol{\alpha}| }\leq C^{\prime}q_{\boldsymbol{\alpha}}(\mathbf{x})^{k-|\boldsymbol{\alpha}|}\] because \(|\boldsymbol{\alpha}|\geq k\). The following lemma provides pointwise bounds to cluster derivatives of functions involving \(F\). **Lemma 2.3**.: _For any cluster set \(\mathbf{P}\) and any \(|\boldsymbol{\sigma}|\geq 1\) there exists \(C\), which depends on \(\boldsymbol{\sigma}\), such that for \(k=0,1\),_ \[\big{|}D^{\boldsymbol{\sigma}}_{\mathbf{P}}\nabla^{k}F(\mathbf{y})\big{|},\, \big{|}D^{\boldsymbol{\sigma}}_{\mathbf{P}}\nabla^{k}(e^{F})(\mathbf{y})\big{|} \leq C\lambda_{\boldsymbol{\sigma}}(\mathbf{y})^{1-k}\lambda_{P_{1}}(\mathbf{y })^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{y})^{-|\sigma_{M}|} \tag{2.14}\] \[\big{|}D^{\boldsymbol{\sigma}}_{\mathbf{P}}|\nabla F(\mathbf{y})|^{2}\big{|} \leq C\lambda_{P_{1}}(\mathbf{y})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{ y})^{-|\sigma_{M}|} \tag{2.15}\] _for all \(\mathbf{y}\in\Sigma^{c}_{\boldsymbol{\sigma}}\). The bound to the first object in (2.14) also holds when \(F\) is replaced by \(F_{c}\)._ Proof.: Let \(\tau\) be the function defined as \(\tau(x)=|x|\) for \(x\in\mathbb{R}^{3}\). Then, by definition (2.2) we can write \[F_{c}(\mathbf{y})=-\frac{Z}{2}\sum_{1\leq j\leq N}\tau(y_{j})+\frac{1}{4}\sum_{ 1\leq l<m\leq N}\tau(y_{l}-y_{m}). \tag{2.16}\] For each \(r=1,\ldots,N\), denote by \(\nabla_{y_{r}}\) the three-dimensional gradient in the variable \(y_{r}\). We then have \[\nabla_{y_{r}}F_{c}(\mathbf{y})=-\frac{Z}{2}\nabla\tau(y_{r})+\frac{1}{4}\sum _{\begin{subarray}{c}m=1\\ m\neq r\end{subarray}}^{N}\nabla\tau(y_{r}-y_{m}). \tag{2.17}\] We apply the \(D^{\boldsymbol{\sigma}}_{\mathbf{P}}\)-derivative to each of (2.16) and (2.17), using Lemma 2.2 with \(f=\tau\) and \(f=\nabla\tau\) respectively. This shows that for \(k=0,1\) the functions \(D^{\boldsymbol{\sigma}}_{\mathbf{P}}\nabla^{k}F_{c}\) exist and are smooth on the set \(\Sigma^{c}_{\boldsymbol{\sigma}}\). Furthermore, there exists \(C\), depending on \(\boldsymbol{\sigma}\), such that \[|D^{\boldsymbol{\sigma}}_{\mathbf{P}}\nabla^{k}F_{c}(\mathbf{y})|\leq Cq_{ \boldsymbol{\sigma}}(\mathbf{y})^{1-k-|\boldsymbol{\sigma}|}\leq C\mu_{ \boldsymbol{\sigma}}(\mathbf{y})^{1-k-|\boldsymbol{\sigma}|} \tag{2.18}\] for all \(\mathbf{y}\in\Sigma^{c}_{\boldsymbol{\sigma}}\). In the second inequality we used that \(\mu_{\boldsymbol{\sigma}}(\mathbf{y})\leq q_{\boldsymbol{\sigma}}(\mathbf{y})\). Recall that \(\nabla F_{s}\) is smooth with all derivatives bounded, see (2.5). Since \(F=F_{c}-F_{s}\) and \(\mu_{\boldsymbol{\sigma}}\leq 1\), we have that for each \(\boldsymbol{\sigma}\neq 0\) and \(k=0,1\), there exists \(C\), depending on \(\boldsymbol{\sigma}\), such that \[|D^{\boldsymbol{\sigma}}_{\mathbf{P}}\nabla^{k}F(\mathbf{y})|\leq C\mu_{ \boldsymbol{\sigma}}(\mathbf{y})^{1-k-|\boldsymbol{\sigma}|} \tag{2.19}\] for all \(\mathbf{y}\in\Sigma^{c}_{\boldsymbol{\sigma}}\). By a straightforward application of Lemma 2.1, this proves the first bound in (2.14), for \(k=0,1\). We proceed to prove the second bound in (2.14). For every \(\boldsymbol{\eta}\leq\boldsymbol{\sigma}\) we have that \(\Sigma_{\boldsymbol{\eta}}\subset\Sigma_{\boldsymbol{\sigma}}\) and therefore \(D^{\boldsymbol{\eta}}_{\mathbf{P}}\nabla^{k}F\) exists and is smooth in \(\Sigma^{c}_{\boldsymbol{\sigma}}\), for \(k=0,1\). 
We use the chain rule for weak derivatives to show that \(D^{\boldsymbol{\sigma}}_{\mathbf{P}}(e^{F})\) exists in \(\Sigma^{c}_{\boldsymbol{\sigma}}\) and is equal to a sum of terms, each of the form \[e^{F}\prod_{1\leq j\leq n}D^{\boldsymbol{\beta}^{(j)}}_{\mathbf{P}}F \tag{2.20}\] for some \(1\leq n\leq|\boldsymbol{\sigma}|\) and some collection \(0\neq\boldsymbol{\beta}^{(j)}\in\mathbb{N}_{0}^{3M}\) for \(j=1,\ldots,n\), where \(\boldsymbol{\beta}^{(1)}+\cdots+\boldsymbol{\beta}^{(n)}=\boldsymbol{\sigma}\). For each \(j=1,\ldots,n\), we write \(\boldsymbol{\beta}^{(j)}=(\beta^{(j)}_{1},\ldots,\beta^{(j)}_{M})\) to denote the \(\mathbb{N}_{0}^{3}\)-components of \(\boldsymbol{\beta}^{(j)}\). Similarly, the weak derivative \(D^{\boldsymbol{\sigma}}_{\mathbf{P}}\nabla(e^{F})\) exists in \(\Sigma^{c}_{\boldsymbol{\sigma}}\) and the gradient of the general term (2.20) is equal to \[e^{F}\,\nabla F\prod_{1\leq j\leq n}D^{\boldsymbol{\beta}^{(j)}}_{\mathbf{P}}F +e^{F}\sum_{r=1}^{n}\Big{(}D^{\boldsymbol{\beta}^{(r)}}_{\mathbf{P}}\nabla F \prod_{\begin{subarray}{c}1\leq j\leq n\\ j\neq r\end{subarray}}D^{\boldsymbol{\beta}^{(j)}}_{\mathbf{P}}F\Big{)}. \tag{2.21}\] We will now find bounds for the above expressions. To do this we will use (2.19) to bound cluster derivatives of \(F\). This, along with (2.11) and (2.12) of Lemma 2.1, allows us to bound the following product \[\Big{|}\prod_{j=1}^{n}D_{\mathbf{P}}^{\boldsymbol{\beta}^{(j)}}F( \mathbf{y})\Big{|} \leq C\prod_{j=1}^{n}\mu_{\boldsymbol{\beta}^{(j)}}(\mathbf{y})^{1-| \boldsymbol{\beta}^{(j)}|}\] \[\leq C\prod_{j=1}^{n}\lambda_{\boldsymbol{\beta}^{(j)}}(\mathbf{y })\lambda_{P_{1}}(\mathbf{y})^{-|\beta_{1}^{(j)}|}\ldots\lambda_{P_{M}}( \mathbf{y})^{-|\beta_{M}^{(j)}|}\] \[=C\Big{(}\prod_{j=1}^{n}\lambda_{\boldsymbol{\beta}^{(j)}}( \mathbf{y})\Big{)}\Big{(}\prod_{l=1}^{M}\lambda_{P_{l}}(\mathbf{y})^{-|\beta_ {l}^{(1)}|}\ldots\lambda_{P_{l}}(\mathbf{y})^{-|\beta_{l}^{(n)}|}\Big{)}\] \[\leq C\lambda_{\boldsymbol{\sigma}}(\mathbf{y})\lambda_{P_{1}}( \mathbf{y})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{y})^{-|\sigma_{M}|}\] where \(C\) is some constant, depending on \(\boldsymbol{\sigma}\), and all \(\mathbf{y}\in\Sigma_{\boldsymbol{\sigma}}^{c}\). We used that for each \(l=1,\ldots,M\), we have \(|\beta_{l}^{(1)}|+\cdots+|\beta_{l}^{(n)}|=|\sigma_{l}|\) as a consequence of \(\boldsymbol{\beta}^{(1)}+\cdots+\boldsymbol{\beta}^{(n)}=\boldsymbol{\sigma}\). We now bound the following product in a similar manner using (2.19) and Lemma 2.1. For any \(r=1,\ldots,n\), \[\Big{|}D_{\mathbf{P}}^{\boldsymbol{\beta}^{(r)}}\nabla F(\mathbf{y})\prod_{ \begin{subarray}{c}1\leq j\leq n\\ j\neq r\end{subarray}}D_{\mathbf{P}}^{\boldsymbol{\beta}^{(j)}}F(\mathbf{y}) \Big{|}\leq C\lambda_{P_{1}}(\mathbf{y})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}( \mathbf{y})^{-|\sigma_{M}|}\] for some constant \(C\), depending on \(\boldsymbol{\sigma}\), and all \(\mathbf{y}\in\Sigma_{\boldsymbol{\sigma}}^{c}\). Using the fact that \(e^{F}\) and \(\nabla F\) are bounded in \(\mathbb{R}^{3N}\), it is now straightforward to bound (2.20) and (2.21) appropriately. This completes the proof of (2.14). Finally, we prove the inequality (2.15). 
It can be shown that the following Leibniz rule holds \[D_{\mathbf{P}}^{\boldsymbol{\sigma}}|\nabla F|^{2}=\sum_{\boldsymbol{\beta}\leq\boldsymbol{\sigma}}\binom{\boldsymbol{\sigma}}{\boldsymbol{\beta}}D_{\mathbf{P}}^{\boldsymbol{\beta}}\nabla F\cdot D_{\mathbf{P}}^{\boldsymbol{\sigma}-\boldsymbol{\beta}}\nabla F\] in \(\Sigma_{\boldsymbol{\sigma}}^{c}\). For every \(\boldsymbol{\beta}\leq\boldsymbol{\sigma}\) we can bound \(D_{\mathbf{P}}^{\boldsymbol{\beta}}\nabla F\) by a constant if \(\boldsymbol{\beta}=0\), and by (2.19) otherwise. This gives some constant \(C\), depending on \(\boldsymbol{\sigma}\), such that \[\big{|}D_{\mathbf{P}}^{\boldsymbol{\sigma}}|\nabla F(\mathbf{y})|^{2}\big{|}\leq C\sum_{\boldsymbol{\beta}\leq\boldsymbol{\sigma}}\mu_{\boldsymbol{\beta}}(\mathbf{y})^{-|\boldsymbol{\beta}|}\mu_{\boldsymbol{\sigma}-\boldsymbol{\beta}}(\mathbf{y})^{-|\boldsymbol{\sigma}|+|\boldsymbol{\beta}|}\] for all \(\mathbf{y}\in\Sigma_{\boldsymbol{\sigma}}^{c}\). The required bound is obtained after an application of (2.11) of Lemma 2.1. The following result extends the bounds given in Lemma 2.3 to give bounds to \(L^{\infty}\)-norms in balls. It is useful to note that if \(\mathbf{x}\in\Sigma_{\boldsymbol{\sigma}}^{c}\), for \(\boldsymbol{\sigma}\neq 0\), then \(B(\mathbf{x},\nu\lambda_{\boldsymbol{\sigma}}(\mathbf{x}))\subset\Sigma_{\boldsymbol{\sigma}}^{c}\) for all \(\nu\in(0,1)\). **Lemma 2.4**.: _For any \(|\boldsymbol{\sigma}|\geq 1\) and any \(\nu\in(0,1)\) there exists \(C\), depending on \(\boldsymbol{\sigma}\) and \(\nu\), such that for \(k=0,1\),_ \[\left\|D_{\mathbf{P}}^{\boldsymbol{\sigma}}\nabla^{k}F\right\|_{L^{\infty}(B(\mathbf{x},\nu\lambda_{\boldsymbol{\sigma}}(\mathbf{x})))},\,\left\|D_{\mathbf{P}}^{\boldsymbol{\sigma}}\nabla^{k}(e^{F})\right\|_{L^{\infty}(B(\mathbf{x},\nu\lambda_{\boldsymbol{\sigma}}(\mathbf{x})))}\\ \leq C\lambda_{\boldsymbol{\sigma}}(\mathbf{x})^{1-k}\lambda_{P_{1}}(\mathbf{x})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\sigma_{M}|} \tag{2.22}\] \[\left\|D_{\mathbf{P}}^{\boldsymbol{\sigma}}|\nabla F|^{2}\right\|_{L^{\infty}(B(\mathbf{x},\nu\lambda_{\boldsymbol{\sigma}}(\mathbf{x})))}\leq C\lambda_{P_{1}}(\mathbf{x})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\sigma_{M}|} \tag{2.23}\] _for all \(\mathbf{x}\in\Sigma_{\boldsymbol{\sigma}}^{c}\). The bound to the first norm in (2.22) also holds when \(F\) is replaced by \(F_{c}\)._ Proof.: Take some \(\mathbf{x}\in\Sigma_{\boldsymbol{\sigma}}^{c}\). By (1.22), for each \(\mathbf{y}\in B(\mathbf{x},\nu\lambda_{\boldsymbol{\sigma}}(\mathbf{x}))\) we have \[|\lambda_{P_{j}}(\mathbf{x})-\lambda_{P_{j}}(\mathbf{y})|\leq|\mathbf{x}-\mathbf{y}|\leq\nu\lambda_{\boldsymbol{\sigma}}(\mathbf{x})\leq\nu\lambda_{P_{j}}(\mathbf{x})\] for each \(j=1,\ldots,M\) with \(\sigma_{j}\neq 0\). The final inequality above uses the definition of \(\lambda_{\boldsymbol{\sigma}}\) in (1.25). By rearrangement, we have \[(1-\nu)\lambda_{P_{j}}(\mathbf{x})\leq\lambda_{P_{j}}(\mathbf{y}). \tag{2.24}\] Therefore, \[\lambda_{P_{1}}(\mathbf{y})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{y})^{-|\sigma_{M}|}\leq(1-\nu)^{-|\boldsymbol{\sigma}|}\lambda_{P_{1}}(\mathbf{x})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\sigma_{M}|} \tag{2.25}\] for all \(\mathbf{y}\in B(\mathbf{x},\nu\lambda_{\boldsymbol{\sigma}}(\mathbf{x}))\). We now prove an analogous inequality. 
First, we see that \(\lambda_{\boldsymbol{\sigma}}(\mathbf{x})=\lambda_{P_{l}}(\mathbf{x})\) for some \(1\leq l\leq M\) with \(\sigma_{l}\neq 0\) which will depend on the choice of \(\mathbf{x}\). We also note that \(\lambda_{\boldsymbol{\sigma}}(\mathbf{y})\leq\lambda_{P_{l}}(\mathbf{y})\) for all \(\mathbf{y}\in\mathbb{R}^{3N}\) which follows from \(\sigma_{l}\neq 0\) and the definition of \(\lambda_{\boldsymbol{\sigma}}\). Therefore, \[\lambda_{\boldsymbol{\sigma}}(\mathbf{y})\lambda_{P_{1}}(\mathbf{y})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{y})^{-|\sigma_{M}|} \leq\lambda_{P_{l}}(\mathbf{y})^{1-|\sigma_{l}|}\prod_{\begin{subarray}{c}j=1\\ j\neq l\end{subarray}}^{M}\lambda_{P_{j}}(\mathbf{y})^{-|\sigma_{j}|}\] \[\leq(1-\nu)^{1-|\boldsymbol{\sigma}|}\lambda_{P_{l}}(\mathbf{x})^{1-|\sigma_{l}|}\prod_{\begin{subarray}{c}j=1\\ j\neq l\end{subarray}}^{M}\lambda_{P_{j}}(\mathbf{x})^{-|\sigma_{j}|} \tag{2.26}\] \[=(1-\nu)^{1-|\boldsymbol{\sigma}|}\lambda_{\boldsymbol{\sigma}}(\mathbf{x})\lambda_{P_{1}}(\mathbf{x})^{-|\sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\sigma_{M}|}\] for all \(\mathbf{y}\in B(\mathbf{x},\nu\lambda_{\boldsymbol{\sigma}}(\mathbf{x}))\). In the second step we applied (2.24). The required bounds then arise from Lemma 2.3 followed by either (2.25) or (2.26) as appropriate. We introduce the following notation. For cluster set \(\mathbf{P}=(P_{1},\ldots,P_{M})\) and \(\boldsymbol{\alpha}\in\mathbb{N}_{0}^{3M}\) define \[\phi_{\boldsymbol{\alpha}}=D_{\mathbf{P}}^{\boldsymbol{\alpha}}\phi.\] We have previously obtained estimates for the coefficients, and their derivatives, in equation (2.8). By taking cluster derivatives of this equation, we obtain elliptic equations whose weak solutions are \(\phi_{\boldsymbol{\alpha}}\). By a process of rescaling we will apply elliptic regularity to give quantitative bounds to the \(L^{\infty}\)-norms of \(\phi_{\boldsymbol{\alpha}}\) and \(\nabla\phi_{\boldsymbol{\alpha}}\). From here it is straightforward to obtain corresponding bounds for \(D_{\mathbf{P}}^{\boldsymbol{\alpha}}\psi\) and \(D_{\mathbf{P}}^{\boldsymbol{\alpha}}\nabla\psi\). **Lemma 2.5**.: _For every \(|\boldsymbol{\alpha}|\geq 1\) we have \(\phi_{\boldsymbol{\alpha}}\) is a weak solution to_ \[-\Delta\phi_{\boldsymbol{\alpha}}-2\nabla F\cdot\nabla\phi_{\boldsymbol{\alpha}}+(\Delta F_{s}-|\nabla F|^{2}-E)\phi_{\boldsymbol{\alpha}}\\ =\sum_{\begin{subarray}{c}\boldsymbol{\sigma}\leq\boldsymbol{\alpha}\\ \boldsymbol{\sigma}\neq 0\end{subarray}}\binom{\boldsymbol{\alpha}}{\boldsymbol{\sigma}}\big{(}2D_{\mathbf{P}}^{\boldsymbol{\sigma}}\nabla F\cdot\nabla\phi_{\boldsymbol{\alpha}-\boldsymbol{\sigma}}-D_{\mathbf{P}}^{\boldsymbol{\sigma}}(\Delta F_{s}-|\nabla F|^{2})\phi_{\boldsymbol{\alpha}-\boldsymbol{\sigma}}\big{)}=:g_{\boldsymbol{\alpha}} \tag{2.27}\] _in \(\Sigma_{\boldsymbol{\alpha}}^{c}\), and therefore \(\phi_{\boldsymbol{\alpha}}\in C^{1}(\Sigma_{\boldsymbol{\alpha}}^{c})\cap H_{loc}^{2}(\Sigma_{\boldsymbol{\alpha}}^{c})\)._ Proof.: We prove by induction, starting with \(|\boldsymbol{\alpha}|=1\). Let \(\mathcal{L}(\cdot,\cdot)\) be the bilinear form corresponding to the operator acting on \(\phi_{\boldsymbol{\alpha}}\) on the left-hand side of (2.27), as defined in (1.34). Since \(\phi\in H_{loc}^{2}(\mathbb{R}^{3N})\) we have \(\phi_{\boldsymbol{\alpha}}\in H_{loc}^{1}(\mathbb{R}^{3N})\). 
We use integration by parts to show that, for any \(\chi\in C_{c}^{\infty}(\Sigma_{\boldsymbol{\alpha}}^{c})\), \[\mathcal{L}(\phi_{\boldsymbol{\alpha}},\chi)=-\mathcal{L}(\phi,D_{\mathbf{P}}^{\boldsymbol{\alpha}}\chi)+\int_{\Sigma_{\boldsymbol{\alpha}}^{c}}g_{\boldsymbol{\alpha}}\chi\,d\mathbf{x}. \tag{2.28}\] Now, \(\mathcal{L}(\phi,D_{\mathbf{P}}^{\boldsymbol{\alpha}}\chi)=0\) since \(\phi\) is a weak solution to (2.8), hence (2.27) holds. By Lemma 2.3 and that \(\phi\in C^{1}(\mathbb{R}^{3N})\) we see that \(g_{\boldsymbol{\alpha}}\in L_{loc}^{\infty}(\Sigma_{\boldsymbol{\alpha}}^{c})\). Hence, by Theorem 1.5, \(\phi_{\boldsymbol{\alpha}}\in C^{1}(\Sigma_{\boldsymbol{\alpha}}^{c})\cap H_{loc}^{2}(\Sigma_{\boldsymbol{\alpha}}^{c})\). Assume the hypothesis holds for all multiindices \(1\leq|\boldsymbol{\alpha}|\leq k-1\) for some \(k\geq 2\). Take some arbitrary multiindex \(|\boldsymbol{\alpha}|=k\). We will prove the induction hypothesis for \(\boldsymbol{\alpha}\). First, we state the useful fact that for any \(\boldsymbol{\sigma}\leq\boldsymbol{\alpha}\) we have \(\Sigma_{\boldsymbol{\sigma}}\subset\Sigma_{\boldsymbol{\alpha}}\). Take some \(\boldsymbol{\eta}\leq\boldsymbol{\alpha}\) with \(|\boldsymbol{\eta}|=1\). Notice, then, that \(\phi_{\boldsymbol{\alpha}-\boldsymbol{\eta}}\in H_{loc}^{2}(\Sigma_{\boldsymbol{\alpha}}^{c})\) by the induction hypothesis, and hence \(\phi_{\boldsymbol{\alpha}}\in H_{loc}^{1}(\Sigma_{\boldsymbol{\alpha}}^{c})\). This allows \(\mathcal{L}(\phi_{\boldsymbol{\alpha}},\chi)\) to be defined for any \(\chi\in C_{c}^{\infty}(\Sigma_{\boldsymbol{\alpha}}^{c})\). Integration by parts is then used on the \(D_{\mathbf{P}}^{\boldsymbol{\eta}}\)-derivative, in a similar way to (2.28). Along with the induction hypothesis and further applications of integration by parts, we find that (2.27) holds for \(\phi_{\boldsymbol{\alpha}}\) as a weak solution. Regularity for \(\phi_{\boldsymbol{\alpha}}\) is shown in a similar way to the \(|\boldsymbol{\alpha}|=1\) case. Indeed, the induction hypothesis can be used to show \(g_{\boldsymbol{\alpha}}\in L_{loc}^{\infty}(\Sigma_{\boldsymbol{\alpha}}^{c})\). We now apply the \(C^{1}\)-elliptic regularity estimates of Theorem 1.5, via a scaling procedure, to the equations (2.27). **Lemma 2.6**.: _For all \(|\boldsymbol{\alpha}|\geq 1\) and \(0<r<R<1\) there exists \(C\), dependent only on \(E,Z,N,r\) and \(R\), such that_ \[\left\|\nabla\phi_{\boldsymbol{\alpha}}\right\|_{L^{\infty}(B(\mathbf{x},r\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\\ \leq C\lambda_{\boldsymbol{\alpha}}(\mathbf{x})^{-1}\big{(}\left\|\phi_{\boldsymbol{\alpha}}\right\|_{L^{\infty}(B(\mathbf{x},R\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}+\lambda_{\boldsymbol{\alpha}}(\mathbf{x})^{2}\left\|g_{\boldsymbol{\alpha}}\right\|_{L^{\infty}(B(\mathbf{x},R\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\big{)} \tag{2.29}\] _for all \(\mathbf{x}\in\Sigma_{\boldsymbol{\alpha}}^{c}\). The function \(g_{\boldsymbol{\alpha}}\) was given by (2.27)._ Proof.: Fix any \(\mathbf{x}\in\Sigma_{\boldsymbol{\alpha}}^{c}\) and denote \(\lambda=\lambda_{\boldsymbol{\alpha}}(\mathbf{x})\). The proof proceeds via a rescaling. 
Define \[w(\mathbf{y}) =\phi_{\boldsymbol{\alpha}}(\mathbf{x}+\lambda\mathbf{y})\] \[\mathbf{c}(\mathbf{y}) =-2\nabla F(\mathbf{x}+\lambda\mathbf{y})\] \[d(\mathbf{y}) =\Delta F_{s}(\mathbf{x}+\lambda\mathbf{y})-|\nabla F(\mathbf{x} +\lambda\mathbf{y})|^{2}-E\] \[f(\mathbf{y}) =g_{\boldsymbol{\alpha}}(\mathbf{x}+\lambda\mathbf{y})\] for all \(\mathbf{y}\in B(0,1)\). Then by Lemma 2.5 we have that \(w\) is a weak solution to \[-\Delta w+\lambda\mathbf{c}\cdot\nabla w+\lambda^{2}dw=\lambda^{2}f \tag{2.30}\] in \(B(0,1)\). By (2.6) and (2.5), and that \(\lambda\leq 1\) by definition, we obtain \[\lambda\left\|\mathbf{c}\right\|_{L^{\infty}(B(0,1))}+\lambda^{2}\left\|d \right\|_{L^{\infty}(B(0,1))}\leq K\] for some \(K\), dependent only on \(E,Z\) and \(N\) and in particular is independent of our choice of \(\mathbf{x}\). Therefore, by Theorem 1.5, with \(\theta=0\), we get some \(C\), dependent only on \(N,K,r\) and \(R\), such that \[\left\|w\right\|_{C^{1}(\overline{B(0,r)})}\leq C(\left\|w\right\|_{L^{\infty} (B(0,R))}+\lambda^{2}\left\|f\right\|_{L^{\infty}(B(0,R))}). \tag{2.31}\] Finally, we use \(\nabla w(\mathbf{y})=\lambda\nabla\phi_{\boldsymbol{\alpha}}(\mathbf{x}+ \lambda\mathbf{y})\) by the chain rule, and rewrite (2.31) to give (2.29). The following proposition uses induction to write estimates for both \(\nabla\phi_{\boldsymbol{\alpha}}\) and \(\phi_{\boldsymbol{\alpha}}\) with a bound involving only the zeroth and first-order derivatives of \(\phi\). Corollary A.2 of the Appendix is used in the proof to improve the regularity of the second derivative of \(\phi\), and this benefit is passed on through induction to all higher orders of derivative. The function \(f_{\infty}(\,\cdot\,;R;\phi)\) was defined in (1.28). **Proposition 2.7**.: _For any \(|\boldsymbol{\alpha}|\geq 1\), any \(0<r<R<1\) and \(b\in[0,1)\) there exists \(C\), dependent on \(\boldsymbol{\alpha},r,R\) and \(b\), such that for \(k=0,1\), with \(k+|\boldsymbol{\alpha}|\geq 2\),_ \[\left\|\nabla^{k}\phi_{\boldsymbol{\alpha}}\right\|_{L^{\infty}(B(\mathbf{x}, r\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\leq C\lambda_{\boldsymbol{\alpha}}( \mathbf{x})^{1-k}\lambda_{P_{1}}(\mathbf{x})^{-|\alpha_{1}|}\ldots\lambda_{P_ {M}}(\mathbf{x})^{-|\alpha_{M}|}\mu_{\boldsymbol{\alpha}}(\mathbf{x})^{b}f_{ \infty}(\mathbf{x};R;\phi) \tag{2.32}\] _for all \(\mathbf{x}\in\Sigma_{\boldsymbol{\alpha}}^{c}\). The inequality also holds for \(|\boldsymbol{\alpha}|=1\) and \(k=0\) if we take \(b=0\)._ Proof.: Suppose \(|\boldsymbol{\alpha}|=1\). Let \(l\) be the index \(1\leq l\leq M\) such that \(\alpha_{l}\neq 0\). Then \(\lambda_{P_{l}}=\lambda_{\boldsymbol{\alpha}}=\mu_{\boldsymbol{\alpha}}\) and \(\Sigma_{\boldsymbol{\alpha}}=\Sigma_{P_{l}}\). The required bound for \(k=b=0\) follows from the definition of cluster derivative, (1.15). When \(k=1\) and \(b\in[0,1)\) the bound follows directly from Corollary A.2 with \(P=P_{l}\). For \(|\boldsymbol{\alpha}|\geq 2\) we prove by induction. Assume the hypothesis holds for all multiindices \(\boldsymbol{\alpha}\) with \(1\leq|\boldsymbol{\alpha}|\leq m-1\) for some \(m\geq 2\). Take any \(\boldsymbol{\alpha}\) with \(|\boldsymbol{\alpha}|=m\). We prove the \(k=0\) and \(k=1\) cases in turn. 
It is useful, here, to state the following fact: for all \(\boldsymbol{\sigma}\leq\boldsymbol{\alpha}\) we have \(\Sigma_{\boldsymbol{\sigma}}\subset\Sigma_{\boldsymbol{\alpha}}\) and hence \(\lambda_{\boldsymbol{\alpha}}(\mathbf{x})\leq\lambda_{\boldsymbol{\sigma}}( \mathbf{x})\) for all \(\mathbf{x}\in\Sigma_{\boldsymbol{\alpha}}^{c}\). The \(k=0\) case is a straightforward application of the induction hypothesis and is described as follows. Firstly, for each \(\mathbf{x}\in\Sigma_{\boldsymbol{\alpha}}^{c}\) it is clear that there exists some \(1\leq l\leq M\) such that \(\lambda_{\boldsymbol{\alpha}}(\mathbf{x})=\lambda_{P_{l}}(\mathbf{x})\) and where \(\alpha_{l}\neq 0\). Therefore we can find some \(\boldsymbol{\eta}\leq\boldsymbol{\alpha}\) with \(|\boldsymbol{\eta}|=1\) and \(\eta_{j}=0\) for each \(j\neq l\) where \(1\leq j\leq M\). Now, from the definition of cluster derivatives (1.16), \[\|\phi_{\boldsymbol{\alpha}}\|_{L^{\infty}(B(\mathbf{x},r\lambda_{\boldsymbol {\alpha}}(\mathbf{x})))}\leq N\,\|\nabla\phi_{\boldsymbol{\alpha}-\boldsymbol{ \eta}}\|_{L^{\infty}(B(\mathbf{x},r\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\,. \tag{2.33}\] We can then use the induction assumption to show existence of \(C\) such that \[\|\nabla\phi_{\boldsymbol{\alpha}-\boldsymbol{\eta}}\|_{L^{\infty}(B(\mathbf{x },r\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\leq C\lambda_{P_{l}}(\mathbf{ x})\lambda_{P_{1}}(\mathbf{x})^{-|\alpha_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-| \alpha_{M}|}\mu_{\boldsymbol{\alpha}-\boldsymbol{\eta}}(\mathbf{x})^{b}f_{ \infty}(\mathbf{x};R;\phi), \tag{2.34}\] where we can then use \(\lambda_{\boldsymbol{\alpha}}(\mathbf{x})=\lambda_{P_{l}}(\mathbf{x})\) and \(\mu_{\boldsymbol{\alpha}-\boldsymbol{\eta}}(\mathbf{x})\leq\mu_{\boldsymbol{ \alpha}}(\mathbf{x})\) to complete the bound (2.32) for our choice of \(\boldsymbol{\alpha}\) in the case of \(k=0\). The \(k=1\) case follows from the induction hypothesis and Lemma 2.6. Let \(r^{\prime}=(r+R)/2\). Firstly, by the definition of \(g_{\boldsymbol{\alpha}}\), (2.27), along with (2.5) and Lemma 2.4 we can find \(C\), depending on \(\boldsymbol{\alpha}\) and \(r^{\prime}\), such that \[\|g_{\boldsymbol{\alpha}}\|_{L^{\infty}(B(\mathbf{x},r^{\prime} \lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\leq C\sum_{\begin{subarray}{c} \boldsymbol{\sigma}\leq\boldsymbol{\alpha}\\ \boldsymbol{\sigma}\neq 0\end{subarray}}\lambda_{P_{1}}(\mathbf{x})^{-| \sigma_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\sigma_{M}|}\big{(}\,\|\phi_ {\boldsymbol{\alpha}-\boldsymbol{\sigma}}\|_{L^{\infty}(B(\mathbf{x},r^{\prime }\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\\ +\|\nabla\phi_{\boldsymbol{\alpha}-\boldsymbol{\sigma}}\|_{L^{ \infty}(B(\mathbf{x},r^{\prime}\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\, \big{)}.\] It follows by the induction hypothesis with \(b=0\), along with the definition of \(f_{\infty}\), (1.29), that we can find some \(C\), depending on \(\boldsymbol{\alpha},r^{\prime}\) and \(R\), such that \[\|g_{\boldsymbol{\alpha}}\|_{L^{\infty}(B(\mathbf{x},r^{\prime}\lambda_{ \boldsymbol{\alpha}}(\mathbf{x})))}\leq C\lambda_{P_{1}}(\mathbf{x})^{-|\alpha_ {1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\alpha_{M}|}f_{\infty}(\mathbf{x};R;\phi) \tag{2.35}\] for all \(\mathbf{x}\in\Sigma_{\boldsymbol{\alpha}}^{c}\). 
Now, to prove the required bound (2.32) for \(k=1\) we apply Lemma 2.6 to obtain some constant \(C\), depending on \(r\) and \(r^{\prime}\), such that \[\|\nabla\phi_{\boldsymbol{\alpha}}\|_{L^{\infty}(B(\mathbf{x},r\lambda_{ \boldsymbol{\alpha}}(\mathbf{x})))}\leq C\big{(}\lambda_{\boldsymbol{\alpha}}( \mathbf{x})^{-1}\,\|\phi_{\boldsymbol{\alpha}}\|_{L^{\infty}(B(\mathbf{x},r^{ \prime}\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}+\lambda_{\boldsymbol{\alpha}} (\mathbf{x})\,\|g_{\boldsymbol{\alpha}}\|_{L^{\infty}(B(\mathbf{x},r^{\prime} \lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\,\big{)}.\] We use the \(k=0\) case, proven above, to bound the \(L^{\infty}\)-norm of \(\phi_{\boldsymbol{\alpha}}\) in the first term on the right-hand side of the above inequality. To the second term, we apply (2.35) and the simple inequality \(\lambda_{\boldsymbol{\alpha}}(\mathbf{x})\leq\mu_{\boldsymbol{\alpha}}(\mathbf{x })^{b}\). Together, these give the bound (2.32) for \(k=1\) and our choice of \(\boldsymbol{\alpha}\). This completes the induction. Using the definition \(\phi=e^{-F}\psi\), we now obtain bounds to the cluster derivatives of the eigenfunction \(\psi\) using those for the cluster derivatives of \(\phi\) in the above proposition. Proof of Theorem 1.3.: It is clear that (1.31) holds when \(\boldsymbol{\alpha}=0\). Therefore, consider \(|\boldsymbol{\alpha}|\geq 1\). We first prove (1.31) for \(k=0\). Take \(\mathbf{x}\in\Sigma^{c}_{\boldsymbol{\alpha}}\), then by the Leibniz rule for cluster derivatives we have \[D^{\boldsymbol{\alpha}}_{\mathbf{P}}\psi=\sum_{\boldsymbol{\beta}\leq \boldsymbol{\alpha}}\binom{\boldsymbol{\alpha}}{\boldsymbol{\beta}}D^{ \boldsymbol{\beta}}_{\mathbf{P}}\big{(}e^{F}\big{)}\phi_{\boldsymbol{\alpha}- \boldsymbol{\beta}} \tag{2.36}\] in \(B(\mathbf{x},r\lambda_{\boldsymbol{\alpha}}(\mathbf{x}))\). Now, for each \(\boldsymbol{\beta}\leq\boldsymbol{\alpha}\) there exists some \(C\), independent of \(\mathbf{x}\), such that \[\Big{\|}D^{\boldsymbol{\beta}}_{\mathbf{P}}\big{(}e^{F}\big{)}\phi_{ \boldsymbol{\alpha}-\boldsymbol{\beta}}\Big{\|}_{L^{\infty}(B(\mathbf{x},r \lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\leq C\lambda_{\boldsymbol{ \alpha}}(\mathbf{x})\lambda_{P_{1}}(\mathbf{x})^{-|\alpha_{1}|}\ldots\lambda_ {P_{M}}(\mathbf{x})^{-|\alpha_{M}|}f_{\infty}(\mathbf{x};R;\phi).\] To prove this, we use Lemma 2.4 and Proposition 2.7 with \(b=0\). In addition, if \(\boldsymbol{\beta}=0\) we use (2.6) and if \(\boldsymbol{\beta}=\boldsymbol{\alpha}\) we use the definition (1.29). If \(\boldsymbol{\beta}\notin\{0,\boldsymbol{\alpha}\}\) we use the inequality \(\lambda_{\boldsymbol{\beta}}(\mathbf{x})\lambda_{\boldsymbol{\alpha}- \boldsymbol{\beta}}(\mathbf{x})\leq\lambda_{\boldsymbol{\alpha}}(\mathbf{x})\), which is obtained from Lemma 2.1. The required bound (1.31) for \(k=0\) then follows from (2.36) and (2.9)-(2.10). Take \(\mathbf{x}\in\Sigma^{c}_{\boldsymbol{\alpha}}\). 
By the definition \(F=F_{c}-F_{s}\), the equality \(\nabla\psi=\psi\nabla F+e^{F}\nabla\phi\) and the Leibniz rule, we have \[D^{\boldsymbol{\alpha}}_{\mathbf{P}}\nabla\psi=G^{\boldsymbol{\alpha}}_{ \mathbf{P}}+\psi D^{\boldsymbol{\alpha}}_{\mathbf{P}}\nabla F_{c}\] in \(B(\mathbf{x},r\lambda_{\boldsymbol{\alpha}}(\mathbf{x}))\), where \[G^{\boldsymbol{\alpha}}_{\mathbf{P}}=\sum_{\begin{subarray}{c}\boldsymbol{ \beta}\leq\boldsymbol{\alpha}\\ |\boldsymbol{\beta}|\geq 1\end{subarray}}\binom{\boldsymbol{\alpha}}{ \boldsymbol{\beta}}\psi_{\boldsymbol{\beta}}\,D^{\boldsymbol{\alpha}- \boldsymbol{\beta}}_{\mathbf{P}}\nabla F+\sum_{\boldsymbol{\beta}\leq \boldsymbol{\alpha}}\binom{\boldsymbol{\alpha}}{\boldsymbol{\beta}}\nabla\phi_ {\boldsymbol{\beta}}\,D^{\boldsymbol{\alpha}-\boldsymbol{\beta}}_{\mathbf{P}} \big{(}e^{F}\big{)}-\psi\,D^{\boldsymbol{\alpha}}_{\mathbf{P}}\nabla F_{s}. \tag{2.37}\] We bound each term in \(G^{\boldsymbol{\alpha}}_{\mathbf{P}}\) as in (1.33). Take any \(\boldsymbol{\beta}\leq\boldsymbol{\alpha}\) with \(|\boldsymbol{\beta}|\geq 1\). Notice that \(\Sigma_{\boldsymbol{\beta}}\subset\Sigma_{\boldsymbol{\alpha}}\) and hence \(\lambda_{\boldsymbol{\alpha}}(\mathbf{x})\leq\lambda_{\boldsymbol{\beta}}( \mathbf{x})\). Now, there exists \(C\), independent of \(\mathbf{x}\), such that \[\Big{\|}\psi_{\boldsymbol{\beta}}\,D^{\boldsymbol{\alpha}-\boldsymbol{\beta }}_{\mathbf{P}}\nabla F\Big{\|}_{L^{\infty}(B(\mathbf{x},r\lambda_{\boldsymbol {\alpha}}(\mathbf{x})))}\leq C\lambda_{\boldsymbol{\beta}}(\mathbf{x})\lambda _{P_{1}}(\mathbf{x})^{-|\alpha_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-| \alpha_{M}|}f_{\infty}(\mathbf{x};R). \tag{2.38}\] To prove this inequality, we bound derivatives of \(F\) using Lemma 2.4 if \(\boldsymbol{\beta}\neq\boldsymbol{\alpha}\) and (2.6) if \(\boldsymbol{\beta}=\boldsymbol{\alpha}\). The derivatives \(\psi_{\boldsymbol{\beta}}\) are then bounded using (1.31) with \(k=0\), which was proven above. The right-hand side of (2.38) can be bounded as in (1.33) by using the simple bound \(\lambda_{\boldsymbol{\beta}}\leq\mu_{\boldsymbol{\alpha}}\). Next, we can find \(C\), independent of \(\mathbf{x}\), such that \[\Big{\|}\nabla\phi_{\boldsymbol{\beta}}\,D^{\boldsymbol{\alpha}-\boldsymbol{ \beta}}_{\mathbf{P}}\big{(}e^{F}\big{)}\Big{\|}_{L^{\infty}(B(\mathbf{x},r \lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\leq C\mu_{\boldsymbol{\beta}}( \mathbf{x})^{b}\lambda_{\boldsymbol{\alpha}-\boldsymbol{\beta}}(\mathbf{x}) \lambda_{P_{1}}(\mathbf{x})^{-|\alpha_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{- |\alpha_{M}|}f_{\infty}(\mathbf{x};R;\phi).\] This is proven using Proposition 2.7 to bound the derivatives \(\nabla\phi_{\boldsymbol{\beta}}\), and Lemma 2.4 and (2.6) to bound derivatives of \(e^{F}\). The right-hand side of the above inequality can be bounded as in (1.33) after use of (2.9)-(2.10) and the inequalities \(\mu_{\boldsymbol{\beta}}\leq\mu_{\boldsymbol{\alpha}}\) and \(\lambda_{\boldsymbol{\alpha}-\boldsymbol{\beta}}\leq 1\). In addition, by Lemma 2.4 there exists \(C\) such that \[\big{\|}\nabla\phi\,D^{\boldsymbol{\alpha}}_{\mathbf{P}}\big{(}e^{F}\big{)} \big{\|}_{L^{\infty}(B(\mathbf{x},r\lambda_{\boldsymbol{\alpha}}(\mathbf{x}))) }\leq C\lambda_{\boldsymbol{\alpha}}(\mathbf{x})\lambda_{P_{1}}(\mathbf{x})^{- |\alpha_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\alpha_{M}|}f_{\infty}( \mathbf{x};R;\phi).\] Use of (2.9)-(2.10), as before, and the inequality \(\lambda_{\boldsymbol{\alpha}}\leq\mu_{\boldsymbol{\alpha}}\) give the correct bound. 
Finally, the last term in (2.37) is readily bounded appropriately using (2.5). In view of (2.37), this completes the proof of (1.33). To prove (1.31) for \(k=1\) we use (1.32), (1.33) and the following inequality, \[\|\psi D_{\mathbf{P}}^{\boldsymbol{\alpha}}\nabla F_{c}\|_{L^{\infty}(B(\mathbf{x},r\lambda_{\boldsymbol{\alpha}}(\mathbf{x})))}\leq C\lambda_{P_{1}}(\mathbf{x})^{-|\alpha_{1}|}\ldots\lambda_{P_{M}}(\mathbf{x})^{-|\alpha_{M}|}f_{\infty}(\mathbf{x};R),\] which holds for some \(C\) as a direct consequence of Lemma 2.4. ## 3. Biscaled cutoffs and their associated clusters In the proof of Theorem 1.1, partial derivatives of \(\gamma(x,y)\) will be written in terms of integrals involving cluster derivatives of \(\psi\). Cutoff functions are used here to ensure these cluster derivatives are evaluated only at points in \(\mathbb{R}^{3N}\) where they exist and where we can apply Theorem 1.3. The cutoffs introduced in this section are of a similar form to those used in [5], which themselves are based on those used in [3]. In all these cases, cutoffs enforce bounds on the distances between pairs of particles on the support of the cutoff. The bounds on these distances can be scaled using a scaling parameter. The cutoffs defined here will involve the points (i.e. the "particles") \(x,y\) and \(x_{j}\), \(2\leq j\leq N\). For a given cutoff, we will define a corresponding cluster \(P\) which is a collection of the particles whose distance to \(x\) is bounded above appropriately. The same is done for \(y\), with a cluster \(S\). In using these cutoffs, derivatives of \(\gamma(x,y)\) in the \(x\)-variable can naturally be written as cluster derivatives using the \(P\) cluster, and those in the \(y\)-variable can be written as cluster derivatives using the \(S\) cluster. We will also need to take derivatives of \(\gamma(x,y)\) in both \(x\) and \(y\) simultaneously (which will later be denoted by \(\partial_{u}\)). For this, it is natural to consider a larger cluster \(Q\) which contains both \(P\) and \(S\). The particles in \(Q\) are held close to both \(x\) and \(y\), but potentially more loosely than the particles in \(P\) and \(S\) are held to \(x\) and \(y\) respectively. For this reason, the cutoffs used here will involve two scaling parameters \(\delta\) and \(\epsilon\) and hence are called _biscaled_. ### Definition of \(\Phi\) To begin, take some \(\chi\in C_{c}^{\infty}(\mathbb{R})\), \(0\leq\chi\leq 1\), with \[\chi(s)=\begin{cases}1&\text{ if }|s|\leq 1\\ 0&\text{ if }|s|\geq 2.\end{cases} \tag{3.1}\] for \(s\in\mathbb{R}\). For each \(t>0\) we can define the following two _cutoff factors_ \(\zeta_{t}=\zeta_{t}(z)\) and \(\theta_{t}=\theta_{t}(z)\) by \[\zeta_{t}(z)=\chi\Big{(}\frac{4N|z|}{t}\Big{)},\qquad\theta_{t}(z)=1-\zeta_{t}(z) \tag{3.2}\] for \(z\in\mathbb{R}^{3}\). We have the following _support criteria_ for cutoff factors. For any \(z\in\mathbb{R}^{3}\) and \(t>0\), * If \(\zeta_{t}(z)\neq 0\) then \(|z|<(2N)^{-1}t\), * If \(\theta_{t}(z)\neq 0\) then \(|z|>(4N)^{-1}t\). Let \(0<\delta<(4N)^{-1}\epsilon\) (the use of \((4N)^{-1}\) is explained in the following lemma). 
We define a _biscaled cutoff_, which depends on \(\delta\) and \(\epsilon\) as parameters, as a function \(\Phi=\Phi_{\delta,\epsilon}(x,y,\mathbf{\hat{x}})\) defined by \[\Phi_{\delta,\epsilon}(x,y,\mathbf{\hat{x}})=\prod_{2\leq j\leq N}g_{j}^{(1)}(x-x_{j})\prod_{2\leq j\leq N}g_{j}^{(2)}(y-x_{j})\prod_{2\leq k<l\leq N}f_{kl}(x_{k}-x_{l}) \tag{3.3}\] for \(x,y\in\mathbb{R}^{3}\) and \(\mathbf{\hat{x}}\in\mathbb{R}^{3N-3}\), and where \(g_{j}^{(1)},g_{j}^{(2)},f_{kl}\in\{\zeta_{\delta},\theta_{\delta}\zeta_{\epsilon},\theta_{\epsilon}\}\) for \(2\leq j\leq N\) and \(2\leq k<l\leq N\). In addition, for \(2\leq k<l\leq N\) we define \(f_{lk}=f_{kl}\). The idea is that on the support of \(\Phi\) we have upper and/or lower bounds to the distances between various pairs of particles. If we take the particles \(x_{j}\) and \(x_{k}\), for example, we have that \(\zeta_{\delta}(x_{j}-x_{k})\) holds the two particles close relative to a distance \(\delta\), \(\theta_{\epsilon}(x_{j}-x_{k})\) holds them apart relative to a distance \(\epsilon\), and \(\theta_{\delta}\zeta_{\epsilon}\) keeps them within an intermediate distance apart. We use the standard notation \(\mathds{1}_{S}\) to denote the indicator function on a set \(S\). It is then convenient to define for each \(t>0\), \[\mathds{1}_{t}(z) =\mathds{1}_{\{(4N)^{-1}t<|z|<(2N)^{-1}t\}}(z) \tag{3.4}\] \[\mathds{1}_{t}^{\prime}(z) =\mathds{1}_{\{(4N)^{-1}t<|z|<1\}}(z) \tag{3.5}\] for \(z\in\mathbb{R}^{3}\). And for each \(t>0\) we define the function \(M_{t}=M_{t}(x,y,\mathbf{\hat{x}})\) by \[M_{t}(x,y,\mathbf{\hat{x}})=\sum_{2\leq j\leq N}\mathds{1}_{t}(x-x_{j})+\sum_{2\leq j\leq N}\mathds{1}_{t}(y-x_{j})+\sum_{2\leq k<l\leq N}\mathds{1}_{t}(x_{k}-x_{l}) \tag{3.6}\] for \(x,y\in\mathbb{R}^{3}\) and \(\mathbf{\hat{x}}\in\mathbb{R}^{3N-3}\). The cutoffs, \(\Phi\), defined in (3.3) form a partition of unity as shown in the following lemma. **Lemma 3.1**.: _There exists a finite collection \(\{\Phi^{(j)}\}_{j=1}^{J}\) of biscaled cutoffs (3.3), with integer \(J\) depending only on \(N\), such that whenever \(0<\delta\leq(4N)^{-1}\epsilon\) we have_ \[\sum_{j=1}^{J}\Phi^{(j)}(x,y,\mathbf{\hat{x}})=1\] _for all \(x,y\in\mathbb{R}^{3}\), \(\mathbf{\hat{x}}\in\mathbb{R}^{3N-3}\)._ Proof.: First, we claim that for all \(z\in\mathbb{R}^{3}\), \[\theta_{\delta}(z)\theta_{\epsilon}(z) =\theta_{\epsilon}(z), \tag{3.7}\] \[\zeta_{\delta}(z)\theta_{\epsilon}(z) =0, \tag{3.8}\] \[\zeta_{\delta}(z)\zeta_{\epsilon}(z) =\zeta_{\delta}(z). \tag{3.9}\] This claim, along with the definitions (3.2), shows that \[1=\big{(}\zeta_{\delta}(z)+\theta_{\delta}(z)\big{)}\big{(}\zeta_{\epsilon}(z)+\theta_{\epsilon}(z)\big{)}=\zeta_{\delta}(z)+\theta_{\delta}(z)\zeta_{\epsilon}(z)+\theta_{\epsilon}(z) \tag{3.10}\] for all \(z\in\mathbb{R}^{3}\). To prove (3.7) and (3.8) we need only consider \(z\) such that \(\theta_{\epsilon}(z)\neq 0\), in which case \(|z|>(4N)^{-1}\epsilon\geq\delta\). For such \(z\) we therefore have \(\theta_{\delta}(z)=1\), by the definition of \(\theta_{\delta}\), which proves (3.7). For (3.8), if we also have \(\zeta_{\delta}(z)\neq 0\), then \(|z|<(2N)^{-1}\delta\) by the support criteria for \(\zeta_{\delta}\), giving a contradiction. For (3.9) we need only consider \(z\) such that \(\zeta_{\delta}(z)\neq 0\), in which case \(|z|<(2N)^{-1}\delta\). This gives \(4N|z|\epsilon^{-1}<(2N)^{-1}\) and hence, by definition, \(\zeta_{\epsilon}(z)=1\). 
For any \(A,B\subset\{2,\ldots,N\}\) with \(A\cap B=\emptyset\) we define \[\tau_{A,B}(\mathbf{x})=\prod_{j\in A}\zeta_{\delta}(x_{1}-x_{j})\prod_{j\in B}(\theta_{\delta}\zeta_{\epsilon})(x_{1}-x_{j})\prod_{j\in\{2,\ldots,N\}\setminus(A\cup B)}\theta_{\epsilon}(x_{1}-x_{j})\] and therefore \[\sum_{\begin{subarray}{c}A\subset\{2,\ldots,N\}\\ B\subset\{2,\ldots,N\}\setminus A\end{subarray}}\tau_{A,B}(\mathbf{x})=\prod_{2\leq j\leq N}\big{(}\zeta_{\delta}(x_{1}-x_{j})+(\theta_{\delta}\zeta_{\epsilon})(x_{1}-x_{j})+\theta_{\epsilon}(x_{1}-x_{j})\big{)}=1\] for all \(\mathbf{x}\in\mathbb{R}^{3N}\). Let \(\Xi=\{(j,k):2\leq j<k\leq N\}\). For each subset \(Y,Z\subset\Xi\) with \(Y\cap Z=\emptyset\) we define \[T_{Y,Z}(\mathbf{\hat{x}})=\prod_{(j,k)\in Y}\zeta_{\delta}(x_{j}-x_{k})\prod_{(j,k)\in Z}(\theta_{\delta}\zeta_{\epsilon})(x_{j}-x_{k})\prod_{(j,k)\in\Xi\setminus(Y\cup Z)}\theta_{\epsilon}(x_{j}-x_{k})\] and therefore \[\sum_{\begin{subarray}{c}Y\subset\Xi\\ Z\subset\Xi\setminus Y\end{subarray}}T_{Y,Z}(\mathbf{\hat{x}})=\prod_{2\leq j<k\leq N}\big{(}\zeta_{\delta}(x_{j}-x_{k})+(\theta_{\delta}\zeta_{\epsilon})(x_{j}-x_{k})+\theta_{\epsilon}(x_{j}-x_{k})\big{)}=1\] for all \(\mathbf{\hat{x}}\in\mathbb{R}^{3N-3}\). Overall, \[\sum_{\begin{subarray}{c}A\subset\{2,\ldots,N\}\\ B\subset\{2,\ldots,N\}\setminus A\end{subarray}}\sum_{\begin{subarray}{c}C\subset\{2,\ldots,N\}\\ D\subset\{2,\ldots,N\}\setminus C\end{subarray}}\sum_{\begin{subarray}{c}Y\subset\Xi\\ Z\subset\Xi\setminus Y\end{subarray}}\tau_{A,B}(x,\mathbf{\hat{x}})\tau_{C,D}(y,\mathbf{\hat{x}})T_{Y,Z}(\mathbf{\hat{x}})=1,\] which is a sum of biscaled cutoffs. ### Clusters corresponding to \(\Phi\) We now introduce three clusters corresponding to each \(\Phi\), defined in (3.3). To define these clusters we first introduce index sets based solely on the choice of cutoff factors \(g_{j}^{(1)},g_{j}^{(2)}\) and \(f_{kl}\) in the definition of \(\Phi\), (3.3). The index sets and clusters are therefore not dependent on \(x,y,\mathbf{\hat{x}}\) nor on the scaling parameters \(\delta\) and \(\epsilon\). We define the index set \(L\subset\{(j,k)\in\{1,\ldots,N\}^{2}:j\neq k\}\) as follows. * We have \((j,k)\in L\) if \(f_{jk}\neq\theta_{\epsilon}\) for \(j,k=2,\ldots,N\). Also \((1,j),(j,1)\in L\) if \((g_{j}^{(1)},g_{j}^{(2)})\neq(\theta_{\epsilon},\theta_{\epsilon})\) for \(j=2,\ldots,N\). Furthermore, we define two more index sets \(J,K\subset\{(j,k)\in\{1,\ldots,N\}^{2}:j\neq k\}\) as follows. * We have \((j,k)\in J\) if \(f_{jk}=\zeta_{\delta}\) for \(j,k=2,\ldots,N\). Also \((1,j),(j,1)\in J\) if \(g_{j}^{(1)}=\zeta_{\delta}\) for \(j=2,\ldots,N\). * We have \((j,k)\in K\) if \(f_{jk}=\zeta_{\delta}\) for \(j,k=2,\ldots,N\). Also \((1,j),(j,1)\in K\) if \(g_{j}^{(2)}=\zeta_{\delta}\) for \(j=2,\ldots,N\). These index sets obey \(J,K\subset L\). For an arbitrary index set \(I\subset\{(j,k)\in\{1,\ldots,N\}^{2}:j\neq k\}\) we say that two indices \(j,k\in\{1,\ldots,N\}\) are \(I\)-_linked_ if either \(j=k\), or \((j,k)\in I\), or there exist pairwise distinct indices \(j_{1},\ldots,j_{s}\) for \(1\leq s\leq N-2\), all distinct from \(j\) and \(k\), such that \((j,j_{1}),(j_{1},j_{2}),\ldots,(j_{s},k)\in I\). The cluster \(Q=Q(\Phi)\) is defined as the set of all indices \(L\)-linked to \(1\). The cluster \(P=P(\Phi)\) is defined as the set of all indices \(J\)-linked to \(1\). The cluster \(S=S(\Phi)\) is defined as the set of all indices \(K\)-linked to \(1\). Since \(J,K\subset L\) we see that \(P,S\subset Q\). 
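To illustrate these definitions, suppose for instance that \(N=3\) and that the cutoff factors in (3.3) are chosen as \[g_{2}^{(1)}=\zeta_{\delta},\qquad g_{2}^{(2)}=\theta_{\epsilon},\qquad g_{3}^{(1)}=\theta_{\epsilon},\qquad g_{3}^{(2)}=\theta_{\delta}\zeta_{\epsilon},\qquad f_{23}=\theta_{\epsilon}.\] Then \(J=\{(1,2),(2,1)\}\), \(K=\emptyset\) and \(L=\{(1,2),(2,1),(1,3),(3,1)\}\), and so \(P=\{1,2\}\), \(S=\{1\}\) and \(Q=\{1,2,3\}\). This particular choice is included only as an example of how the clusters are read off from the cutoff factors. 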
### Support of \(\Phi\) In what follows, the clusters \(P,S\) and \(Q\) will be understood as \(P(\Phi),S(\Phi)\) and \(Q(\Phi)\), respectively, when the biscaled cutoff \(\Phi\) is unambiguous. The following lemma shows that on the support of \(\Phi\), the cluster \(P^{*}=P\backslash\{1\}\) represents a set of particles \(x_{j}\), \(j\in P^{*}\), which are close to \(x\), and the cluster \(S^{*}=S\backslash\{1\}\) represents a set of particles \(x_{j}\), \(j\in S^{*}\), close to \(y\). In both cases this closeness is with respect to the parameter \(\delta\). The cluster \(Q^{*}=Q\backslash\{1\}\) (recall \(P,S\subset Q\)) represents a set of particles \(x_{j}\), \(j\in Q^{*}\), close to both \(x\) and \(y\), albeit potentially held more loosely since this closeness is with respect to the larger parameter \(\epsilon\). **Lemma 3.2**.: _Let \(\delta\leq(4N)^{-1}\epsilon\) and \(\delta\leq|x-y|\leq 2\delta\). Then \(\Phi(x,y,\hat{\mathbf{x}})=0\) unless_ \[|x-x_{k}|<\delta/2<|y-x_{k}|\quad\text{ for }\quad k\in P^{*} \tag{3.11}\] \[|y-x_{k}|<\delta/2<|x-x_{k}|\quad\text{ for }\quad k\in S^{*} \tag{3.12}\] \[|x-x_{k}|,\,|y-x_{k}|<\epsilon/2\quad\text{ for }\quad k\in Q^{*}. \tag{3.13}\] _Therefore, if \(\Phi(x,y,\,\cdot\,)\) is not identically zero for each \(x,y\in\mathbb{R}^{3}\) with \(\delta\leq|x-y|\leq 2\delta\), then \(P^{*}\cap S^{*}=\emptyset\)._ Proof.: By definition, if \(k\in P^{*}\), either \(g_{k}^{(1)}=\zeta_{\delta}\) or there exist pairwise distinct \(j_{1},\ldots,j_{s}\in\{2,\ldots,N\}\) with \(1\leq s\leq N-2\) such that \(g_{j_{1}}^{(1)}=\zeta_{\delta}\) and \(f_{j_{1},j_{2}},f_{j_{2},j_{3}},\ldots,f_{j_{s},k}=\zeta_{\delta}\). In the former case, support criteria for \(\zeta_{\delta}(x-x_{k})\) gives \(|x-x_{k}|<(2N)^{-1}\delta\). In the latter case, support conditions give \(|x-x_{j_{1}}|,\,|x_{j_{1}}-x_{j_{2}}|,\,\ldots,\,|x_{j_{s}}-x_{k}|<(2N)^{-1}\delta\) and so by the triangle inequality \(|x-x_{k}|<\delta/2\). Now, \(|y-x_{k}|\geq|x-y|-|x-x_{k}|>\delta/2\) by the reverse triangle inequality. The case of \(k\in S^{*}\) is analogous. Now let \(k\in Q^{*}\). First, we consider the case where either \(g_{k}^{(1)}\neq\theta_{\epsilon}\) or \(g_{k}^{(2)}\neq\theta_{\epsilon}\), or both. Without loss, assume \(g_{k}^{(1)}\neq\theta_{\epsilon}\). Then either \(g_{k}^{(1)}=\zeta_{\delta}\) or \(g_{k}^{(1)}=\theta_{\delta}\zeta_{\epsilon}\). Hence by support criteria we have the inequalities, \[|x-x_{k}|<(2N)^{-1}\epsilon,\] \[|y-x_{k}|\leq|x-y|+|x-x_{k}|\leq 2\delta+(2N)^{-1}\epsilon\leq\epsilon/N,\] which gives the required inequality since \(N\geq 2\). Now, suppose that \(g_{k}^{(1)}=g_{k}^{(2)}=\theta_{\epsilon}\). Then there exist pairwise distinct \(j_{1},\dots,j_{s}\in\{2,\dots,N\}\) with \(1\leq s\leq N-2\) such that \(f_{j_{1},j_{2}},f_{j_{2},j_{3}},\dots,f_{j_{s},k}\neq\theta_{\epsilon}\) and either \(g_{j_{1}}^{(1)}\neq\theta_{\epsilon}\) or \(g_{j_{1}}^{(2)}\neq\theta_{\epsilon}\). As before, we see that \[|x-x_{j_{1}}|,\,|y-x_{j_{1}}|\leq\epsilon/N\] regardless of which (or both) of \(g_{j_{1}}^{(1)}\) and \(g_{j_{1}}^{(2)}\) are not \(\theta_{\epsilon}\). It is also clear that by support criteria, \(|x_{j_{1}}-x_{j_{2}}|,\,\dots,\,|x_{j_{s}}-x_{k}|\leq(2N)^{-1}\epsilon\). Therefore, by the triangle inequality, \[|x-x_{k}|\leq|x-x_{j_{1}}|+|x_{j_{1}}-x_{j_{2}}|+\dots+|x_{j_{s}}-x_{k}|\leq\frac{\epsilon}{N}+\frac{\epsilon(N-2)}{2N}=\frac{\epsilon}{2},\] and similarly we can show \(|y-x_{k}|\leq\epsilon/2\), completing the proof. 
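In the example of cutoff factors given at the end of the previous subsection, where \(P^{*}=\{2\}\), \(S^{*}=\emptyset\) and \(Q^{*}=\{2,3\}\), Lemma 3.2 shows that for \(\delta\leq|x-y|\leq 2\delta\) the cutoff \(\Phi(x,y,\mathbf{\hat{x}})\) vanishes unless, in particular, \[|x-x_{2}|<\delta/2<|y-x_{2}|\qquad\text{ and }\qquad|x-x_{3}|,\,|y-x_{3}|<\epsilon/2,\] so that \(x_{2}\) is held close to \(x\), and away from \(y\), at the scale \(\delta\), while \(x_{3}\) is held close to both \(x\) and \(y\) only at the coarser scale \(\epsilon\). 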
### Factorisation of biscaled cutoffs Let \(\Phi\) be given by (3.3). We can define a _partial product_ of \(\Phi\) as a function of the form \[\Phi^{\prime}(x,y,\mathbf{\hat{x}})=\prod_{j\in T_{1}}g_{j}^{(1)}(x-x_{j})\prod_{j\in T_{2}}g_{j}^{(2)}(y-x_{j})\prod_{(k,l)\in R_{1}}f_{kl}(x_{k}-x_{l}) \tag{3.14}\] where \(T_{1},T_{2}\subset\{2,\dots,N\}\), \(R_{1}\subset\{(k,l):2\leq k<l\leq N\}\). We now define classes of partial products of \(\Phi\) which correspond to a cluster. Let \(T\) be an arbitrary cluster with \(1\in T\). \[\Phi(x,y,\mathbf{\hat{x}};T) =\prod_{j\in T^{*}}g_{j}^{(1)}(x-x_{j})\prod_{j\in T^{*}}g_{j}^{(2)}(y-x_{j})\prod_{\begin{subarray}{c}k,l\in T^{*}\\ k<l\end{subarray}}f_{kl}(x_{k}-x_{l}), \tag{3.15}\] \[\Phi(x,y,\mathbf{\hat{x}};T^{c}) =\prod_{\begin{subarray}{c}k,l\in T^{c}\\ k<l\end{subarray}}f_{kl}(x_{k}-x_{l}), \tag{3.16}\] \[\Phi(x,y,\mathbf{\hat{x}};T,T^{c}) =\prod_{j\in T^{c}}g_{j}^{(1)}(x-x_{j})\prod_{j\in T^{c}}g_{j}^{(2)}(y-x_{j})\prod_{\begin{subarray}{c}k\in T^{*}\\ l\in T^{c}\end{subarray}}f_{kl}(x_{k}-x_{l}). \tag{3.17}\] Formulae (3.15) and (3.16) are consistent since the former concerns clusters containing \(1\) and the latter concerns clusters not containing \(1\). If we consider both \(x\) and \(y\) to adopt the role of particle \(1\) then these functions can be interpreted as \(\Phi(\,\cdot\,;T)\) involving pairs of particles in \(T\), \(\Phi(\,\cdot\,;T^{c})\) involving pairs in \(T^{c}\), and \(\Phi(\,\cdot\,;T,T^{c})\) involving pairs where one lies in \(T\) and another lies in \(T^{c}\). **Lemma 3.3**.: _Given a biscaled cutoff \(\Phi\) and any cluster \(T\) with \(1\in T\) we have_ \[\Phi(x,y,\mathbf{\hat{x}})=\Phi(x,y,\mathbf{\hat{x}};T)\Phi(x,y,\mathbf{\hat{x}};T,T^{c})\Phi(x,y,\mathbf{\hat{x}};T^{c}). \tag{3.18}\] Proof.: This identity follows from the definitions (3.15)-(3.17) and the equality \[\prod_{2\leq k<l\leq N}f_{kl}(x_{k}-x_{l})=\prod_{\begin{subarray}{c}k,l\in T^{*}\\ k<l\end{subarray}}f_{kl}(x_{k}-x_{l})\prod_{\begin{subarray}{c}k\in T^{*}\\ l\in T^{c}\end{subarray}}f_{kl}(x_{k}-x_{l})\prod_{\begin{subarray}{c}k,l\in T^{c}\\ k<l\end{subarray}}f_{kl}(x_{k}-x_{l}),\] since for any \(2\leq k<l\leq N\) we have \(f_{kl}=f_{lk}\). Let \(Q=Q(\Phi)\). Then the partial product \(\Phi(\,\cdot\,;Q,Q^{c})\) consists of only \(\theta_{\epsilon}\) cutoff factors, as shown in the following lemma. **Lemma 3.4**.: _For \(Q=Q(\Phi)\),_ \[\Phi(x,y,\mathbf{\hat{x}};Q,Q^{c})=\prod_{j\in Q^{c}}\theta_{\epsilon}(x-x_{j})\prod_{j\in Q^{c}}\theta_{\epsilon}(y-x_{j})\prod_{\begin{subarray}{c}k\in Q^{*}\\ l\in Q^{c}\end{subarray}}\theta_{\epsilon}(x_{k}-x_{l}). \tag{3.19}\] Proof.: By the definition of the cluster \(Q(\Phi)\) we have \(g_{j}^{(1)}=g_{j}^{(2)}=\theta_{\epsilon}\) for each \(j\in Q^{c}\), and \(f_{kl}=\theta_{\epsilon}\) for each \(k\in Q^{*}\) and \(l\in Q^{c}\). ### Derivatives of biscaled cutoffs The definition of cluster derivatives, (1.16), is modestly extended to allow action on biscaled cutoffs. For any cluster \(T\) we define the following three cluster derivatives which can act on functions of \(x\), \(y\) and \(\mathbf{\hat{x}}\), such as \(\Phi\). 
We set \[D^{\alpha}_{x,T}=\partial^{\alpha}_{x}+\sum_{j\in T^{*}}\partial^{\alpha}_{x_{j}}\quad\text{and}\quad D^{\alpha}_{y,T}=\partial^{\alpha}_{y}+\sum_{j\in T^{*}}\partial^{\alpha}_{x_{j}}\quad\text{for $\alpha\in\mathbb{N}_{0}^{3}$, $|\alpha|=1$} \tag{3.20}\] \[D^{\alpha}_{x,y,T}=\partial^{\alpha}_{x}+\partial^{\alpha}_{y}+\sum_{j\in T^{*}}\partial^{\alpha}_{x_{j}}\quad\text{for $\alpha\in\mathbb{N}_{0}^{3}$, $|\alpha|=1$}, \tag{3.21}\] which are extended to higher order multiindices \(\alpha\in\mathbb{N}_{0}^{3}\) by successive application of first-order derivatives, as in (1.16). The following lemma gives partial derivative estimates of the cutoff factors. We will require the following elementary result. For each \(\sigma\in\mathbb{N}_{0}^{3}\) and \(s\in\mathbb{R}\) there exists \(C>0\) such that for any \(z_{0}\in\mathbb{R}^{3}\) we have \(\big{|}\partial^{\sigma}_{z}|z+z_{0}|^{s}\big{|}\leq C|z+z_{0}|^{s-|\sigma|}\) for all \(z\in\mathbb{R}^{3}\), \(z\neq-z_{0}\). Recall the function \(\mathds{1}_{t}\) was defined in (3.4). **Lemma 3.5**.: _For any \(\sigma\in\mathbb{N}_{0}^{3}\) with \(|\sigma|\geq 1\) and any \(t>0\) there exists \(C\), depending on \(\sigma\) but independent of \(t\), such that_ \[|\partial^{\sigma}\zeta_{t}(z)|,\,|\partial^{\sigma}\theta_{t}(z)|\leq Ct^{-|\sigma|}\mathds{1}_{t}(z) \tag{3.22}\] _for all \(z\in\mathbb{R}^{3}\)._ Proof.: Without loss we consider the case of \(\theta_{t}\), the case of \(\zeta_{t}\) being similar. In the following, \(\chi^{(j)}\) refers to the \(j\)-th (univariate) derivative of the function \(\chi\) defined in (3.1). Now, since \(|\sigma|\geq 1\) the chain rule shows that \(\partial^{\sigma}\theta_{t}(z)\) can be written as a sum of terms of the form \[\Big{(}\frac{4N}{t}\Big{)}^{m}\,\chi^{(m)}\Big{(}\frac{4N|z|}{t}\Big{)}\,\partial_{z}^{\sigma_{1}}|z|\,\dots\,\partial_{z}^{\sigma_{m}}|z| \tag{3.23}\] where \(1\leq m\leq|\sigma|\), and \(\sigma_{1},\dots,\sigma_{m}\in\mathbb{N}_{0}^{3}\) are non-zero multiindices obeying \[\sigma_{1}+\dots+\sigma_{m}=\sigma.\] Since \(m\geq 1\) we have that if \(\chi^{(m)}(s)\neq 0\) then \(s\in(1,2)\), and therefore for any term (3.23) to be non-zero we require that \[(4N)^{-1}t<|z|<(2N)^{-1}t. \tag{3.24}\] By the remark preceding the current lemma, there exists \(C\), dependent on \(\sigma_{1},\dots,\sigma_{m}\), such that \[\big{|}\partial_{z}^{\sigma_{1}}|z|\,\dots\,\partial_{z}^{\sigma_{m}}|z|\big{|}\leq C|z|^{m-|\sigma|}\leq C(4N)^{|\sigma|-m}t^{m-|\sigma|},\] using (3.24). Therefore, the terms (3.23) can readily be bounded to give the desired result. We now give bounds for the cluster derivatives (3.20)-(3.21) acting on cutoffs. **Lemma 3.6**.: _Let \(\Phi\) be any biscaled cutoff and let \(Q=Q(\Phi)\). Then_ \[D_{x,y,Q}^{\alpha}\Phi(\,\cdot\,;Q)\equiv 0 \tag{3.25}\] _for all \(\alpha\in\mathbb{N}_{0}^{3}\) with \(|\alpha|\geq 1\)._ Proof.: By the chain rule, each function in the product (3.15) for \(\Phi(\,\cdot\,;Q)\) has zero derivative upon action of \(D_{x,y,Q}^{\alpha}\). Recall that the notion of partial products was defined in (3.14) and the function \(M_{t}\) for \(t>0\) was defined in (3.6). **Lemma 3.7**.: _Let \(\delta\leq(4N)^{-1}\epsilon\) and \(\Phi=\Phi_{\delta,\epsilon}\) be a biscaled cutoff. Let \(Q=Q(\Phi)\). 
Then for any multiindex \(\boldsymbol{\alpha}\in\mathbb{N}_{0}^{3N+3}\) there exists \(C\), dependent on \(\boldsymbol{\alpha}\) but independent of \(\delta\) and \(\epsilon\), such that for any partial products \(\Phi^{\prime}\) of \(\Phi\) we have_ \[|\partial^{\boldsymbol{\alpha}}\Phi^{\prime}(x,y,\mathbf{\hat{x}})|\leq \begin{cases}C\epsilon^{-|\boldsymbol{\alpha}|}&\text{ if }\Phi^{\prime}=\Phi(\,\cdot\,;Q,Q^{c})\\ C\big{(}\epsilon^{-|\boldsymbol{\alpha}|}+\delta^{-|\boldsymbol{\alpha}|}M_{ \delta}(x,y,\mathbf{\hat{x}})\big{)}&\text{ otherwise.}\end{cases} \tag{3.26}\] Proof.: Lemma 3.5 gives bounds for the partial derivatives of the function \(\zeta_{\delta},\theta_{\delta},\zeta_{\epsilon},\theta_{\epsilon}\). Considering \(\theta_{\delta}\zeta_{\epsilon}\), we apply the Leibniz rule with \(\sigma\in\mathbb{N}_{0}^{3}\), \(|\sigma|\geq 1\), to obtain \[\partial^{\sigma}(\theta_{\delta}\zeta_{\epsilon})(z) =\sum_{\mu\leq\sigma}\binom{\sigma}{\mu}\partial^{\mu}\theta_{ \delta}(z)\partial^{\sigma-\mu}\zeta_{\epsilon}(z) \tag{3.27}\] \[=\partial^{\sigma}\theta_{\delta}(z)+\partial^{\sigma}\zeta_{ \epsilon}(z),\] since, for each \(\mu\leq\sigma\) with \(\mu\neq 0\) and \(\mu\neq\sigma\) we have \[\partial^{\mu}\theta_{\delta}(z)\,\partial^{\sigma-\mu}\zeta_{\epsilon}(z)\equiv 0\] by Lemma 3.5 and the definition (3.4). Now, to evaluate \(\partial^{\boldsymbol{\alpha}}\Phi^{\prime}\) we apply the Leibniz rule to the product (3.14) using (3.27) where appropriate. Differentiated cutoff factors are bounded by (3.22) and any remaining undifferentiated cutoff factors are bounded above by \(1\). If \(\Phi^{\prime}=\Phi(\,\cdot\,;Q,Q^{c})\), all cutoff factors are of the form \(\theta_{\epsilon}\) by Lemma 3.4 and therefore we need only use the bounds in (3.22) with \(t=\epsilon\). The derivative \(D_{x,y,Q}\) acting on \(\Phi\) is special in that it contributes only powers of \(\epsilon\) (and not \(\delta\)) to the bounds. This is shown in the next lemma. **Lemma 3.8**.: _Let \(\delta\leq(4N)^{-1}\epsilon\) and \(\Phi=\Phi_{\delta,\epsilon}\) be a biscaled cutoff. Let \(Q=Q(\Phi)\). For any multiindices \(\alpha\in\mathbb{N}_{0}^{3}\) and \(\boldsymbol{\sigma}\in\mathbb{N}_{0}^{3N+3}\) there exists \(C\), independent of \(\epsilon\) and \(\delta\), such that_ \[|\partial^{\boldsymbol{\sigma}}D_{x,y,Q}^{\alpha}\Phi(x,y,\mathbf{\hat{x}})| \leq C\epsilon^{-|\alpha|}\big{(}\epsilon^{-|\boldsymbol{\sigma}|}+\delta^{-| \boldsymbol{\sigma}|}M_{\delta}(x,y,\mathbf{\hat{x}})\big{)}\] _for all \(x,y,\mathbf{\hat{x}}\)._ Proof.: First, set \(\Phi^{\prime}=\Phi(\,\cdot\,;Q)\,\Phi(\,\cdot\,;Q^{c})\) and \(\Phi^{\prime\prime}=\Phi(\,\cdot\,;Q,Q^{c})\). We then have \[D_{x,y,Q}^{\alpha}\Phi=\Phi^{\prime}\,D_{x,y,Q}^{\alpha}\Phi^{\prime\prime}\] which follows from Lemmas 3.3 and 3.6 and that \(\Phi(\,\cdot\,;Q^{c})\) is not dependent on variables involved in the \(D_{x,y,Q}^{\alpha}\)-derivative. By the definition (3.21), the derivative \(D_{x,y,Q}^{\alpha}\Phi^{\prime\prime}\) can be written as a sum of partial derivatives of the form \(\partial^{\boldsymbol{\alpha}}\Phi^{\prime\prime}\) where \(\boldsymbol{\alpha}\in\mathbb{N}_{0}^{3N+3}\) obeys \(|\boldsymbol{\alpha}|=|\alpha|\). 
Now, by the Leibniz rule and Lemma 3.7 there exist constants \(C\) and \(C^{\prime}\), independent of \(\delta\) and \(\epsilon\), such that \[|\partial^{\boldsymbol{\sigma}}(\Phi^{\prime}\,\partial^{\boldsymbol{\alpha}}\Phi^{\prime\prime})| \leq\sum_{\boldsymbol{\tau}\leq\boldsymbol{\sigma}}\binom{\boldsymbol{\sigma}}{\boldsymbol{\tau}}|\partial^{\boldsymbol{\tau}}\Phi^{\prime}|\,|\partial^{\boldsymbol{\sigma}-\boldsymbol{\tau}+\boldsymbol{\alpha}}\Phi^{\prime\prime}|\] \[\leq C\sum_{\boldsymbol{\tau}\leq\boldsymbol{\sigma}}(\epsilon^{-|\boldsymbol{\tau}|}+\delta^{-|\boldsymbol{\tau}|}M_{\delta})\epsilon^{-|\boldsymbol{\sigma}|+|\boldsymbol{\tau}|-|\boldsymbol{\alpha}|}\] \[\leq C^{\prime}\epsilon^{-|\boldsymbol{\alpha}|}(\epsilon^{-|\boldsymbol{\sigma}|}+\delta^{-|\boldsymbol{\sigma}|}M_{\delta}),\] completing the proof. ## 4. Proof of Theorem 1.1 The idea of the proof is to turn partial derivatives of an integral, such as the density matrix \(\gamma(x,y)\) weighted with a suitable cutoff, into cluster derivatives under the integral. For the density matrix we then estimate the resulting integrals involving cluster derivatives of \(\psi\) using the pointwise bounds of Theorem 1.3. Although Theorem 1.1 is stated using partial derivatives of the density matrix \(\gamma(x,y)\) in the \(x\)- and \(y\)-variables, it is more appropriate to consider directional derivatives. Indeed, we define new variables \[u =(x+y)/2, \tag{4.1}\] \[v =(x-y)/2, \tag{4.2}\] and consider the \(\partial_{u}^{\alpha}\partial_{v}^{\beta}\)-derivatives of the density matrix. The \(\partial_{u}\)-derivatives act along the direction parallel to the diagonal and it is found that they do not affect the non-smoothness at the diagonal, regardless of how many of these derivatives are taken. The \(\partial_{v}\)-derivatives act in the direction perpendicular to the diagonal and these derivatives are found to contribute to worsening the non-smoothness at the diagonal. Consider derivatives of the form \(\partial_{x}^{\alpha}\partial_{y}^{\beta}\gamma(x,y)\) where \(|\alpha|+|\beta|=2\). As discussed in [5], the case where \(|\alpha|=|\beta|=1\) (the _mixed_ derivatives) is particularly well-behaved in contrast to the other cases. The reason behind this is that when \(|\alpha|=|\beta|=1\), differentiation under the integral leads to both \(\psi\) factors being differentiated exactly once. The greater regularity in this case follows from the well-known fact that \(\psi,\nabla\psi\in L^{\infty}_{loc}(\mathbb{R}^{3N})\), first proven in [12], whereas higher order derivatives of \(\psi\) do not have this local boundedness. This is used in the proof in the following way. In (4.34) below, it will be shown that derivatives of the form \(\partial_{v}^{\sigma}\) for \(|\sigma|=2\) can be written in terms of \(\partial_{u}^{\sigma}\) and the mixed derivatives \(\partial_{x}^{\alpha}\partial_{y}^{\beta}\gamma(x,y)\) for some \(|\alpha|=|\beta|=1\). The benefit of this identity is that two \(v\)-derivatives (which act to worsen the singularity at the diagonal) have been transformed into two \(u\)-derivatives (which do not worsen the singularity) along with mixed derivatives which have good regularity. This method only works for two \(v\)-derivatives, and the strategy of using cluster derivatives, as described above, must be used in conjunction. The difficulties encountered in the fifth derivative of the density matrix are described in a later section. 
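As a brief illustration of this change of variables, note that for a function of \((x,y)\) composed with \(x=u+v\), \(y=u-v\), the chain rule gives \[\partial_{u_{i}}=\partial_{x_{i}}+\partial_{y_{i}},\qquad\partial_{v_{i}}=\partial_{x_{i}}-\partial_{y_{i}},\qquad i=1,2,3,\] and hence \[\partial_{v_{i}}\partial_{v_{j}}=\partial_{u_{i}}\partial_{u_{j}}-2\big{(}\partial_{x_{i}}\partial_{y_{j}}+\partial_{y_{i}}\partial_{x_{j}}\big{)}.\] This elementary identity is the kind of rewriting alluded to above; the precise statement used in the proof is given in (4.34) below. 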
### Density matrix notation

The proof will require auxiliary functions related to the density matrix which we introduce now. For \(l,m\in\mathbb{N}_{0}^{3}\) with \(|l|,|m|\leq 1\) define, \[\gamma_{l,m}(x,y)=\int_{\mathbb{R}^{3N-3}}\partial_{x}^{l}\psi(x,\mathbf{\hat{x}})\overline{\partial_{y}^{m}\psi(y,\mathbf{\hat{x}})}\,d\mathbf{\hat{x}}. \tag{4.3}\] In this notation, it is clear that \(\gamma=\gamma_{0,0}\). For any biscaled cutoff \(\Phi\), defined in (3.3), we set \[\gamma_{l,m}(x,y;\Phi)=\int_{\mathbb{R}^{3N-3}}\partial_{x}^{l}\psi(x,\mathbf{\hat{x}})\overline{\partial_{y}^{m}\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}},\] and define \(\gamma(\,\cdot\,;\Phi)=\gamma_{0,0}(\,\cdot\,;\Phi)\). We will consider the above functions in the variables (4.1) and (4.2). It is then natural to define for all \(u,v\in\mathbb{R}^{3}\), \[\tilde{\gamma}_{l,m}(u,v) =\gamma_{l,m}(u+v,u-v), \tag{4.4}\] \[\tilde{\gamma}_{l,m}(u,v;\Phi) =\gamma_{l,m}(u+v,u-v;\Phi). \tag{4.5}\]

### Integrals involving \(f_{\infty}\)

The following proposition is a restatement of [5, Lemma 5.1] and is proven in that paper. Notice that the function \(M_{t}\) was defined in (3.6) and has a slightly different form to the corresponding function used in the paper. **Proposition 4.1**.: _Given \(R>0\), there exists \(C\) such that_ \[\int_{\mathbb{R}^{3N-3}}f_{\infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)\,d\mathbf{\hat{x}}\leq C\left\|\rho\right\|_{L^{1}(B(x,2R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(y,2R))}^{1/2} \tag{4.6}\] _for all \(x,y\in\mathbb{R}^{3}\). In addition, given \(G\in L^{1}(\mathbb{R}^{3})\) there exists \(C\), independent of \(G\), such that_ \[\int_{\mathbb{R}^{3N-3}}\big{(}|G(x_{j}-x_{k})|+|G(z-x_{k})|+|G(x_{j})|\big{)}f_{\infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)\,d\mathbf{\hat{x}}\\ \leq C\left\|G\right\|_{L^{1}(\mathbb{R}^{3})}\left\|\rho\right\|_{L^{1}(B(x,2R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(y,2R))}^{1/2} \tag{4.7}\] _for all \(x,y,z\in\mathbb{R}^{3}\), and \(j,k=2,\ldots,N\), \(j\neq k\). In particular, for any \(t>0\) there exists \(C\), independent of \(t\), such that_ \[\int_{\mathbb{R}^{3N-3}}M_{t}(x,y,\mathbf{\hat{x}})f_{\infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)\,d\mathbf{\hat{x}}\leq Ct^{3}\left\|\rho\right\|_{L^{1}(B(x,2R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(y,2R))}^{1/2} \tag{4.8}\] _for all \(x,y\in\mathbb{R}^{3}\)._ We introduce the following quantities based on the bounded distances (1.21). For any \(x,y\in\mathbb{R}^{3}\) and \(\mathbf{\hat{x}}\in\mathbb{R}^{3N-3}\) define \[\lambda(x,y,\mathbf{\hat{x}}) =\min\{\lambda_{P}(x,\mathbf{\hat{x}}),\,\lambda_{S^{*}}(x,\mathbf{\hat{x}}),\,\lambda_{P^{*}}(y,\mathbf{\hat{x}}),\,\lambda_{S}(y,\mathbf{\hat{x}})\}, \tag{4.9}\] \[\pi(x,y,\mathbf{\hat{x}}) =\min\{\lambda_{Q}(x,\mathbf{\hat{x}}),\,\lambda_{Q}(y,\mathbf{\hat{x}})\}. \tag{4.10}\] Later, we will see that these quantities appear to negative powers when we apply Theorem 1.3 to cluster derivatives of \(\psi\) involving the clusters \(P,S\) and \(Q\). The next lemma will give conditions for when these quantities can be bounded away from zero on the support of \(\Phi\). Beforehand, we give alternative formulae for (4.10) and (4.9). Recall that \(1\in P,S,Q\) by definition. Using (1.20) we find that \[\pi(x,y,\mathbf{\hat{x}})=\min\{1,\ |x|,\ |y|,\ |x_{j}|:j\in Q^{*},\ 2^{-1/2}|x-x_{k}|:k\in Q^{c},\\ 2^{-1/2}|y-x_{k}|:k\in Q^{c},\ 2^{-1/2}|x_{j}-x_{k}|:j\in Q^{*},k\in Q^{c}\}. 
\tag{4.11}\] In the case of \(P^{*}\cap S^{*}=\emptyset\) we similarly find that \[\lambda(x,y,\mathbf{\hat{x}})=\min\{1,\ |x|,\ |y|,\ |x_{j}|:j\in P^{*}\cup S^{*},\ 2^{-1/2}|x-x_{k}|:k\in P^{c},\\ 2^{-1/2}|y-x_{k}|:k\in S^{c},\ 2^{-1/2}|x_{j}-x_{k}|:(j,k)\in(P^{*}\times P^{c})\cup(S^{*}\times S^{c})\}. \tag{4.12}\] **Lemma 4.2**.: _Let \(\delta\leq(4N)^{-1}\epsilon\) and \(\epsilon\leq 1\), and let \(\Phi=\Phi_{\delta,\epsilon}\) be an arbitrary biscaled cutoff. Let \(x,y\in\mathbb{R}^{3}\) be such that \(\delta\leq|x-y|\leq 2\delta\), and \(|x|,|y|\geq\epsilon\). Then there exists a constant \(C\), dependent only on \(N\), such that_ \[\pi(x,y,\mathbf{\hat{x}}) \geq C\epsilon \tag{4.13}\] \[\lambda(x,y,\mathbf{\hat{x}}) \geq C\delta \tag{4.14}\] _whenever \(\Phi(x,y,\mathbf{\hat{x}})\neq 0\). In addition, for all \(b\geq 0\) with \(b\neq 3\) and \(R>0\) there exists \(C\), depending on \(b\) and \(R\) but independent of \(\delta,\epsilon,x\) and \(y\), such that_ \[\int_{\mathrm{supp}\,\Phi(x,y,\,\cdot)}\lambda(x,y,\mathbf{\hat{x}})^{-b}f_{\infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)\,d\mathbf{\hat{x}}\\ \leq C(\epsilon^{-b}+h_{b}(\delta))\left\|\rho\right\|_{L^{1}(B(x,2R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(y,2R))}^{1/2} \tag{4.15}\] _where, for all \(t>0\), we define_ \[h_{b}(t)=\begin{cases}0&\text{if }b<3\\ t^{3-b}&\text{if }b>3.\end{cases}\] The following corollary will be useful later. **Corollary 4.3**.: _There exists \(C\), depending on \(R\) but independent of \(\delta\) and \(\epsilon\), such that_ \[\sum_{r=1}^{2}\int_{\mathbb{R}^{3N-3}}\lambda(x,y,\mathbf{\hat{x}})^{-2}f_{\infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)|\nabla^{r}\Phi(x,y,\mathbf{\hat{x}})|\,d\mathbf{\hat{x}}\\ \leq C\epsilon^{-3}\left\|\rho\right\|_{L^{1}(B(x,2R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(y,2R))}^{1/2} \tag{4.16}\] _for all \(x,y\in\mathbb{R}^{3}\) with \(\delta\leq|x-y|\leq 2\delta\) and \(|x|,|y|\geq\epsilon\)._ Proof of Corollary 4.3.: When \(r=1\) the bound for the integral is immediate. Using Lemma 3.8 and (4.14), the integral in (4.16) for \(r=2\) can be bounded by some constant multiplying \[\int_{\mathrm{supp}\,\Phi(x,y,\,\cdot)}\big{(}\epsilon^{-1}\lambda(x,y,\mathbf{\hat{x}})^{-2}+\delta^{-3}M_{\delta}(x,y,\mathbf{\hat{x}})\big{)}f_{\infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)\,d\mathbf{\hat{x}}\] The required bound follows from (4.15) of Lemma 4.2, (4.8) of Proposition 4.1 and that \(\epsilon<1\). Proof of Lemma 4.2.: By Lemma 3.2 we need only consider \(\Phi\) with \(P^{*}\cap S^{*}=\emptyset\). By the definition of the cluster \(Q\), if \(j\in Q^{*}\) and \(k\in Q^{c}\) then \(f_{jk}=\theta_{\epsilon}\) in the formula (3.3) defining \(\Phi\). Similarly, if \(k\in Q^{c}\) then \(g_{k}^{(1)}=g_{k}^{(2)}=\theta_{\epsilon}\). Using the support criteria of \(\theta_{\epsilon}\) we therefore get \[|x-x_{k}|,\,|y-x_{k}|,\,|x_{j}-x_{k}|\geq(4N)^{-1}\epsilon\qquad j\in Q^{*},k\in Q^{c}. \tag{4.17}\] In addition, if \(j\in Q^{*}\) we get \[|x_{j}|\geq|x|-|x-x_{j}|\geq\epsilon/2\qquad j\in Q^{*} \tag{4.18}\] since for such \(j\) we have \(|x-x_{j}|\leq\epsilon/2\) by Lemma 3.2. The lower bound (4.13) then follows from the formula (4.11) for \(\pi(x,y,\mathbf{\hat{x}})\). In a similar way we consider the clusters \(P\) and \(S\). Indeed, let \(j\in P^{*}\), \(k\in P^{c}\) or \(j\in S^{*}\), \(k\in S^{c}\), then \(f_{jk}\neq\zeta_{\delta}\). 
Similarly, if \(k\in P^{c}\) then \(g_{k}^{(1)}\neq\zeta_{\delta}\), and if \(k\in S^{c}\) then \(g_{k}^{(2)}\neq\zeta_{\delta}\). Notice that if a cutoff factor is not \(\zeta_{\delta}\) then it must either be \(\theta_{\delta}\zeta_{\epsilon}\) or \(\theta_{\epsilon}\), both of which are only supported away from zero. Therefore by the support criteria of these factors we obtain the following inequalities, \[|x-x_{k}|,|y-x_{l}|\geq(4N)^{-1}\delta\qquad k\in P^{c},\,l\in S^{c} \tag{4.19}\] \[|x_{j}-x_{k}|\geq(4N)^{-1}\delta\qquad j\in P^{*},\,k\in P^{c}\text{ or }j\in S^{*},\,k\in S^{c}. \tag{4.20}\] We also have (4.18) for \(j\in P^{*}\cup S^{*}\) since \(P,S\subset Q\). The lower bound (4.14) then follows from the formula (4.12) for \(\lambda(x,y,\mathbf{\hat{x}})\). We recall that the function \(\mathds{1}^{\prime}_{\delta}\) was defined in (3.5). By the formula (4.12) we can use (4.18) to obtain some \(C\), depending on \(b\), such that \[\lambda(x,y,\mathbf{\hat{x}})^{-b}\leq C\Big{(}\epsilon^{-b}+\sum_{k\in P^{c}}\mathds{1}^{\prime}_{\delta}(x-x_{k})\,|x-x_{k}|^{-b}+\sum_{k\in S^{c}}\mathds{1}^{\prime}_{\delta}(y-x_{k})\,|y-x_{k}|^{-b}\\ +\sum_{\begin{subarray}{c}(j,k)\in(P^{*}\times P^{c})\\ \cup(S^{*}\times S^{c})\end{subarray}}\mathds{1}^{\prime}_{\delta}(x_{j}-x_{k})\,|x_{j}-x_{k}|^{-b}\Big{)}, \tag{4.21}\] where the upper bounds in the indicator functions (3.5) can be included because \(1\) lies in the minimum (4.12) and the lower bounds follow from (4.18)-(4.20). The bound (4.15) then follows from the above inequality along with both (4.6) and (4.7) of Proposition 4.1, where we choose \(G\) to be the function \(G(z)=\mathds{1}^{\prime}_{\delta}(z)|z|^{-b}\) for \(z\in\mathbb{R}^{3}\).

### Differentiating the density matrix - some required bounds

We collect certain results which will be used throughout this section. Let \(\delta\leq(4N)^{-1}\epsilon\) and \(\epsilon\leq 1\) and suppose \(\Phi=\Phi_{\delta,\epsilon}\) is a biscaled cutoff as defined in (3.3). As usual, we denote \(Q=Q(\Phi)\), \(P=P(\Phi)\) and \(S=S(\Phi)\). Let \(\eta\in\mathbb{N}_{0}^{3}\) and \(\boldsymbol{\nu}=(\nu_{1},\nu_{2})\in\mathbb{N}_{0}^{6}\) be arbitrary. Firstly, we define \[\Phi^{(\eta,\boldsymbol{\nu})}(x,y,\mathbf{\hat{x}})=D^{\eta}_{x,y,Q}D^{\nu_{1}}_{x,P}D^{\nu_{2}}_{y,S}\Phi(x,y,\mathbf{\hat{x}}) \tag{4.22}\] where in the notation it is implicit that the clusters used are \(Q,P\) and \(S\) corresponding to \(\Phi\). By Lemma 3.8 and the definition of cluster derivatives (1.16) it can be shown that for each \(\eta\) and \(\boldsymbol{\nu}\) there exists \(C\), independent of \(\delta\) and \(\epsilon\), such that \[|\Phi^{(\eta,\boldsymbol{\nu})}(x,y,\mathbf{\hat{x}})|\leq C\epsilon^{-|\eta|}\big{(}\epsilon^{-|\boldsymbol{\nu}|}+\delta^{-|\boldsymbol{\nu}|}M_{\delta}(x,y,\mathbf{\hat{x}})\big{)} \tag{4.23}\] for all \(x,y\in\mathbb{R}^{3}\) and \(\mathbf{\hat{x}}\in\mathbb{R}^{3N-3}\). Now take any \(x,y\in\mathbb{R}^{3}\) with \(\delta\leq|x-y|\leq 2\delta\) and \(|x|,|y|\geq\epsilon\), and suppose \(\Phi(x,y,\mathbf{\hat{x}})\neq 0\). Then, as a consequence of Lemma 4.2, both \(\pi(x,y,\mathbf{\hat{x}})\) and \(\lambda(x,y,\mathbf{\hat{x}})\) are positive. Therefore, by the definitions (4.9), (4.10) and (1.20), (1.21), \[(x,\mathbf{\hat{x}})\in\Sigma_{Q}^{c}\cap\Sigma_{P}^{c}\cap\Sigma_{S^{*}}^{c}\quad\text{and}\quad(y,\mathbf{\hat{x}})\in\Sigma_{Q}^{c}\cap\Sigma_{P^{*}}^{c}\cap\Sigma_{S}^{c}. \tag{4.24}\] This allows us to apply Theorem 1.3. 
Indeed, for every \(\eta\in\mathbb{N}_{0}^{3}\), \(\boldsymbol{\nu}=(\nu_{1},\nu_{2})\in\mathbb{N}_{0}^{6}\) and all \(R>0\) there exists \(C\), independent of our choice of \(x,y\) and \(\mathbf{\hat{x}}\), such that for \(k=0,1\), \[\big{|}D_{Q}^{\eta}D_{\{P,S^{*}\}}^{\boldsymbol{\nu}}\nabla^{k}\psi(x,\mathbf{\hat{x}})\big{|} \leq C\pi(x,y,\mathbf{\hat{x}})^{-|\eta|}\lambda(x,y,\mathbf{\hat{x}})^{-|\boldsymbol{\nu}|}f_{\infty}(x,\mathbf{\hat{x}};R), \tag{4.25}\] \[\big{|}D_{Q}^{\eta}D_{\{P^{*},S\}}^{\boldsymbol{\nu}}\nabla^{k}\psi(y,\mathbf{\hat{x}})\big{|} \leq C\pi(x,y,\mathbf{\hat{x}})^{-|\eta|}\lambda(x,y,\mathbf{\hat{x}})^{-|\boldsymbol{\nu}|}f_{\infty}(y,\mathbf{\hat{x}};R). \tag{4.26}\] For convenience, we choose a bound which holds for both values of \(k\). The functions \(\lambda\) and \(\pi\) are defined in (4.9) and (4.10) respectively. Now, let \(|\boldsymbol{\nu}|\geq 1\). By Theorem 1.3 with \(b=1/2\) we can write \[D_{\{P,S^{*}\}}^{\boldsymbol{\nu}}\nabla\psi(x,\mathbf{\hat{x}}) =G_{\{P,S^{*}\}}^{\boldsymbol{\nu}}(x,\mathbf{\hat{x}})+\psi(x,\mathbf{\hat{x}})\big{(}D_{\{P,S^{*}\}}^{\boldsymbol{\nu}}\nabla F_{c}(x,\mathbf{\hat{x}})\big{)} \tag{4.27}\] \[D_{\{P^{*},S\}}^{\boldsymbol{\nu}}\nabla\psi(y,\mathbf{\hat{x}}) =G_{\{P^{*},S\}}^{\boldsymbol{\nu}}(y,\mathbf{\hat{x}})+\psi(y,\mathbf{\hat{x}})\big{(}D_{\{P^{*},S\}}^{\boldsymbol{\nu}}\nabla F_{c}(y,\mathbf{\hat{x}})\big{)} \tag{4.28}\] for functions \(G_{\{P,S^{*}\}}^{\boldsymbol{\nu}}\) and \(G_{\{P^{*},S\}}^{\boldsymbol{\nu}}\) which obey \[\big{|}G_{\{P,S^{*}\}}^{\boldsymbol{\nu}}(x,\mathbf{\hat{x}})\big{|} \leq C\lambda(x,y,\mathbf{\hat{x}})^{1/2-|\boldsymbol{\nu}|}f_{\infty}(x,\mathbf{\hat{x}};R), \tag{4.29}\] \[\big{|}G_{\{P^{*},S\}}^{\boldsymbol{\nu}}(y,\mathbf{\hat{x}})\big{|} \leq C\lambda(x,y,\mathbf{\hat{x}})^{1/2-|\boldsymbol{\nu}|}f_{\infty}(y,\mathbf{\hat{x}};R), \tag{4.30}\] for some \(C\), dependent on \(\boldsymbol{\nu}\) but independent of the choice of \(x,y\) and \(\mathbf{\hat{x}}\). By Lemma 2.3 and (2.2), for each \(\boldsymbol{\nu}\in\mathbb{N}_{0}^{6}\) there exists \(C\), independent of \(x,y\) and \(\mathbf{\hat{x}}\), such that \[\big{|}D_{\{P,S^{*}\}}^{\boldsymbol{\nu}}\nabla F_{c}(x,\mathbf{\hat{x}})\big{|}+\big{|}D_{\{P^{*},S\}}^{\boldsymbol{\nu}}\nabla F_{c}(y,\mathbf{\hat{x}})\big{|}\leq C\lambda(x,y,\mathbf{\hat{x}})^{-|\boldsymbol{\nu}|}. \tag{4.31}\]

### Differentiating the density matrix - first and second derivatives

In the following, let \(l,m\in\mathbb{N}_{0}^{3}\) obey \(|l|=|m|=1\). In standard notation, by \(\partial_{x_{1}}^{l}\psi\) we mean the \(l\)-partial derivative in the first \(\mathbb{R}^{3}\) component of \(\psi\). Then by differentiation under the integral, \[\partial_{u}^{l}\tilde{\gamma}(u,v) =\int_{\mathbb{R}^{3N-3}}\partial_{x_{1}}^{l}\psi(u+v,\mathbf{\hat{x}})\overline{\psi(u-v,\mathbf{\hat{x}})}\,d\mathbf{\hat{x}}+\int_{\mathbb{R}^{3N-3}}\psi(u+v,\mathbf{\hat{x}})\overline{\partial_{x_{1}}^{l}\psi(u-v,\mathbf{\hat{x}})}\,d\mathbf{\hat{x}}\] \[=\tilde{\gamma}_{l,0}(u,v)+\tilde{\gamma}_{0,l}(u,v), \tag{4.32}\] and similarly, \[\partial_{v}^{l}\tilde{\gamma}(u,v)=\tilde{\gamma}_{l,0}(u,v)-\tilde{\gamma}_{0,l}(u,v) \tag{4.33}\] for all \(u,v\in\mathbb{R}^{3}\). The above equalities are used to obtain a formula relating second order \(u\)-derivatives to second order \(v\)-derivatives of \(\tilde{\gamma}\). 
Omitting the argument \((u,v)\) we get \[\big{(}\partial_{u}^{l+m}-\partial_{v}^{l+m}\big{)}\tilde{\gamma} =\partial_{u}^{l}(\tilde{\gamma}_{m,0}+\tilde{\gamma}_{0,m})- \partial_{v}^{l}(\tilde{\gamma}_{m,0}-\tilde{\gamma}_{0,m})\] \[=\big{(}\partial_{u}^{l}+\partial_{v}^{l}\big{)}\tilde{\gamma}_{ 0,m}+\big{(}\partial_{u}^{l}-\partial_{v}^{l}\big{)}\tilde{\gamma}_{m,0} \tag{4.34}\] \[=2(\tilde{\gamma}_{l,m}+\tilde{\gamma}_{m,l})\] where the final equality is obtained by differentiation under the integral, as in (4.32). ### Differentiating the density matrix - general derivatives Partial derivatives of \(\gamma\) are written as linear combinations of integrals involving cluster derivatives of \(\psi\). One such integral is bounded in the following lemma. Since it is more involved than the other such integrals, the proof is postponed until later in the section. **Lemma 4.4**.: _Let \(\delta\leq(4N)^{-1}\epsilon\) and \(\epsilon\leq 1\) and let \(\Phi=\Phi_{\delta,\epsilon}\) be an arbitrary biscaled cutoff. Let \(P=P(\Phi)\) and \(S=S(\Phi)\) obey \(P^{*}\cap S^{*}=\emptyset\). For any \(\boldsymbol{\alpha},\boldsymbol{\beta}\in\mathbb{N}_{0}^{6}\) with \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|=3\), any \(l,m\in\mathbb{N}_{0}^{3}\) with \(|l|=|m|=1\), and any \(R>0\) there exists \(C\) such that_ \[\Big{|}\int_{\mathbb{R}^{3N-3}}\big{(}D^{\boldsymbol{\alpha}}_{ \{P,S^{*}\}}\partial_{x}^{l}\psi(x,\mathbf{\hat{x}})\big{)}\big{(}D^{ \boldsymbol{\beta}}_{\{P^{*},S\}}\partial_{y}^{m}\psi(y,\mathbf{\hat{x}}) \big{)}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\Big{|}\\ \leq C\epsilon^{-3}\left\|\rho\right\|_{L^{1}(B(x,R))}^{1/2}\left\| \rho\right\|_{L^{1}(B(y,R))}^{1/2} \tag{4.35}\] _for all \(\delta\leq|x-y|\leq 2\delta\) and \(|x|,|y|\geq\epsilon\). The constant \(C\) depends on \(R\) but is independent of \(\delta,\epsilon\)._ In part two of the following lemma the conditions on \(\eta,\mu,l\) and \(m\) are not the most general, but for simplicity we restrict ourselves to these assumptions. We use notation of cluster derivatives of \(\Phi\) from (4.22). **Lemma 4.5**.: _Let \(\delta\leq(4N)^{-1}\epsilon\) and \(\epsilon\leq 1\) and \(\Phi=\Phi_{\delta,\epsilon}\) be an arbitrary biscaled cutoff. Let \(\eta,\mu\in\mathbb{N}_{0}^{3}\) be arbitrary and \(l,m\in\mathbb{N}_{0}^{3}\) be such that \(|l|,|m|\leq 1\)._ 1. _On the set of_ \(u,v\) _such that_ \(\delta/2\leq|v|\leq\delta\) _and_ \(|u+v|,|u-v|\geq\epsilon\)_, the derivative_ \(\partial_{u}^{\eta}\partial_{v}^{\mu}\tilde{\gamma}_{l,m}(u,v;\Phi)\) _is equal to a linear combination of integrals of the form_ (4.36) \[\int_{\mathbb{R}^{3N-3}}\big{(}D^{\chi_{1}}_{Q}D^{\boldsymbol{ \alpha}}_{\{P,S^{*}\}}\partial_{x_{1}}^{l}\psi(u+v,\mathbf{\hat{x}})\big{)} \big{(}D^{\chi_{2}}_{Q}D^{\boldsymbol{\beta}}_{\{P^{*},S\}}\partial_{x_{1}}^{ m}\psi(u-v,\mathbf{\hat{x}})\big{)}\cdot\\ \Phi^{(\chi_{3},\boldsymbol{\sigma})}(u+v,u-v,\mathbf{\hat{x}}) \,d\mathbf{\hat{x}}\] _where_ \(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\sigma}\in\mathbb{N}_{0}^ {6}\) _and_ \(\boldsymbol{\chi}=(\chi_{1},\chi_{2},\chi_{3})\in\mathbb{N}_{0}^{9}\) _obey_ \(|\boldsymbol{\chi}|=|\eta|\) _and_ \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|+|\boldsymbol{\sigma}|=|\mu|\)_._ 2. 
_Furthermore, suppose either_ * \(|\mu|\leq 2\)_, or_ * \(|\mu|=3\)_,_ \(\eta=0\) _and_ \(|l|=|m|=1\) _Then for all \(R>0\) we have some \(C_{0}\) such that_ (4.37) \[|\partial_{u}^{\eta}\partial_{v}^{\mu}\tilde{\gamma}_{l,m}(u,v;\Phi)|\leq C_{0} \epsilon^{-|\eta|-|\mu|}\left\|\rho\right\|_{L^{1}(B(u+v,R))}^{1/2}\|\rho\|_{L^ {1}(B(u-v,R))}^{1/2}\] _for all \(\delta/2\leq|v|\leq\delta\) and \(|u+v|,|u-v|\geq\epsilon\). The constant \(C_{0}\) depends on \(R\) but is independent of \(\delta,\epsilon\)._ Proof of i).: By Lemma 3.2, we need only consider \(\Phi\) such that \(P^{*}\cap S^{*}=\emptyset\). For each choice of \(u\) and \(v\) we define a \(u\)- and \(v\)-dependent change of variables for the integral \(\tilde{\gamma}_{lm}(u,v;\Phi)\) defined in (4.5). To start, we define two vectors \(\mathbf{\hat{a}}=(a_{2},\ldots,a_{N}),\mathbf{\hat{b}}=(b_{2},\ldots,b_{N})\in \mathbb{R}^{3N-3}\) by \[a_{k}=\begin{cases}u&\text{if }k\in Q^{*}\\ 0&\text{if }k\in Q^{c}\end{cases}\qquad\qquad b_{k}=\begin{cases}v&\text{if }k \in P^{*}\\ -v&\text{if }k\in S^{*}\\ 0&\text{if }k\in(P\cup S)^{c},\end{cases} \tag{4.38}\] and define \(\boldsymbol{\hat{\omega}}_{u,v}=\mathbf{\hat{a}}+\mathbf{\hat{b}}\). We then apply a translational change of variables which allows us to write \[\tilde{\gamma}_{lm}(u,v;\Phi)=\int_{\mathbb{R}^{3N-3}}\partial_{x_{1}}^{l}\psi (u+v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v})\overline{\partial_{x_{ 1}}^{m}\psi(u-v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v})}\Phi(u+v,u- v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v})\,d\mathbf{\hat{z}}. \tag{4.39}\] We will then apply differentiation under the integral. Beforehand, we show how such derivatives will act on each function within the integrand. For a function \(f\) and any \(r\in\mathbb{N}_{0}^{3}\) with \(|r|=1\) we see that by the chain rule \[\partial_{u}^{r}[f(u\pm v,\mathbf{\hat{z}}+\boldsymbol{\hat{ \omega}}_{u,v})] =D_{Q}^{r}f(u\pm v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v})\] \[\partial_{v}^{r}[f(u+v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega} }_{u,v})] =D_{P}^{r}f(u+v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v})-D_ {S^{*}}^{r}f(u+v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v})\] \[\partial_{v}^{r}[f(u-v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega} }_{u,v})] =D_{P^{*}}^{r}f(u-v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v})-D_ {S}^{r}f(u-v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v}).\] Applying repeatedly, we obtain for arbitrary \(\sigma,\nu\in\mathbb{N}_{0}^{3}\), \[\partial_{u}^{\sigma}\partial_{v}^{\nu}[f(u+v,\mathbf{\hat{z}} +\boldsymbol{\hat{\omega}}_{u,v})] =\sum_{\tau\leq\nu}c_{\tau,\nu}\big{(}D_{Q}^{\sigma}D_{P}^{\tau }D_{S^{*}}^{\nu-\tau}f(u+v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v}) \big{)}\] \[\partial_{u}^{\sigma}\partial_{v}^{\nu}[f(u-v,\mathbf{\hat{z}}+ \boldsymbol{\hat{\omega}}_{u,v})] =\sum_{\tau\leq\nu}c_{\tau,\nu}\big{(}D_{Q}^{\sigma}D_{P^{*}}^{ \tau}D_{S}^{\nu-\tau}f(u-v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v}) \big{)}.\] where \(c_{\tau,\nu}=(-1)^{|\nu|-|\tau|}\binom{\nu}{\tau}\). 
In a similar manner, for the cutoff we use the definitions (3.20)-(3.21) and (4.22) to write \[\partial_{u}^{\sigma}\partial_{v}^{\nu}[\Phi(u+v,u-v,\mathbf{\hat{z}}+ \boldsymbol{\hat{\omega}}_{u,v})]=\sum_{\tau\leq\nu}c_{\tau,\nu}\Phi^{(\sigma,\tau,\nu-\tau)}(u+v,u-v,\mathbf{\hat{z}}+\boldsymbol{\hat{\omega}}_{u,v}).\] By differentiating (4.39) under the integral, applying the Leibniz rule and reversing the change of variables, we find that \(\partial_{u}^{\eta}\partial_{v}^{\mu}\tilde{\gamma}_{lm}(u,v;\Phi)\) is a linear combination of terms of the required form. Proof of ii).: By part _(i)_ it suffices to prove the required bound for integrals of the form (4.36) with \(|\boldsymbol{\chi}|=|\eta|\) and \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|+|\boldsymbol{\sigma}|=|\mu|\). Rewriting such integrals in the variables \(x=u+v\) and \(y=u-v\) we get \[\int_{\mathbb{R}^{3N-3}}\big{(}D_{Q}^{\chi_{1}}D_{\{P,S^{*}\}}^{\boldsymbol{ \alpha}}\partial_{x_{1}}^{l}\psi(x,\mathbf{\hat{x}})\big{)}\big{(}D_{Q}^{\chi _{2}}D_{\{P^{*},S\}}^{\boldsymbol{\beta}}\partial_{x_{1}}^{m}\psi(y,\mathbf{ \hat{x}})\big{)}\Phi^{(\chi_{3},\boldsymbol{\sigma})}(x,y,\mathbf{\hat{x}})\,d \mathbf{\hat{x}}. \tag{4.40}\] Notice that the variables \(x\) and \(y\) must obey \(\delta\leq|x-y|\leq 2\delta\) and \(|x|,|y|\geq\epsilon\). Firstly, we bound the above integral in the case where \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|\leq 2\) and \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|+|\boldsymbol{\sigma}|\leq 3\) for any \(|l|,|m|\leq 1\). By (4.23), (4.25), (4.26) followed by (4.13) of Lemma 4.2 we can bound this integral in absolute value by some constant multiplied by \[\epsilon^{-|\boldsymbol{\chi}|}\int_{\operatorname{supp}\Phi(x,y,.)}\big{(} \epsilon^{-|\boldsymbol{\sigma}|}+\delta^{-|\boldsymbol{\sigma}|}M_{\delta}( x,y,\mathbf{\hat{x}})\big{)}\lambda(x,y,\mathbf{\hat{x}})^{-|\boldsymbol{ \alpha}|-|\boldsymbol{\beta}|}f_{\infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y, \mathbf{\hat{x}};R)\,d\mathbf{\hat{x}}.\] Selective use of (4.14) of Lemma 4.2 allows us to bound this quantity by some constant multiplied by \[\epsilon^{-|\boldsymbol{\chi}|}\int_{\operatorname{supp}\Phi(x,y,.)}\big{(} \epsilon^{-|\boldsymbol{\sigma}|}\lambda(x,y,\mathbf{\hat{x}})^{-|\boldsymbol {\alpha}|-|\boldsymbol{\beta}|}+\delta^{-|\boldsymbol{\alpha}|-|\boldsymbol{ \beta}|-|\boldsymbol{\sigma}|}M_{\delta}(x,y,\mathbf{\hat{x}})\big{)}f_{ \infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)\,d\mathbf{\hat{ x}}.\] We can use (4.15) of Lemma 4.2, using that \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|\leq 2\) by assumption, and (4.8) of Proposition 4.1 to bound this quantity by some constant multiplied by \[\epsilon^{-|\boldsymbol{\alpha}|-|\boldsymbol{\beta}|-|\boldsymbol{\sigma}|-| \boldsymbol{\chi}|}\,\|\rho\|_{L^{1}(B(x,2R))}^{1/2}\,\|\rho\|_{L^{1}(B(y,2R))} ^{1/2}\] where it was used that \(\delta\leq 1\leq\epsilon^{-1}\) and \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|+|\boldsymbol{\sigma}|\leq 3\). After a return to \(u,v\)-variables, this proves the bound for integrals (4.36) in the case where \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|\leq 2\) and \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|+|\boldsymbol{\sigma}|\leq 3\). It remains to bound (4.40) in the case where \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|=3\), \(|\boldsymbol{\sigma}|=|\boldsymbol{\chi}|=0\) and \(|l|=|m|=1\). This follows directly from Lemma 4.4. **Lemma 4.6**.: _Take any \(\eta,\mu,l,m\in\mathbb{N}_{0}^{3}\) as in Lemma 4.5(ii). 
Then for all \(R>0\) we have \(C\) such that_ \[|\partial_{u}^{\eta}\partial_{v}^{\mu}\tilde{\gamma}_{l,m}(u,v)|\leq C\min\{1,|u+v|,|u-v|\}^{-|\eta|-|\mu|}\,\|\rho\|_{L^{1}(B(u+v,R))}^{1/2}\,\|\rho\|_{L^{1}(B(u-v,R))}^{1/2} \tag{4.41}\] _for all \(u,v\in\mathbb{R}^{3}\) obeying \(0<|v|\leq(4N)^{-1}\min\{1,|u+v|,|u-v|\}\)._ Proof.: Firstly, by Lemma 3.1 there exists a finite collection of biscaled cutoffs, \(\Phi^{(j)}\), \(j=1,\ldots,J\), such that \[\tilde{\gamma}_{l,m}=\sum_{j=1}^{J}\tilde{\gamma}_{l,m}\big{(}\,\cdot\,;\Phi^{(j)}_{\delta,\epsilon}\big{)} \tag{4.42}\] holds for all choices of \(0<\delta\leq(4N)^{-1}\epsilon\). Let \(C_{0}\) be the constant from Lemma 4.5 such that (4.37) holds for each \(\Phi^{(j)}\), \(j=1,\ldots,J\). Fix any \(v_{0}\neq 0\) and \(u_{0}\) such that \(|v_{0}|\leq(4N)^{-1}\min\{1,|u_{0}+v_{0}|,|u_{0}-v_{0}|\}\) and set \(\delta_{0}=|v_{0}|\) and \(\epsilon_{0}=\min\{1,|u_{0}+v_{0}|,|u_{0}-v_{0}|\}\). Then by (4.42), \[|\partial_{u}^{\eta}\partial_{v}^{\mu}\tilde{\gamma}_{l,m}(u_{0},v_{0})| \leq\sum_{j=1}^{J}\big{|}\partial_{u}^{\eta}\partial_{v}^{\mu}\tilde{\gamma}_{l,m}\big{(}u_{0},v_{0};\Phi_{\delta_{0},\epsilon_{0}}^{(j)}\big{)}\big{|}\] \[\leq JC_{0}\min\{1,|u_{0}+v_{0}|,|u_{0}-v_{0}|\}^{-|\eta|-|\mu|}\left\|\rho\right\|_{L^{1}(B(u_{0}+v_{0},R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(u_{0}-v_{0},R))}^{1/2}.\] Since \(C_{0}\) does not depend on the choice of \(\delta_{0}\) and \(\epsilon_{0}\), the constant \(JC_{0}\) does not depend on the choice of \(u_{0}\) and \(v_{0}\). **Proposition 4.7**.: _For all \(\alpha,\beta\in\mathbb{N}_{0}^{3}\) with \(|\alpha|+|\beta|=5\) and all \(R>0\) there exists \(C\), depending on \(R\), such that_ \[|\partial_{u}^{\alpha}\partial_{v}^{\beta}\tilde{\gamma}(u,v)|\leq C\min\{1,|u+v|,|u-v|\}^{-4}\left\|\rho\right\|_{L^{1}(B(u+v,R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(u-v,R))}^{1/2} \tag{4.43}\] _for all \(u,v\in\mathbb{R}^{3}\) obeying \(0<|v|\leq(4N)^{-1}\min\{1,|u+v|,|u-v|\}\)._ Proof.: First, we consider the case where \(|\beta|\leq 3\). We use (4.32) and (4.33) for when \(|\beta|\leq 2\) and \(|\beta|=3\) respectively. The bound then follows from Lemma 4.6 with \(|\eta|+|\mu|=4\) and \(|\mu|\leq 2\). Now, consider \(4\leq|\beta|\leq 5\). Take \(l,m\in\mathbb{N}_{0}^{3}\) with \(|l|=|m|=1\) such that \(l+m\leq\beta\). Then by (4.34), \[\partial_{u}^{\alpha}\partial_{v}^{\beta}\tilde{\gamma}(u,v)=\partial_{u}^{\alpha+l+m}\partial_{v}^{\beta-l-m}\tilde{\gamma}(u,v)-2\big{(}\partial_{u}^{\alpha}\partial_{v}^{\beta-l-m}\tilde{\gamma}_{l,m}(u,v)+\partial_{u}^{\alpha}\partial_{v}^{\beta-l-m}\tilde{\gamma}_{m,l}(u,v)\big{)}.\] The first term on the right-hand side has \(|\beta|-|l|-|m|\leq 3\), hence the required bound follows from the previous step. The remaining terms can be bounded using Lemma 4.6. The proof of our main theorem is an immediate consequence of this proposition. Proof of Theorem 1.1.: The proof follows from Proposition 4.7 with \(u=(x+y)/2\) and \(v=(x-y)/2\), along with the definition (4.4).

### Proof of Lemma 4.4

To prove Lemma 4.4, we will examine the cluster derivatives of \(\psi\) present in (4.35) more closely. In particular, Theorem 1.3 allows us to write such derivatives in terms of derivatives of \(F_{c}\) and a function, \(G_{\mathbf{P}}^{\boldsymbol{\alpha}}\), of higher regularity near certain singularities. 
Sign cancellation allows uniform boundedness of the integral (4.35) as \(x\) and \(y\) approach each other, and is more easily handled via derivatives of \(F_{c}\) rather than those of \(\psi\) itself. Indeed, due to the simple formula defining \(F_{c}\), it is possible to characterise all its cluster derivatives explicitly. A series of steps, mostly involving integration by parts, will complete the proof. To begin, we use definition (2.2) to write \[\nabla_{x}F_{c}(x,\mathbf{\hat{x}})=-\frac{Z}{2}\nabla_{x}|x|+\frac{1}{4}\sum_{2\leq j\leq N}\nabla_{x}|x-x_{j}| \tag{4.44}\] with the formula also holding when \(x\) is replaced by \(y\). Let \(P\) and \(S\) be arbitrary clusters with \(1\in P,S\) and \(P^{*}\cap S^{*}=\emptyset\). Let \(\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2})\in\mathbb{N}_{0}^{6}\), \(\boldsymbol{\beta}=(\beta_{1},\beta_{2})\in\mathbb{N}_{0}^{6}\) and let \(l,m\in\mathbb{N}_{0}^{3}\) obey \(|l|=|m|=1\). Then, \[D^{\boldsymbol{\alpha}}_{\{P,S^{*}\}}\partial_{x}^{l}|x|=\begin{cases}\partial_{x}^{\alpha_{1}+l}|x|&\text{ if }\alpha_{2}=0\\ 0&\text{ if }|\alpha_{2}|\geq 1,\end{cases} \tag{4.45}\] \[D^{\boldsymbol{\beta}}_{\{P^{*},S\}}\partial_{y}^{m}|y|=\begin{cases}\partial_{y}^{\beta_{2}+m}|y|&\text{ if }\beta_{1}=0\\ 0&\text{ if }|\beta_{1}|\geq 1.\end{cases} \tag{4.46}\] In the following, the cluster derivatives in (4.47) are understood to act with respect to the ordered variables \((x,x_{2},\ldots,x_{N})\) and the cluster derivatives in (4.48) are understood to act with respect to the ordered variables \((y,x_{2},\ldots,x_{N})\). For later convenience, on the right-hand side of both formulae, all derivatives in the \(x\)- or \(y\)-variable are rewritten to act on \(x_{j}\). Now assume \(|\boldsymbol{\alpha}|,|\boldsymbol{\beta}|\geq 1\), then \[D^{\boldsymbol{\alpha}}_{\{P,S^{*}\}}\partial_{x}^{l}|x-x_{j}|=\begin{cases}(-1)^{|\alpha_{1}|+1}\partial_{x_{j}}^{\alpha_{1}+\alpha_{2}+l}|x-x_{j}|&\text{ if }|\alpha_{2}|\geq 1\text{ and }j\in S^{*}\\ 0&\text{ if }|\alpha_{2}|\geq 1\text{ and }j\in S^{c}\\ (-1)^{|\alpha_{1}|+1}\partial_{x_{j}}^{\alpha_{1}+\alpha_{2}+l}|x-x_{j}|&\text{ if }\alpha_{2}=0\text{ and }j\in P^{c}\\ 0&\text{ if }\alpha_{2}=0\text{ and }j\in P^{*}\end{cases} \tag{4.47}\] \[D^{\boldsymbol{\beta}}_{\{P^{*},S\}}\partial_{y}^{m}|y-x_{j}|=\begin{cases}(-1)^{|\beta_{2}|+1}\partial_{x_{j}}^{\beta_{1}+\beta_{2}+m}|y-x_{j}|&\text{ if }|\beta_{1}|\geq 1\text{ and }j\in P^{*}\\ 0&\text{ if }|\beta_{1}|\geq 1\text{ and }j\in P^{c}\\ (-1)^{|\beta_{2}|+1}\partial_{x_{j}}^{\beta_{1}+\beta_{2}+m}|y-x_{j}|&\text{ if }\beta_{1}=0\text{ and }j\in S^{c}\\ 0&\text{ if }\beta_{1}=0\text{ and }j\in S^{*}.\end{cases} \tag{4.48}\] In particular, since \(P^{*}\cap S^{*}=\emptyset\), we have \(D^{\boldsymbol{\alpha}}_{\{P,S^{*}\}}\partial_{x}^{l}|x-x_{j}|\equiv 0\) unless \(j\in P^{c}\) and \(D^{\boldsymbol{\beta}}_{\{P^{*},S\}}\partial_{y}^{m}|y-x_{j}|\equiv 0\) unless \(j\in S^{c}\). We will often use the following elementary fact. For each \(z_{0}\in\mathbb{R}^{3}\) and \(\eta\in\mathbb{N}_{0}^{3}\) there exists \(C\), dependent on \(\eta\) but independent of \(z_{0}\), such that \[\big{|}\partial_{z}^{\eta}|z_{0}-z|\big{|}\leq C|z_{0}-z|^{1-|\eta|}\quad\text{for all }z\neq z_{0}. 
\tag{4.49}\] Therefore, by (4.12) we have for each \(|\eta|\geq 1\) some \(C\) and \(C^{\prime}\) such that \[\big{|}\partial_{x_{j}}^{\eta}|x-x_{j}|\big{|}+\big{|}\partial_{x_{k}}^{\eta}|y-x_{k}|\big{|}\leq C\big{(}|x-x_{j}|^{1-|\eta|}+|y-x_{k}|^{1-|\eta|}\big{)}\leq C^{\prime}\lambda(x,y,\hat{\mathbf{x}})^{1-|\eta|} \tag{4.50}\] for all \(2\leq j,k\leq N\) if \(|\eta|=1\), and all \(j\in P^{c}\), \(k\in S^{c}\) if \(|\eta|\geq 2\). Here, \(\lambda(x,y,\hat{\mathbf{x}})\) is defined in (4.9) using the clusters \(P\) and \(S\). **Lemma 4.8**.: _Let \(P=P(\Phi)\) and \(S=S(\Phi)\) obey \(P^{*}\cap S^{*}=\emptyset\), and take any \(R>0\). Consider \(|\boldsymbol{\alpha}|+|\boldsymbol{\beta}|=3\) and \(|l|=|m|=1\). For \(|\boldsymbol{\alpha}|,|\boldsymbol{\beta}|\geq 1\), the integral_ \[\int_{\mathbb{R}^{3N-3}}\big{(}D^{\boldsymbol{\alpha}}_{\{P,S^{*}\}}\partial_{x}^{l}F_{c}(x,\hat{\mathbf{x}})\big{)}\psi(x,\hat{\mathbf{x}})\big{(}D^{\boldsymbol{\beta}}_{\{P^{*},S\}}\partial_{y}^{m}F_{c}(y,\hat{\mathbf{x}})\big{)}\overline{\psi(y,\hat{\mathbf{x}})}\Phi(x,y,\hat{\mathbf{x}})\,d\hat{\mathbf{x}}, \tag{4.51}\] _can be bounded as in (4.35). For \(|\boldsymbol{\alpha}|=3\), the integrals_ \[\int_{\mathbb{R}^{3N-3}}\big{(}D^{\boldsymbol{\alpha}}_{\{P,S^{*}\}}\partial^{l}_{x}F_{c}(x,\mathbf{\hat{x}})\big{)}\psi(x,\mathbf{\hat{x}})\partial^{m}_{y}F(y,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}, \tag{4.52}\] \[\int_{\mathbb{R}^{3N-3}}\big{(}D^{\boldsymbol{\alpha}}_{\{P,S^{*}\}}\partial^{l}_{x}F_{c}(x,\mathbf{\hat{x}})\big{)}\psi(x,\mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\overline{\partial^{m}_{y}\phi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}, \tag{4.53}\] _and, for \(|\boldsymbol{\beta}|=3\), the integrals_ \[\int_{\mathbb{R}^{3N-3}}\partial^{l}_{x}F(x,\mathbf{\hat{x}})\psi(x,\mathbf{\hat{x}})\big{(}D^{\boldsymbol{\beta}}_{\{P^{*},S\}}\partial^{m}_{y}F_{c}(y,\mathbf{\hat{x}})\big{)}\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}, \tag{4.54}\] \[\int_{\mathbb{R}^{3N-3}}e^{F}(x,\mathbf{\hat{x}})\partial^{l}_{x}\phi(x,\mathbf{\hat{x}})\big{(}D^{\boldsymbol{\beta}}_{\{P^{*},S\}}\partial^{m}_{y}F_{c}(y,\mathbf{\hat{x}})\big{)}\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}} \tag{4.55}\] _can each be bounded as in (4.35)._ Proof.: We first prove the bound for (4.51). 
Use of (4.44) and (4.45)-(4.48) to expand the integral will produce a linear combination of the following terms, where \(j\in P^{c}\) and \(k\in S^{c}\), \[\partial^{\alpha_{1}+l}_{x}|x|\partial^{\beta_{2}+m}_{y}|y|\int_{ \mathbb{R}^{3N-3}}\psi(x,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{\hat{x}})} \Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\quad\text{if $\alpha_{2}=\beta_{1}=0$},\] \[\partial^{\alpha_{1}+l}_{x}|x|\int_{\mathbb{R}^{3N-3}}\partial^{ \beta_{1}+\beta_{2}+m}_{x_{k}}|y-x_{k}|\psi(x,\mathbf{\hat{x}})\overline{\psi( y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\quad\text{if $\alpha_{2}=0$},\] \[\partial^{\beta_{2}+m}_{y}|y|\int_{\mathbb{R}^{3N-3}}\partial^{ \alpha_{1}+\alpha_{2}+l}_{x_{j}}|x-x_{j}|\psi(x,\mathbf{\hat{x}})\overline{ \psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\quad \text{if $\beta_{1}=0$},\] \[\int_{\mathbb{R}^{3N-3}}\partial^{\alpha_{1}+\alpha_{2}+l}_{x_{j} }|x-x_{j}|\partial^{\beta_{1}+\beta_{2}+m}_{x_{k}}|y-x_{k}|\psi(x,\mathbf{\hat {x}})\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{ \hat{x}}.\] Derivatives of \(|x|\) and \(|y|\) are bounded using (4.49) and \(|x|,|y|\geq\epsilon\). Now we bound each integral in absolute value. The first, second and third integrals are readily bounded by (4.50) and Lemma 4.2. Finally, for the fourth integral we use (4.59) of Lemma 4.9. Now suppose \(|\boldsymbol{\alpha}|=3\). First, we prove the bound for the integral (4.52). Recall \(F=F_{c}-F_{s}\). Using this, along with (4.44) and (4.45)-(4.48) we can expand the integral as linear combination of the following terms, where \(j\in P^{c}\) and \(k\in\{2,\dots,N\}\), \[\partial_{x}^{\alpha_{1}+l}|x|\partial_{y}^{m}|y|\int_{\mathbb{R}^{3 N-3}}\psi(x,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{ \hat{x}})\,d\mathbf{\hat{x}}\quad\text{if }\alpha_{2}=0,\] \[\partial_{x}^{\alpha_{1}+l}|x|\int_{\mathbb{R}^{3N-3}}\partial_{x _{k}}^{m}|y-x_{k}|\psi(x,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{\hat{x}})} \Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\quad\text{if }\alpha_{2}=0,\] \[\partial_{x}^{\alpha_{1}+l}|x|\int_{\mathbb{R}^{3N-3}}\partial_{x _{k}}^{m}F_{s}(y,\mathbf{\hat{x}})\psi(x,\mathbf{\hat{x}})\overline{\psi(y, \mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\quad\text{if }\alpha_{2}=0,\] \[\partial_{y}^{m}|y|\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{ \alpha_{1}+\alpha_{2}+l}|x-x_{j}|\psi(x,\mathbf{\hat{x}})\overline{\psi(y, \mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}},\] \[\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{\alpha_{1}+\alpha_{2}+l }|x-x_{j}|\partial_{x_{k}}^{m}|y-x_{k}|\psi(x,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\] \[\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{\alpha_{1}+\alpha_{2}+l }|x-x_{j}|\partial_{x_{k}}^{m}F_{s}(y,\mathbf{\hat{x}})\psi(x,\mathbf{\hat{x}}) \overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\] Derivatives of \(|x|\) and \(|y|\) are bounded using (4.49) and \(|x|,|y|\geq\epsilon\). The first three integrals above are then readily bounded using (4.6) of Proposition 4.1 and that \(\nabla F_{s}\in L^{\infty}(\mathbb{R}^{3N})\). For the fourth and sixth integral we use (4.57) of Lemma 4.9 with \(\chi\equiv 1\) and \(\chi(x,y,\mathbf{\hat{x}})=\partial_{x_{k}}^{m}F_{s}(y,\mathbf{\hat{x}})\) respectively. Finally, for the fifth integral we use the same lemma, specifically (4.59). This proves (4.52). 
The proof of (4.54) is similar in the case where \(|\boldsymbol{\beta}|=3\). Next, using (4.44), (4.45) and (4.47) we can rewrite the integral (4.53) as a linear combination of the following two terms, where \(k\in P^{c}\), \[\partial_{x}^{\alpha_{1}+l}|x|\int_{\mathbb{R}^{3N-3}}\psi(x, \mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\overline{\partial_{y}^{m}\phi(y, \mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\quad\text{if }\alpha_{2}=0,\] \[\int_{\mathbb{R}^{3N-3}}\partial_{x_{k}}^{\alpha_{1}+\alpha_{2}+l }|x-x_{k}|\psi(x,\mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\overline{\partial _{y}^{m}\phi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}.\] Derivatives of \(|x|\) are bounded using (4.49) and \(|x|\geq\epsilon\). The first integral is then bounded by (2.6), (2.9), (2.10) and finally (4.6) of Proposition 4.1. The second integral is bounded immediately from (4.69) of Lemma 4.10. This proves (4.53). The proof of (4.55) is similar in the case where \(|\boldsymbol{\beta}|=3\) Proof of Lemma 4.4.: To prove the required inequality, first consider the case where \(|\mathbf{\alpha}|,|\mathbf{\beta}|\geq 1\). We can then use both (4.27) and (4.28) to write \[\int_{\mathbb{R}^{3N-3}}\big{(}D^{\mathbf{\alpha}}_{\{P,S^{*}\}}\partial ^{l}_{x}\psi(x,\mathbf{\hat{x}})\big{)}\overline{\big{(}D^{\mathbf{\beta}}_{\{P^{*},S\}}\partial^{m}_{y}\psi(y,\mathbf{\hat{x}})\big{)}}\Phi(x,y,\mathbf{\hat{x}} )\,d\mathbf{\hat{x}}\] \[\quad=\int_{\mathbb{R}^{3N-3}}\big{(}G^{\mathbf{\alpha},l}_{\{P,S^{*} \}}(x,\mathbf{\hat{x}})\big{)}\overline{\big{(}D^{\mathbf{\beta}}_{\{P^{*},S\}} \partial^{m}_{y}\psi(y,\mathbf{\hat{x}})\big{)}}\Phi(x,y,\mathbf{\hat{x}})\, d\mathbf{\hat{x}}\] \[\quad\quad+\int_{\mathbb{R}^{3N-3}}\big{(}D^{\mathbf{\alpha}}_{\{P,S^ {*}\}}\partial^{l}_{x}F_{c}(x,\mathbf{\hat{x}})\big{)}\psi(x,\mathbf{\hat{x}} )\overline{\big{(}G^{\mathbf{\beta},m}_{\{P^{*},S\}}(y,\mathbf{\hat{x}})\big{)}} \Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\] \[\quad\quad+\int_{\mathbb{R}^{3N-3}}\big{(}D^{\mathbf{\alpha}}_{\{P,S^ {*}\}}\partial^{l}_{x}F_{c}(x,\mathbf{\hat{x}})\big{)}\psi(x,\mathbf{\hat{x}} )\big{(}D^{\mathbf{\beta}}_{\{P^{*},S\}}\partial^{m}_{y}F_{c}(y,\mathbf{\hat{x}}) \big{)}\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d \mathbf{\hat{x}}\] We bound each of these integrals. We start with the final integral on the right-hand side which is just (4.51) of Lemma 4.8. Next, by using (4.25)-(4.26) and (4.29)-(4.31), the first and second integrals on the right-hand side can be bounded in absolute value by some constant multiplied by \[\int_{\mathbb{R}^{3N-3}}\lambda(x,y,\mathbf{\hat{x}})^{-5/2}f_{ \infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)\Phi(x,y,\mathbf{ \hat{x}})\,d\mathbf{\hat{x}}\\ \leq C\epsilon^{-5/2}\left\|\rho\right\|^{1/2}_{L^{1}(B(x,2R))} \left\|\rho\right\|^{1/2}_{L^{1}(B(y,2R))} \tag{4.56}\] where the inequality above holds for some \(C\) by (4.15) of Lemma 4.2. This proves the required bound when \(|\mathbf{\alpha}|,|\mathbf{\beta}|\geq 1\). Now we consider the case where \(|\mathbf{\alpha}|=3\), and hence \(\mathbf{\beta}=0\). 
We use that \(\nabla\psi=\psi\nabla F+e^{F}\nabla\phi\) and (4.27) to give \[\int_{\mathbb{R}^{3N-3}}\big{(}D^{\mathbf{\alpha}}_{\{P,S^{*}\}} \partial^{l}_{x}\psi(x,\mathbf{\hat{x}})\big{)}\overline{\partial^{m}_{y}\psi( y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\] \[\quad=\int_{\mathbb{R}^{3N-3}}\big{(}G^{\mathbf{\alpha},l}_{\{P,S^{*} \}}(y,\mathbf{\hat{x}})\big{)}\overline{\partial^{m}_{y}\psi(y,\mathbf{\hat{x }})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\] \[\quad\quad+\int_{\mathbb{R}^{3N-3}}\big{(}D^{\mathbf{\alpha}}_{\{P,S^ {*}\}}\partial^{l}_{x}F_{c}(x,\mathbf{\hat{x}})\big{)}\psi(x,\mathbf{\hat{x}} )\partial^{m}_{y}F(y,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\] \[\quad\quad+\int_{\mathbb{R}^{3N-3}}\big{(}D^{\mathbf{\alpha}}_{\{P,S^ {*}\}}\partial^{l}_{x}F_{c}(x,\mathbf{\hat{x}})\big{)}\psi(x,\mathbf{\hat{x}} )e^{F}(y,\mathbf{\hat{x}})\overline{\partial^{m}_{y}\phi(y,\mathbf{\hat{x}})} \Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}.\] As before, the first integral on the right-hand side can be bounded in absolute value by some constant multiplying (4.56). The second and third integrals on the right-hand side are just (4.52) and (4.53) respectively. The proof of the \(|\mathbf{\beta}|=3\) case is similar to the \(|\mathbf{\alpha}|=3\) case. We now prove two lemmas which were used to prove Lemma 4.8. **Lemma 4.9**.: _Let \(P=P(\Phi)\) and \(S=S(\Phi)\) obey \(P^{*}\cap S^{*}=\emptyset\), and take any \(R>0\)._ 1. _Let_ \(|\eta|=4\)_. Let_ \(\chi=\chi(x,y,\mathbf{\hat{x}})\in C^{\infty}(\mathbb{R}^{3N+3})\) _such that_ \(\chi,\nabla\chi\in L^{\infty}(\mathbb{R}^{3N+3})\) _(for example_ \(\chi\equiv 1\) _may be chosen). Then, for all_ \(j\in P^{c}\)_, the integral_ (4.57) \[\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{\eta}|x-x_{j}|\chi(x,y,\mathbf{\hat {x}})\psi(x,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y, \mathbf{\hat{x}})\,d\mathbf{\hat{x}},\] _and, for all_ \(k\in S^{c}\)_, the integral_ (4.58) \[\int_{\mathbb{R}^{3N-3}}\partial_{x_{k}}^{\eta}|y-x_{k}|\chi(x,y,\mathbf{\hat {x}})\psi(x,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y, \mathbf{\hat{x}})\,d\mathbf{\hat{x}}\] _can be bounded as in (_4.35_)._ 2. _Let_ \(|\alpha|+|\beta|=3\) _and_ \(|l|=|m|=1\)_. Then for any pair_ \(2\leq j,k\leq N\) _such that_ \(j\in P^{c}\) _if_ \(|\alpha|\geq 1\) _and_ \(k\in S^{c}\) _if_ \(|\beta|\geq 1\) _we have that the integral_ (4.59) \[I_{j,k}=\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{\alpha+l}|x-x_{j}|\partial_ {x_{k}}^{\beta+m}|y-x_{k}|\psi(x,\mathbf{\hat{x}})\overline{\psi(y,\mathbf{ \hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}.\] _can be bounded as in (_4.35_)._ Proof of i).: Take any \(j\in P^{c}\). We bound (4.57). By a product rule for weak derivatives we know that \[\nabla_{x_{j}}\big{(}\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\big{)}= \big{(}\nabla_{x_{j}}\psi(x,\mathbf{\hat{x}})\big{)}\psi(y,\mathbf{\hat{x}})+ \psi(x,\mathbf{\hat{x}})\big{(}\nabla_{x_{j}}\psi(y,\mathbf{\hat{x}})\big{)}.\] The functions \(\chi\) and \(\Phi\) are both smooth so are readily included in such a product rule. Take any multiindex \(\tau\leq\eta\) with \(|\tau|=1\). 
Using integration by parts we get that (4.57) equals \[-\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{\eta-\tau}|x-x_{j}|\partial_{x_{j}} ^{\tau}\big{(}\chi(x,y,\mathbf{\hat{x}})\psi(x,\mathbf{\hat{x}})\overline{ \psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\big{)}\,d\mathbf{\hat{x}} \tag{4.60}\] which, using (4.50), can be bounded in absolute value by some constant, depending on \(\chi\), multiplying the expression (4.16) which is bounded in Corollary 4.3. The proof of (4.58) is similar. Proof of ii).: We first prove the case where \(j\neq k\), where integration by parts is particularly simple. Suppose \(|\alpha|\geq 1\). Using a strategy similar to the proof of i), we use integration by parts to obtain \[I_{j,k}=-\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{\alpha}|x-x_{j}|\partial_{ x_{k}}^{\beta+m}|y-x_{k}|\partial_{x_{j}}^{l}\big{(}\psi(x,\mathbf{\hat{x}}) \psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{)}\,d\mathbf{\hat{x}} \tag{4.61}\] which can then be bounded using Corollary 4.3. The case of \(|\beta|\geq 1\) is similar except we apply integration by parts on the \(\partial_{x_{k}}^{m}\)-derivative. This completes the proof where \(j\neq k\). For the remainder of the proof we consider the case where \(k=j\). Suppose, first, that \(k\in(S\cup P)^{c}\). To simplify calculations we write \(\eta=\alpha+l\) and \(\mu=\beta+m\). Notice that \(|\eta|,|\mu|\geq 1\). Therefore we can find some multiindex \(\mu_{1}\leq\mu\) with \(|\mu_{1}|=1\). Integration by parts then gives \[I_{k,k} =-\int_{\mathbb{R}^{3N-3}}\partial_{x_{k}}^{\eta+\mu_{1}}|x-x_{k}| \partial_{x_{k}}^{\mu-\mu_{1}}|y-x_{k}|\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{ \hat{x}})\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}} \tag{4.62}\] \[-\int_{\mathbb{R}^{3N-3}}\partial_{x_{k}}^{\eta}|x-x_{k}|\partial _{x_{k}}^{\mu-\mu_{1}}|y-x_{k}|\partial_{x_{k}}^{\mu_{1}}\big{(}\psi(x, \mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{)}\,d \mathbf{\hat{x}}.\] We leave untouched the second integral above. For the first, if \(|\mu|-|\mu_{1}|\geq 1\) we can remove another first-order derivative from \(|y-x_{k}|\) by the same procedure - using integration by parts to give two new terms as in (4.62). We retain the term where the derivative falls on \(\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\). Whereas on the term where the derivative falls on \(|x-x_{k}|\) we repeat the procedure, so long as there remains a non-trivial derivative on \(|y-x_{k}|\). Through this process, we obtain the following formula for \(I_{k,k}\). Let \(T=|\mu|\). Then write \(\mu=\sum_{i=1}^{T}\mu_{i}\) for some collection \(|\mu_{i}|=1\), where \(1\leq i\leq T\). 
Furthermore, define \[\mu_{<j}=\begin{cases}0&\text{if }j=1\\ \mu_{1}&\text{if }j=2\\ \mu_{1}+\cdots+\mu_{j-1}&\text{if }j\geq 3\end{cases}\qquad\mu_{>j}=\begin{cases}0&\text{if }j=T\\ \mu_{T}&\text{if }j=T-1\\ \mu_{j+1}+\cdots+\mu_{T}&\text{if }j\leq T-2.\end{cases}\] Then, \[I_{k,k} =\,(-1)^{T}\int_{\mathbb{R}^{3N-3}}\partial_{x_{k}}^{\eta+\mu}|x-x_{k}||y-x_{k}|\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}+\sum_{j=1}^{T}(-1)^{j}I_{k,k}^{(j)} \tag{4.63}\] where \[I_{k,k}^{(j)}=\int_{\mathbb{R}^{3N-3}}\partial_{x_{k}}^{\eta+\mu_{<j}}|x-x_{k}|\partial_{x_{k}}^{\mu_{>j}}|y-x_{k}|\partial_{x_{k}}^{\mu_{j}}\big{(}\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{)}\,d\mathbf{\hat{x}}\] Using (4.50), for each \(1\leq j\leq T-1\) we can bound \(|I_{k,k}^{(j)}|\) by some constant multiplying the expression (4.16), which is bounded in Corollary 4.3. It remains to bound \(I_{k,k}^{(T)}\), along with the first integral in the formula (4.63). Starting with the latter, we begin by expanding \(|y-x_{k}|=|x-x_{k}|+\big{(}|y-x_{k}|-|x-x_{k}|\big{)}\) and noticing that \[\big{|}|y-x_{k}|-|x-x_{k}|\big{|}\leq|x-y|\leq 2\delta. \tag{4.64}\] To bound the first integral in (4.63), it then suffices to bound the two integrals: \[\delta\int_{\mathbb{R}^{3N-3}}\big{|}\partial_{x_{k}}^{\eta+\mu}|x-x_{k}|\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{|}\,d\mathbf{\hat{x}}, \tag{4.65}\] \[\int_{\mathbb{R}^{3N-3}}\partial_{x_{k}}^{\eta+\mu}|x-x_{k}||x-x_{k}|\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}. \tag{4.66}\] We start with the first of these two integrals. Since \(k\in(P\cup S)^{c}\) we can use (4.50) to show that (4.65) is bounded by some constant multiplied by \[\delta\int_{\mathbb{R}^{3N-3}}\lambda(x,y,\mathbf{\hat{x}})^{-4}f_{\infty}(x,\mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\\ \leq C\delta(\epsilon^{-4}+\delta^{-1})\left\|\rho\right\|_{L^{1}(B(x,2R))}^{1/2}\left\|\rho\right\|_{L^{1}(B(y,2R))}^{1/2} \tag{4.67}\] where the bound holds by Lemma 4.2. We can then use the simplification \(\delta(\epsilon^{-4}+\delta^{-1})\leq C\epsilon^{-3}\) for some new constant \(C\). Before looking at (4.66), we next bound \(I_{k,k}^{(T)}\). By (4.64) we get \[\left|I_{k,k}^{(T)}\right|\leq 2\delta\int_{\mathbb{R}^{3N-3}}\left|\partial_{x_{k}}^{\eta+\mu_{<T}}|x-x_{k}|\,\partial_{x_{k}}^{\mu_{T}}\big{(}\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\big{)}\Phi(x,y,\mathbf{\hat{x}})\right|d\mathbf{\hat{x}}\] \[\quad+2\delta\int_{\mathbb{R}^{3N-3}}\left|\partial_{x_{k}}^{\eta+\mu_{<T}}|x-x_{k}|\,\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\big{(}\partial_{x_{k}}^{\mu_{T}}\Phi(x,y,\mathbf{\hat{x}})\big{)}\right|d\mathbf{\hat{x}}\] \[\quad+\int_{\mathbb{R}^{3N-3}}|x-x_{k}|\left|\partial_{x_{k}}^{\eta+\mu_{<T}}|x-x_{k}|\,\partial_{x_{k}}^{\mu_{T}}\big{(}\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{)}\right|d\mathbf{\hat{x}}\] We bound each integral on the right-hand side of this inequality in turn. All will require use of (4.50) to bound derivatives of \(|x-x_{k}|\). The first and third integrals can be bounded by some constants multiplying (4.67) and (4.16) respectively. 
The second term is bounded by some constant multiplied by \[\delta\int_{\mathbb{R}^{3N-3}}\lambda(x,y,\mathbf{\hat{x}})^{-3}f_{\infty}(x, \mathbf{\hat{x}};R)f_{\infty}(y,\mathbf{\hat{x}};R)|\nabla\Phi(x,y,\mathbf{ \hat{x}})|\,d\mathbf{\hat{x}} \tag{4.68}\] which, using (4.14), can be bounded by some constant multiplying (4.16). Finally, it remains to bound (4.66). We simplify the calculation by denoting \(\sigma=\eta+\mu\) and writing \(\sigma=\sigma_{1}+\cdots+\sigma_{5}\) for \(|\sigma_{i}|=1\), \(i=1,\ldots,5\). Define \[\sigma_{<j}=\begin{cases}0&\text{ if }j=1\\ \sigma_{1}&\text{ if }j=2\\ \sigma_{1}+\cdots+\sigma_{j-1}&\text{ if }3\leq j\leq T\end{cases}\qquad \sigma_{>j}=\begin{cases}0&\text{ if }j=5\\ \sigma_{5}&\text{ if }j=4\\ \sigma_{j+1}+\cdots+\sigma_{5}&\text{ if }1\leq j\leq 3.\end{cases}\] We now apply the same method used above, that is, we transfer successive first order derivatives via integration by parts. To begin, we apply integration by parts to transfer \(\partial_{x_{k}}^{\sigma_{1}}\) from \(\partial_{x_{k}}^{\sigma}|x-x_{k}|\). We leave as a remainder the term where the derivative falls on \(\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\). However, for the term where the derivative falls on \(|x-x_{k}|\) we continue the procedure to now remove \(\partial_{x_{k}}^{\sigma_{2}}\) from \(\partial_{x_{k}}^{\sigma-\sigma_{1}}|x-x_{k}|\) using integration by parts again. Since \(|\sigma|=5\) is odd, the result after this procedure has occured five times is that (4.66) is equal to minus the same integral plus remainder terms. This explains the \((1/2)\)-factor in the following formula, \[\int_{\mathbb{R}^{3N-3}}\partial_{x_{k}}^{\sigma}|x-x_{k}||x-x_{k}| \psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\,d \mathbf{\hat{x}}\\ =\frac{1}{2}\sum_{j=1}^{5}(-1)^{j}\int_{\mathbb{R}^{3N-3}} \partial_{x_{k}}^{\sigma_{>j}}|x-x_{k}|\partial_{x_{k}}^{\sigma_{<j}}|x-x_{k}| \partial_{x_{k}}^{\sigma_{j}}\big{(}\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{ \hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{)}\,d\mathbf{\hat{x}}.\] Using (4.49)-(4.50), each integral on the right-hand side is bounded by some constant multiplying (4.16) which is bounded according to Corollary 4.3. This completes the proof in the case where \(k\in(S\cup P)^{c}\). It remains to bound (4.59) for when \(k=j\) but where \(k\in P^{*}\) or \(k\in S^{*}\). First, suppose \(k\in P^{*}\) and hence, by the hypothesis, we need only consider \(\alpha=0\). This implies that \(k\in S^{c}\). To bound the integral, we first apply integration by parts to obtain \[I_{k,k}=-\int_{\mathbb{R}^{3N-3}}|x-x_{k}|\,\partial_{x_{k}}^{ \beta+l+m}|y-x_{k}|\,\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\\ -\int_{\mathbb{R}^{3N-3}}|x-x_{k}|\,\partial_{x_{k}}^{\beta+m}|y -x_{k}|\,\partial_{x_{k}}^{l}\big{(}\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{ \hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{)}\,d\mathbf{\hat{x}}\] By Lemma 3.2 we have \(|x-x_{k}|<\delta/2\) when \(\Phi(x,y,\mathbf{\hat{x}})\neq 0\). 
Using this, along with (4.50), we get \[|I_{k,k}|\leq\frac{\delta}{2}\int_{\mathbb{R}^{3N-3}}\big{|}\partial_{x_{k}}^{\beta+l+m}|y-x_{k}|\,\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{|}\,d\mathbf{\hat{x}}\\ +\frac{\delta}{2}\int_{\mathbb{R}^{3N-3}}\big{|}\partial_{x_{k}}^{\beta+m}|y-x_{k}|\,\partial_{x_{k}}^{l}\big{(}\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\big{)}\Phi(x,y,\mathbf{\hat{x}})\big{|}\,d\mathbf{\hat{x}}\\ +\frac{\delta}{2}\int_{\mathbb{R}^{3N-3}}\big{|}\partial_{x_{k}}^{\beta+m}|y-x_{k}|\,\psi(x,\mathbf{\hat{x}})\psi(y,\mathbf{\hat{x}})\big{(}\partial_{x_{k}}^{l}\Phi(x,y,\mathbf{\hat{x}})\big{)}\big{|}\,d\mathbf{\hat{x}}.\] On the right-hand side of the above inequality, the first and second terms can be bounded by some constant multiplying (4.67). The third term is bounded by some constant multiplying (4.68). The case of \(k=j\) with \(k\in S^{*}\) is similar. **Lemma 4.10**.: _Let \(P=P(\Phi)\) and \(S=S(\Phi)\) obey \(P^{*}\cap S^{*}=\emptyset\), and take any \(R>0\). Let \(\eta,l,m\in\mathbb{N}_{0}^{3}\) obey \(|\eta|=4\) and \(|l|=|m|=1\). Then, for each \(j\in P^{c}\), the integral_ \[\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{\eta}|x-x_{j}|\psi(x,\mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\overline{\partial_{y}^{m}\phi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}} \tag{4.69}\] _and, for each \(k\in S^{c}\), the integral_ \[\int_{\mathbb{R}^{3N-3}}e^{F}(x,\mathbf{\hat{x}})\partial_{x}^{l}\phi(x,\mathbf{\hat{x}})\partial_{x_{k}}^{\eta}|y-x_{k}|\overline{\psi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}} \tag{4.70}\] _can be bounded as in (4.35)._ Proof.: We prove the bound for (4.69). The case of (4.70) is similar. First, take some function \(\chi\in C_{c}^{\infty}(\mathbb{R})\), \(0\leq\chi\leq 1\), with \[\chi(t)=\begin{cases}1&\text{ if }|t|\leq 1\\ 0&\text{ if }|t|\geq 2.\end{cases}\] Furthermore, define \(\chi_{R}(t)=\chi(t/R)\) for all \(t\in\mathbb{R}\). It suffices to bound the following two integrals \[\int_{\mathbb{R}^{3N-3}}\big{(}1-\chi_{R}(|x-x_{j}|)\big{)}\partial_{x_{j}}^{\eta}|x-x_{j}|\psi(x,\mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\overline{\partial_{y}^{m}\phi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}} \tag{4.71}\] \[\int_{\mathbb{R}^{3N-3}}\chi_{R}(|x-x_{j}|)\partial_{x_{j}}^{\eta}|x-x_{j}|\psi(x,\mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\overline{\partial_{y}^{m}\phi(y,\mathbf{\hat{x}})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}. \tag{4.72}\] First, we see that when \(\chi_{R}(|x-x_{j}|)\neq 1\) we have \(|x-x_{j}|>R\). This, along with (2.6) and (4.49) gives some constant \(C\), depending on \(R\), such that (4.71) can be bounded in absolute value by \[\int_{\mathbb{R}^{3N-6}}\int_{\{x_{j}:|x-x_{j}|>R\}}\big{|}\partial_{x_{j}}^{\eta}|x-x_{j}|\psi(x,\mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\nabla\phi(y,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{|}\,dx_{j}d\mathbf{\hat{x}}_{1j}\\ \leq C\int_{\mathbb{R}^{3N-3}}\big{|}\psi(x,\mathbf{\hat{x}})\nabla\phi(y,\mathbf{\hat{x}})\big{|}\,d\mathbf{\hat{x}},\] which itself can be bounded using (2.9)-(2.10) by some constant multiplying (4.6) with, for example, the same \(R\). The relevant bound then follows from Proposition 4.1, and that \(\epsilon<1\). 
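Note also (an elementary observation, recorded here for later use) that since \(\chi\in C_{c}^{\infty}(\mathbb{R})\) is fixed, the rescaled cutoff satisfies, for any \(\tau\in\mathbb{N}_{0}^{3}\) with \(|\tau|=1\), \[\big{|}\partial_{x_{j}}^{\tau}\big[\chi_{R}(|x-x_{j}|)\big]\big{|}=R^{-1}\big{|}\chi^{\prime}(|x-x_{j}|/R)\,\partial_{x_{j}}^{\tau}|x-x_{j}|\big{|}\leq R^{-1}\left\|\chi^{\prime}\right\|_{L^{\infty}(\mathbb{R})},\] a constant depending only on \(R\) and the choice of \(\chi\). Such bounds are absorbed into the constants whenever a derivative falls on the factor \(\chi_{R}(|x-x_{j}|)\) in the integration by parts below.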
Recall the notation introduced in (1.11)-(1.14), namely we can write \[(y,x,\mathbf{\hat{x}}_{1j})=(y,x_{2},\ldots,x_{j-1},x,x_{j+1},\ldots,x_{N}).\] Using \(\partial_{y}^{m}\phi(y,\mathbf{\hat{x}})=\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})+\big{(}\partial_{y}^{m}\phi(y,\mathbf{\hat{x}})-\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})\big{)}\) it follows that in order to bound (4.72) it suffices to bound the following two integrals, \[\int_{\mathbb{R}^{3N-3}}\chi_{R}(|x-x_{j}|)\partial_{x_{j}}^{\eta}|x-x_{j}|\psi(x,\mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\overline{\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}, \tag{4.73}\] \[\int_{\mathbb{R}^{3N-3}}\chi_{R}(|x-x_{j}|)\partial_{x_{j}}^{\eta}|x-x_{j}|\psi(x,\mathbf{\hat{x}})e^{F}(y,\mathbf{\hat{x}})\overline{\big{(}\partial_{y}^{m}\phi(y,\mathbf{\hat{x}})-\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})\big{)}}\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}. \tag{4.74}\] Notice that when \(\chi_{R}(|x-x_{j}|)\neq 0\) we have \(|x-x_{j}|<2R\). Take any \(\theta\in(0,1)\). Then since \(\phi\in C^{1,\theta}(\mathbb{R}^{3N})\), we have local boundedness and local \(\theta\)-Hölder continuity of \(\nabla\phi\). Therefore, by (2.9) and (2.10) there exists a constant \(C\) such that, when \(|x-x_{j}|<2R\), \[|\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})|\leq\|\nabla\phi\|_{L^{\infty}(B((y,\mathbf{\hat{x}}),2R))}\leq Cf_{\infty}(y,\mathbf{\hat{x}};4R) \tag{4.75}\] \[\big{|}\partial_{y}^{m}\phi(y,\mathbf{\hat{x}})-\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})\big{|}\leq|x-x_{j}|^{\theta}[\nabla\phi]_{\theta,B((y,\mathbf{\hat{x}}),2R)}\leq C|x-x_{j}|^{\theta}f_{\infty}(y,\mathbf{\hat{x}};4R). \tag{4.76}\] The constant \(C\) depends on \(R\) and \(\theta\) but is independent of \(x,y\) and \(\mathbf{\hat{x}}\). Integration by parts in the variable \(x_{j}\) is used in (4.73) to remove a single derivative from \(\partial_{x_{j}}^{\eta}|x-x_{j}|\). Since \(\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})\) has no dependence on \(x_{j}\), this process avoids taking a second derivative of \(\phi\). Take any \(\tau\leq\eta\) with \(|\tau|=1\). Integral (4.73) can therefore be rewritten as \[-\int_{\mathbb{R}^{3N-3}}\partial_{x_{j}}^{\tau}\big{(}\chi_{R}(|x-x_{j}|)\big{)}\,\partial_{x_{j}}^{\eta-\tau}|x-x_{j}|\,\psi(x,\mathbf{\hat{x}})\,e^{F}(y,\mathbf{\hat{x}})\,\overline{\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})}\,\Phi(x,y,\mathbf{\hat{x}})\,d\mathbf{\hat{x}}\\ -\int_{\mathbb{R}^{3N-3}}\chi_{R}(|x-x_{j}|)\,\partial_{x_{j}}^{\eta-\tau}|x-x_{j}|\,\overline{\partial_{y}^{m}\phi(y,x,\mathbf{\hat{x}}_{1j})}\,\partial_{x_{j}}^{\tau}\big{(}e^{F}(y,\mathbf{\hat{x}})\psi(x,\mathbf{\hat{x}})\Phi(x,y,\mathbf{\hat{x}})\big{)}\,d\mathbf{\hat{x}}\] which, by (2.6), (4.50) and (4.75), can be bounded in absolute value by \[C\int_{\mathbb{R}^{3N-3}}\lambda(x,y,\mathbf{\hat{x}})^{-2}f_{\infty}(x,\mathbf{\hat{x}};4R)f_{\infty}(y,\mathbf{\hat{x}};4R)\big{(}\Phi(x,y,\mathbf{\hat{x}})+|\nabla\Phi(x,y,\mathbf{\hat{x}})|\big{)}\,d\mathbf{\hat{x}}\] for some \(C\) depending on \(R\) and our choice of \(\chi\). The relevant bound then follows by Corollary 4.3. 
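The role of the Hölder exponent \(\theta\) in the final step is, roughly, the following: the factor \(|x-x_{j}|^{\theta}\) gained in (4.76) partially compensates the singularity of \(\partial_{x_{j}}^{\eta}|x-x_{j}|\). Indeed, by (4.49) with \(|\eta|=4\), and the formula (4.12) (recall \(j\in P^{c}\)), \[\big{|}\partial_{x_{j}}^{\eta}|x-x_{j}|\big{|}\,|x-x_{j}|^{\theta}\leq C|x-x_{j}|^{-3+\theta}\leq C^{\prime}\lambda(x,y,\mathbf{\hat{x}})^{-3+\theta},\] so that Lemma 4.2 can be applied with \(b=3-\theta<3\), for which \(h_{b}\equiv 0\).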
Using (2.6), (4.49), (4.50) and (4.76), the integral (4.74) can then be bounded in absolute value by some constant multiplied by \[\int_{\mathbb{R}^{3N-3}}\lambda(x,y,\mathbf{\hat{x}})^{-3+\theta}f_{\infty}(x,\mathbf{\hat{x}};4R)f_{\infty}(y,\mathbf{\hat{x}};4R)\Phi(x,y,\mathbf{\hat{ x}})\,d\mathbf{\hat{x}}.\] The relevant bound then follows by Lemma 4.2 with \(b=-3+\theta\). ## Appendix A Second derivatives of \(\phi\) Fix some function \(\chi\in C_{c}^{\infty}(\mathbb{R})\), \(0\leq\chi\leq 1\), with \[\chi(t)=\begin{cases}1&\text{ if }|t|\leq 1\\ 0&\text{ if }|t|\geq 2.\end{cases}\] We also set (A.1) \[g(x,y)=(x\cdot y)\ln\big{(}|x|^{2}+|y|^{2}\big{)}\] for \(x,y\in\mathbb{R}^{3}\). For each \(\mathbf{x}\in\mathbb{R}^{3N}\) we can then define the function (A.2) \[G(\mathbf{x})=K_{0}\sum_{1\leq j<k\leq N}\chi(|x_{j}|)\chi(|x_{k}|)g(x_{j},x_{ k})\] where \(K_{0}=Z(2-\pi)(12\pi)^{-1}\). The function \(G\) was first introduced by S. Fournais, T. and M. Hoffmann-Ostenhof, and T. O. Sorensen in [9] to improve the regularity of \(\phi\) via a multiplicative factor. Define (A.3) \[\phi^{\prime}:=e^{-G}\phi=e^{-(G+F)}\psi=e^{-(G+F_{c}-F_{s})}\psi\] Then, in the same paper it was shown that \(\phi^{\prime}\in C^{1,1}(\mathbb{R}^{3N})\) and the factor \(e^{-(G+F)}\) is optimal in the sense that no other multiplicative factor, depending on \(N\) and \(Z\) but not on \(\psi\) or \(E\), can produce greater regularity than \(C^{1,1}(\mathbb{R}^{3N})\) for all eigenfunctions \(\psi\) obeying (1.3). The quantitative result is the following, which is an adapted version of [9, Theorem 1.5]. **Theorem A.1**.: _For all \(0<r<R<1\) we have a constant \(C(r,R)\), depending on \(r\) and \(R\) but independent of \(\psi\), such that_ (A.4) \[\left\|\phi^{\prime}\right\|_{W^{2,\infty}(B(\mathbf{x},r))}\leq C(r,R)\left\| \phi^{\prime}\right\|_{L^{\infty}(B(\mathbf{x},R))}.\] _for all \(\mathbf{x}\in\mathbb{R}^{3N}\)._ **Remark**.: We use the standard result that if \(u\in C^{1,1}(\overline{B})\) then \(u\in W^{2,\infty}(B)\) and \(\left\|D_{ij}u\right\|_{L^{\infty}(B)}\leq[u]_{1,B}\), see for example [13] Our objective will be to use this theorem to obtain pointwise bounds for the second derivatives of \(\phi\), which we do not expect to be bounded since \(\phi\) lacks the additional factor \(e^{-G}\) present in the definition of \(\phi^{\prime}\), A.3. We obtain the following as a corollary of the above theorem. **Corollary A.2**.: _For all \(0<r<R<1\) and \(b>0\) there exists \(C\), depending on \(r,R\) and \(b\) but independent of \(\psi\), such that for any non-empty cluster \(P\) and any \(\eta\in\mathbb{N}_{0}^{3}\) with \(|\eta|=1\),_ \[\left\|D_{P}^{\eta}\nabla\phi\right\|_{L^{\infty}(B(\mathbf{x},r\lambda_{P}( \mathbf{x})))}\leq C\lambda_{P}(\mathbf{x})^{-b}\left\|\phi\right\|_{L^{\infty }(B(\mathbf{x},R))}\] _for all \(\mathbf{x}\in\Sigma_{P}^{c}\)._ Proof.: To begin, we obtain bounds for derivatives of \(g(x,y)\) and \(G(\mathbf{x})\). It can be seen that \(g\in C^{1,\theta}(\mathbb{R}^{6})\) for all \(\theta\in[0,1)\). For the second derivatives, let \(\alpha,\beta\in\mathbb{N}_{0}^{3}\) obey \(|\alpha|+|\beta|=2\), then there exists \(C\) such that (A.5) \[|\partial_{x}^{\alpha}\partial_{y}^{\beta}g(x,y)|\leq\begin{cases}C+\big{|} \ln\big{(}|x|^{2}+|y|^{2}\big{)}\big{|}&\text{if }|\alpha|=|\beta|=1\\ C&\text{otherwise}\end{cases}\] for all \(x,y\in\mathbb{R}^{3}\). 
It follows that (A.6) \[G,\nabla G\in L^{\infty}(\mathbb{R}^{3N}),\] and given \(b>0\) there exist constants \(C\) and \(C^{\prime}\), only the latter depending on \(b\), such that for any \(\eta\in\mathbb{N}_{0}^{3}\) with \(|\eta|=1\), and \(k=1,\ldots,N\), we have \[|\partial_{x_{k}}^{\eta}\nabla G(\mathbf{y})| \leq C\Big{(}1+\sum_{\begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{N}\chi(|y_{k}|)\chi(|y_{l}|)\big{|}\ln\big{(}|y_{k}|^{2 }+|y_{l}|^{2}\big{)}\big{|}\Big{)}\] \[\leq C^{\prime}\big{(}1+|y_{k}|^{-b}\big{)}\] for all \(\mathbf{y}=(y_{1},\ldots,y_{N})\in\mathbb{R}^{3N}\) with \(y_{k}\neq 0\). Using the above inequality for every \(k\in P\) and the definition of cluster derivatives, (1.15), we can obtain some \(C\), depending on \(b\) such that (A.7) \[|D_{P}^{\eta}\nabla G(\mathbf{y})|\leq C\lambda_{P}(\mathbf{y})^{-b}\] for all \(\mathbf{y}\in\Sigma_{P}^{c}\). Here, we also used that \(\lambda_{P}\leq 1\), the definition of \(\lambda_{P}\) in (1.21), and the formula (1.20). Now, take some \(\mathbf{x}\in\Sigma_{P}^{c}\). As in Lemma 2.4, we use (1.22) to show that \((1-r)\lambda_{P}(\mathbf{x})\leq\lambda_{P}(\mathbf{y})\) for each \(\mathbf{y}\in B(\mathbf{x},r\lambda_{P}(\mathbf{x}))\). Therefore, for \(C\) as in (A.7), we have (A.8) \[\left\|D_{P}^{\eta}\nabla G\right\|_{L^{\infty}(B(\mathbf{x},r\lambda_{P}( \mathbf{x})))}\leq C(1-r)^{-b}\lambda_{P}(\mathbf{x})^{-b}\] for all \(\mathbf{x}\in\Sigma_{P}^{c}\). We now in a position to consider derivatives of \(\phi=e^{G}\phi^{\prime}\). Firstly, we have \(\nabla\phi=e^{G}\phi^{\prime}\nabla G+e^{G}\nabla\phi^{\prime}\). And therefore the following formula holds for each \(\eta\in\mathbb{N}_{0}^{3}\), \(|\eta|=1\), \[D_{P}^{\eta}\nabla\phi=\big{(}D_{P}^{\eta}\nabla G+D_{P}^{\eta}G\,\nabla G \big{)}\phi+e^{G}D_{P}^{\eta}\phi^{\prime}\,\nabla G+e^{G}D_{P}^{\eta}G\, \nabla\phi^{\prime}+e^{G}D_{P}^{\eta}\nabla\phi^{\prime}.\] Taking the norm and using (A.6) and (A.8) we can then obtain \(C\) such that (A.9) \[\left\|D_{P}^{\eta}\nabla\phi\right\|_{L^{\infty}(B(\mathbf{x},r\lambda_{P}( \mathbf{x})))}\leq C\big{(}\lambda_{P}(\mathbf{x})^{-b}\left\|\phi\right\|_{L ^{\infty}(B(\mathbf{x},r\lambda_{P}(\mathbf{x})))}+\left\|\phi^{\prime}\right\| _{W^{2,\infty}(B(\mathbf{x},r\lambda_{P}(\mathbf{x})))}\big{)}.\] for all \(\mathbf{x}\in\Sigma_{P}^{c}\). To the second term in the above bound we may then use Theorem A.1 with constant \(C(r,R)\), followed by use of (A.6) to obtain another constant \(C^{\prime}\), also dependent on \(r,R\) but independent of \(\mathbf{x}\), such that (A.10) \[\left\|\phi^{\prime}\right\|_{W^{2,\infty}(B(\mathbf{x},r\lambda_{P}(\mathbf{ x})))}\leq C(r,R)\left\|\phi^{\prime}\right\|_{L^{\infty}(B(\mathbf{x},R))} \leq C^{\prime}\left\|\phi\right\|_{L^{\infty}(B(\mathbf{x},R))}.\] Together, (A.9) and (A.10) complete the proof. **Acknowledgments.** The author would like to thank A. V. Sobolev for helpful discussions in all matters of the current work.
2307.00309
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey
Deep learning has successfully solved a wide range of tasks in 2D vision as a dominant AI technique. Recently, deep learning on 3D point clouds is becoming increasingly popular for addressing various tasks in this field. Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks. These attacks are imperceptible to the human eye but can easily fool deep neural networks in the testing and deployment stage. To encourage future research, this survey summarizes the current progress on adversarial attack and defense techniques on point cloud classification. This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods in recent years. Additionally, it provides an overview of defense strategies, organized into data-focused and model-focused methods. Finally, it presents several current challenges and potential future research directions in this domain.
Hanieh Naderi, Ivan V. Bajić
2023-07-01T11:46:36Z
http://arxiv.org/abs/2307.00309v2
# Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey ###### Abstract Deep learning has successfully solved a wide range of tasks in 2D vision as a dominant AI technique. Recently, deep learning on 3D point clouds is becoming increasingly popular for addressing various tasks in this field. Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks. These attacks are imperceptible to the human eye but can easily fool deep neural networks in the testing and deployment stage. To encourage future research, this survey summarizes the current progress on adversarial attack and defense techniques on point cloud classification. This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes the adversarial example generation methods in recent years. Besides, it classifies defense strategies as input transformation, data optimization, and deep model modification. Finally, it presents several challenging issues and future research directions in this domain. Digular Object Identifier ## I Introduction Deep learning (DL) [1] is a subset of machine learning (ML) and artificial intelligence (AI) that analyzes large amounts of data using a structure roughly similar to the human brain. Deep learning is characterized by the use of multiple layers of neural networks, which process and analyze large amounts of data. These neural networks are trained on large datasets, which allows them to learn patterns and make decisions on their own. DL has achieved impressive results in the fields of image recognition [2, 3], semantic analysis [4, 5], speech recognition [6, 7] and natural language processing [8] in recent years. Despite the tremendous success of DL, in 2013 Szegedy _et al._[9] found that deep models are vulnerable to adversarial examples in image classification tasks. Adversarial examples are inputs to a deep learning model that have been modified in a way that is intended to mislead the model. In the context of image classification, for example, an adversarial example might be a picture of a panda that has been slightly modified in a way that is imperceptible to the human eye but that causes a deep learning model to classify the image as a gibbon. Adversarial examples can be created in two or three dimensions. In the case of 2D adversarial examples, the input is an image, and the modification is applied to the pixels of the image. These modifications can be small perturbations added to the image pixels [10, 11, 12, 13, 14, 15, 16] or they can be more significant changes to the structure of the image [17, 18, 19, 20]. Thanks to the rapid development of 3D acquisition technologies, various types of 3D scanners, LiDARs, and RGB-D cameras have become increasingly affordable. 3D data is often used as an input for Deep Neural Networks (DNNs) in healthcare [21], self-driving cars [22], drones [23], robotics [24], and many other applications. These 3D data, compared to 2D counterparts, capture more information from the environment, thereby allowing more sophisticated analysis. There are different representations of 3D data, like voxels [25], meshes [26], and point clouds [27]. Since point clouds can be received directly from scanners, they can precisely capture shape details. Therefore, it is the preferred representation for many safety-critical applications. Due to this, in the case of 3D adversarial examples, the input is a point cloud, and the modification is applied to the points in the cloud. 
These examples can be created by adding, dropping, and shifting some points in the input point clouds, or by generating entirely new point clouds with predefined target labels using methods such as Generative Adversarial Networks (GANs) or other transformation techniques. It is typically easier to create adversarial examples in 2D space than in 3D space because the input space is smaller and there are fewer dimensions to perturb. In general, adversarial examples exploit the vulnerabilities or weaknesses in the model's prediction process, and they can be very difficult to detect because they are often indistinguishable from normal examples to the human eye. As a result, adversarial examples can pose a serious threat to the security and reliability of DL models. Therefore, it is important to have effective methods for defending against adversarial examples in order to ensure the robustness and reliability of DL models. Adversarial defense in the 2D image and the 3D point clouds both seek to protect DL models from being fooled by adversarial examples. However, there are some key differences between the approaches used to defend against adversarial images and adversarial point clouds. Some of the main differences include the following: * Input data: Adversarial images are 2D data representations, while adversarial point clouds are 3D data representations. This means that the approaches used to defend against adversarial images and point clouds may need to take into account the different dimensions and characteristics of the input data. * Adversarial perturbations: Adversarial images may be modified using small perturbations added to the image pixels, while adversarial point clouds may be modified using perturbations applied to individual points or groups of points in the point cloud. This means that the approaches used to defend against adversarial images and point clouds may need to be tailored to the specific types of adversarial perturbations that are being used. * Complexity: Adversarial point clouds may be more complex to defend against than adversarial images, as the perturbations applied to point clouds may be more difficult to identify and remove. This may require the use of more sophisticated defenses, such as methods that are able to detect and remove adversarial perturbations from the input point cloud. On the whole, adversarial point clouds can be challenging to identify and defend against, as they may not be easily recognizable in the 3D point cloud data. Adversarial point clouds may be more harmful and harder to defend against, because their changes may be less obvious to humans due to the lack of familiarity compared to images. As a result, it is important to conduct a thorough survey of adversarial attacks and defenses on 3D point clouds in order to identify the challenges and limitations of current approaches and to identify opportunities for future research in this area. There are a number of published surveys that review adversarial attacks and defenses in general, including in the context of computer vision, machine learning, and deep learning systems. These surveys provide an overview of the various types of attacks and defenses that have been proposed, as well as their strengths and limitations. However, there is a lack of surveys specifically focused on 3D point cloud attacks and defenses. Some published surveys do mention 3D attacks and defenses briefly [28], but there is a need for more comprehensive surveys that delve deeper into this topic. 
Table 1 refers to a summary or overview of published surveys of adversarial attacks and defenses. Some of these surveys focus on specific domains, such as computer vision [28, 29, 30], text [31], and images [32, 33, 34, 35] while others provide a more general overview of adversarial attacks and defenses in the field of artificial intelligence [36, 37]. Our key contributions are as follows: * A review of the different types of adversarial point clouds that have been proposed and the methods that have been used to generate them, and proposing a taxonomy of these methods. * A review of the various methods that have been proposed for defending against adversarial point clouds, including data optimization, input transformation methods, and deep model modification. * Categorization of the most important datasets and models used by researchers in this field. * An assessment of the challenges and limitations of current approaches to adversarial attacks and defenses on 3D point clouds, and identification of opportunities for future research in this area. An overview of the categorization of adversarial attack and defense approaches on 3D point clouds is shown in Fig. 1. The rest of this paper is organized as follows. Section II introduces a list of notations, terms and measurements used in the paper. We discuss adversarial attacks on deep models for 3D point cloud classification in Section III. Section IV provides a detailed review of the existing adversarial defense methods. In Section V, we summarize commonly used 3D datasets and present a taxonomy of datasets and victim models used in recent studies. We discuss current challenges and potential solutions related to adversarial attacks in Section VI. Finally, Section VII concludes the survey. ## II Background In this section, we provide the necessary background in terms of notation, terminology, and point cloud distance measures used in the field of 3D adversarial attacks. By establishing clear definitions, researchers can more accurately compare the effectiveness of different approaches and identify trends or patterns in the methods. A list of symbols used in the paper is given in Table 2, along with their explanations. These symbols are used to represent various quantities related to point cloud adversarial attacks. The table provides a brief description of each symbol to help readers understand and follow the discussions and equations in the paper. Next, we briefly introduce the terminology and distance measures used in the field of adversarial attacks and defenses on 3D point clouds. ### _Definition of terms_ It is crucial to define the technical terms used in the literature in order to provide a consistent discussion of the various methods and approaches. The definitions of these terms appear below. The rest of the paper follows the same definitions throughout. * **3D point cloud** is a set of points in 3D space, typically representing a 3D shape or scene. * **Adversarial point cloud** is a 3D point cloud that has been intentionally modified in order to mislead a DL model that analyzes 3D point clouds. We focus on geometric modifications, rather than attribute (e.g., color) modifications, since these are predominant in the literature on adversarial point clouds. * **Adversarial attack** is a technique that intentionally introduces perturbations or noise to an input point cloud in order to fool a DL model, causing it to make incorrect predictions or decisions. 
* **Black-box attacks** are a type of adversarial attack in which the attacker only has access to the model's input and output, and has no access to the structure of the DL model being attacked. * **White-box attacks** are a type of adversarial attack in which the attacker knows all the details about the DL model's architecture and parameters. * **Targeted attacks** involve manipulating the input point cloud in a way that causes the model to output a specific target label when presented with the modified input. * **Non-targeted attacks** involve manipulating the input point cloud in a way that causes the model to output a wrong label, regardless of what that label is. * **Point addition attacks** involve adding points to the point cloud to fool the DL model. \begin{table} \begin{tabular}{c c} \hline **Surveys** & **Year** \\ \hline Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [29] & 2018 \\ Adversarial Examples: Attacks and Defenses for Deep Learning [33] & 2018 \\ Review of artificial intelligence adversarial attack and defense technologies [36] & 2019 \\ Adversarial Examples in Modern Machine Learning: A Review [38] & 2019 \\ A survey on adversarial attacks and defences [34] & 2021 \\ Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey [28] & 2021 \\ A survey on the vulnerability of deep neural networks against adversarial attacks [35] & 2022 \\ Adversarial Attack and Defense Strategies of Speaker Recognition Systems: A Survey [39] & 2022 \\ Adversarial attack and defense technologies in natural language processing: A survey [31] & 2022 \\ Adversarial Attack and Defense: A Survey [40] & 2022 \\ A Review of Adversarial Attack and Defense for Classification [41] & 2022 \\ Adversarial Attacks and Defenses for Deployd AI Models [37] & 2022 \\ Physically Adversarial Attacks and Defenses in Computer Vision: A Survey [42] & 2022 \\ Physical Adversarial Attack meets Computer Vision: A Decade Survey [30] & 2022 \\ Adversarial Examples based on Object Detection tasks: A Survey [43] & 2022 \\ Adversarial Deep Learning: A Survey on Adversarial Attacks and Defense Mechanisms on Image Classification [32] & 2022 \\ \hline \end{tabular} \end{table} TABLE I: A review of published surveys of adversarial attacks and defenses. Figure 1: Categorization of adversarial attack and defense approaches on 3D point clouds. * **Point shift attacks** involve shifting points of the point cloud to fool the DL model, while the number of points remains the same as in the original point cloud. * **Point drop attacks** involve dropping points from the point cloud to fool the DL model. * **Optimization-based attacks** are a type of attack in which the creation of an adversarial point cloud is formulated and solved as an optimization problem. * **Gradient-based attacks** are a type of attack in which the gradients of the cost function corresponding to each input point are used to generate an adversarial point cloud with higher tendency toward being misclassified. * **On-surface perturbation attacks** are a type of attack that involves modifying points along the object's surface in the point cloud. * **Out-of-surface perturbation attacks** are a type of attack that involves modifying points outside the object surface in the point cloud. * **Transferability** refers to the ability of adversarial examples generated for one DL model to be successful in causing misclassification for another DL model. 
* **Adversarial defense** is a set of techniques that aim to mitigate the impact of adversarial attacks and improve the robustness of the DL model against them. * **Attack success rate** refers to the percentage of times that an adversarial attack on a DL model is successful. ### _Distance Measures_ The objective of adversarial attacks is to modify points of \(\mathcal{P}\), creating an adversarial point cloud \(\mathcal{P}^{adv}\), which could fool a DL model to output wrong results. Geometric 3D adversarial attacks can be achieved by adding, dropping, or shifting points in \(\mathcal{P}\). If the adversarial point cloud is generated by shifting points, \(\mathbf{\ell_{P}}\)**-norms** can be used to measure the distance between \(\mathcal{P}\) and \(\mathcal{P}^{adv}\), as the two point clouds have the same number of points. In this case, we can talk about the vector difference (perturbation) \(\eta=\mathcal{P}-\mathcal{P}^{adv}\), and consider \(\|\eta\|_{P}\) as the distance between \(\mathcal{P}\) and \(\mathcal{P}^{adv}\). The typical choices for \(P\) are \(P\in\{0,2,\infty\}\), and the equation is: \[D_{\ell_{P}}(\mathcal{P},\mathcal{P}^{adv})=\|\eta\|_{P}=\left(\sum_{i=1}^{n} \|p_{i}-p_{i}^{adv}\|_{P}^{P}\right)^{1/P} \tag{1}\] where \(\mathcal{P}\in\mathbb{R}^{n\times 3}\) is the original point cloud consisting of \(n\) points in 3D space, \(\mathcal{P}=\{p_{i}\,|\,i=1,2,...,n\}\) and the \(i^{th}\) point, \(p_{i}=(x_{i},y_{i},z_{i})\), is a 3D vector of coordinates. \(\mathcal{P}^{adv}\) is the adversarial point cloud formed by adding the adversarial perturbation \(\eta=(\eta_{1},\eta_{2},...,\eta_{n}),\eta_{i}\in\mathbb{R}^{3}\), to \(\mathcal{P}\). The three common \(\ell_{P}\) norms have the following interpretations: * \(\mathbf{\ell_{0}}\)**-norm** or \(\|\eta\|_{0}\) counts the number of non-zero elements in \(\eta\), so it indicates how many points in \(\mathcal{P}^{adv}\) have changed compared to \(\mathcal{P}\). * \(\mathbf{\ell_{2}}\)**-norm** or \(\|\eta\|_{2}\) is the Euclidean distance between \(\mathcal{P}^{adv}\) and \(\mathcal{P}\). * \(\mathbf{\ell_{\infty}}\)**-norm** or \(\|\eta\|_{\infty}\) is the maximum difference between the points in \(\mathcal{P}^{adv}\) and \(\mathcal{P}\). As mentioned above, \(\ell_{P}\)-norm distance criteria require that \(\mathcal{P}^{adv}\) and \(\mathcal{P}\) have the same number of points. Hence, these distance measures cannot be used for attacks that involve adding or dropping points. 
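To make these measures concrete, the following is a minimal NumPy sketch of the \(\ell_{P}\) distance in Eq. (1), together with the Hausdorff and Chamfer measures introduced just below for point clouds of unequal size. The function names and the brute-force pairwise computation are illustrative choices, not taken from any of the surveyed implementations.

```python
import numpy as np

def lp_distance(P, P_adv, p=2):
    """Eq. (1): l_P distance between two (n, 3) point clouds of equal size."""
    eta = P - P_adv                                  # per-point perturbations
    if p == 0:
        return np.count_nonzero(np.linalg.norm(eta, axis=1))  # number of changed points
    if np.isinf(p):
        return float(np.abs(eta).max())              # largest absolute coordinate change
    per_point = np.linalg.norm(eta, ord=p, axis=1)   # ||p_i - p_i^adv||_P
    return float((per_point ** p).sum() ** (1.0 / p))

def _pairwise_sq_dists(A, B):
    """Squared Euclidean distance between every point of A and every point of B."""
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)

def hausdorff_distance(P, P_adv):
    """Eq. (2): max over original points of the squared distance to the nearest adversarial point."""
    d = _pairwise_sq_dists(P, P_adv)                 # shape (n, m)
    return float(d.min(axis=1).max())

def chamfer_distance(P, P_adv):
    """Eq. (3): nearest-neighbour squared distances summed in both directions."""
    d = _pairwise_sq_dists(P_adv, P)                 # shape (m, n)
    return float(d.min(axis=1).sum() + d.min(axis=0).sum())
```

For large point clouds, the dense pairwise-distance matrix would normally be replaced by a k-d tree or GPU nearest-neighbour search.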
To quantify the dis-similarity between two point clouds that don't have the same number of \begin{table} \begin{tabular}{|c|l|} \hline **Symbol** & \multicolumn{2}{c|}{**Description**} \\ \hline \(\mathcal{P}\) & An instance of an original (input) point cloud \\ \(\mathcal{P}^{adv}\) & An instance of an adversarial point cloud \\ \(p_{i}\) & \(i\)-th point in the original (input) point cloud \\ \(p_{i}^{adv}\) & \(i\)-th point in the adversarial point cloud \\ \(\eta\) & Perturbation vector (difference between the original and adversarial point cloud) \\ \(\epsilon\) & Perturbation threshold \\ \(\alpha\) & Scale parameter \\ \(n\) & Total number of points in a point cloud \\ \(Y\) & ground-truth label associated with original input \\ \(Y^{\prime}\) & Wrong label associated with an adversarial example that deep model predicts \\ \(T\) & Target attack label \\ \(f(\cdot)\) & Mapping from the input point cloud to the output label implemented by the deep model \\ \(\theta\) & Parameters of model \(f\) \\ \(J(\cdot,\cdot)\) & Loss function used for model \(f\) \\ \(\nabla\) & Gradient \\ sign\(()\) & Sign function \\ \(P\) & Parameter of the \(\ell_{P}\)-norm; typical values of \(P\) are \(1\), \(2\) and \(\infty\). \\ \(D_{\ell_{P}}\) & \(\ell_{P}\)-norm distance \\ \(D_{H}\) & Hausdorff distance \\ \(D_{C}\) & Chamfer distance \\ \(k\) & Number of nearest neighbors of a point \\ \(\kappa\) & Confidence constant \\ \(z\) & Latent space of a point autoencoder \\ \(g(\cdot)\) & Objective function \\ \(t\) & Number of iterations \\ \(\mu\) & Mean of \(k\) nearest neighbor distance of all points in a point cloud \\ \(\sigma\) & Standard deviation of \(k\) nearest neighbor distance of all points in a point cloud \\ \hline \end{tabular} \end{table} TABLE II: Symbols and their explanations. points, **Hausdorff distance \(D_{H}\)** and **Chamfer distance \(D_{C}\)** are commonly used. Hausdorff distance is defined as follows: \[D_{H}(\mathcal{P},\mathcal{P}^{adv})=\max_{p\in\mathcal{P}}\min_{p^{adv}\in \mathcal{P}^{adv}}\|p-p^{adv}\|_{2}^{2} \tag{2}\] It locates the nearest original point \(p\) for each adversarial point \(p^{adv}\) and then finds the maximum squared Euclidean distance between all such nearest point pairs. Chamfer distance is similar to Hausdorff distance, except that it sums the distances among all pairs of closest points, instead of taking the maximum: \[\begin{split} D_{C}(\mathcal{P},\mathcal{P}^{adv})=& \sum_{p^{adv}\in\mathcal{P}^{adv}}\min_{p\in\mathcal{P}}\|p-p^{adv}\|_{2}^{2 }\\ &+\sum_{p\in\mathcal{P}}\min_{p^{adv}\in\mathcal{P}^{adv}}\|p-p^ {adv}\|_{2}^{2}\end{split} \tag{3}\] Optionally, Chamfer distance can be averaged with respect to the number of points in the two point clouds. Besides the distance measures mentioned above, there are other distance measures for point clouds, such as point-to-plane distance [44], that are used in point cloud compression. However, these are not commonly encountered in the literature on 3D adversarial attacks, so we don't review them here. ## III Adversarial Attacks This section describes the seven most common approaches for generating adversarial point clouds. Our discussion encompasses the technicalities of these seven widely used methods and also briefly touches upon similar approaches related to these seven attacks. Some of the approaches [45, 46] described in this section are extended versions of adversarial examples for 2D data, adapted for use with 3D point clouds. 
These approaches may face new challenges due to the additional dimension of the data. Other approaches [47] are specifically designed for 3D data and may be more effective at generating adversarial point clouds than methods that are simply adapted from 2D data. These approaches can take into account the unique characteristics of 3D point clouds and the deep models that process them. Overall, the goal of these approaches is to better understand how adversarial point clouds affect current deep 3D models. The most popular approaches are also summarized in Table 3, where we relate each adversarial attack to the attack categories in the context of point cloud classification tasks. ### _3D Fast Gradient Sign Method (3D FGSM)_ The fast gradient sign method (FGSM) was presented by Goodfellow _et al._[61]. In accordance with standard FGSM, the method adds an adversarial perturbation \(\eta\) to each point of a given point cloud \(\mathcal{P}\) in order to create an adversarial point cloud \(\mathcal{P}^{adv}=\mathcal{P}+\eta\). Perturbations are generated according to the sign of the gradient at each point. The perturbation can be expressed as \[\eta=\epsilon\,sign\big{(}\nabla_{\mathcal{P}}J(f(\mathcal{P}:\theta),Y)\big{)} \tag{4}\] where \(f\) is the deep model, parameterized by \(\theta\), that takes an input point cloud \(\mathcal{P}\), and \(Y\) denotes the label associated with \(\mathcal{P}\). \(\nabla_{\mathcal{P}}J(.,.)\) is the gradient of the model's loss function with respect to \(\mathcal{P}\), and \(sign(.)\) denotes the sign function. The \(\epsilon\) value is a tuning hyperparameter that determines the \(\ell_{\infty}\)-norm of the difference between the original and adversarial inputs. FGSM was extended to 3D data by Liu _et al._[54]. Three different ways were introduced in [54] to define the \(\epsilon\) constraint on \(\eta\): 1. Constraining the \(\ell_{2}\)-norm between each coordinate of the points in \(\mathcal{P}\) and \(\mathcal{P}^{adv}\). 2. Constraining the \(\ell_{2}\)-norm between each point of \(\mathcal{P}\) and its counterpart in \(\mathcal{P}^{adv}\). 3. Constraining the \(\ell_{2}\)-norm between all points of \(\mathcal{P}\) and \(\mathcal{P}^{adv}\). Because the first method severely limits the movement of points, the authors recommend the second and third methods; however, all three show little difference in attack success rate. Yang _et al._[45] used the Chamfer distance (instead of the \(\ell_{2}\)-norm) between the original point cloud and its adversarial counterpart to extend FGSM to the 3D domain. Using this approach, each point in the adversarial point cloud is perturbed slightly. There is a trade-off between the Chamfer distance and the attack success rate because, as the Chamfer distance decreases, it may become more difficult for an adversarial attack to achieve a high attack success rate. However, if the Chamfer distance is set too high, the model may be more vulnerable to adversarial attacks. Finding the right balance between these two factors can be challenging, and it may depend on the specific characteristics of the point cloud model and the type of adversarial attack being used. Figure 2 illustrates an example of an FGSM adversarial point cloud with Chamfer distances varying from 0.01 to 0.05 between the two point clouds. The authors of [45] set it to 0.02 as an appropriate distance. Apart from the FGSM attack, Yang _et al._[45] introduced another attack called "Momentum-Enhanced Pointwise Gradient (**MPG**)."
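As a concrete illustration of the single-step update in Eq. (4), before the momentum-enhanced variant is described, the following PyTorch sketch perturbs every point along the sign of the loss gradient. The classifier `model`, its assumed `(batch, n, 3)` input layout, the cross-entropy loss, and the fixed `eps` step are placeholders for illustration, not the exact setups of [54] or [45].

```python
import torch
import torch.nn.functional as F

def fgsm_point_cloud(model, P, label, eps=0.02):
    """One-step FGSM on a single (n, 3) point cloud, following Eq. (4)."""
    P_adv = P.clone().detach().requires_grad_(True)
    logits = model(P_adv.unsqueeze(0))               # assumed (batch, n, 3) input
    loss = F.cross_entropy(logits, torch.tensor([label]))
    loss.backward()                                  # gradient of J w.r.t. the points
    with torch.no_grad():
        P_adv = P_adv + eps * P_adv.grad.sign()      # eta = eps * sign(grad)
    return P_adv.detach()
```

Constraining the update with one of the three \(\ell_{2}\) schemes of [54], or with a Chamfer-distance budget as in [45], would give the corresponding constrained 3D variants.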
The MPG attack, similar to [62], integrates momentum into iterative FGSM and produces more transferable adversarial examples. ### _3D Carlini and Wagner Attack (3D C&W)_ The C&W attack was presented by Carlini and Wagner [63]. They provided three kinds of attacks with three different distance measures: the \(\ell_{0}\)-norm, \(\ell_{2}\)-norm, and \(\ell_{\infty}\)-norm. In general, generating a C&W attack can be described as an optimization problem: find the minimum perturbation \(\eta\) such that the label of the adversarial input \(\mathcal{P}^{adv}\) is changed to the target label \(T\), as encoded by the objective function \(g\), \[\begin{split}\min_{\eta}\quad& D(\mathcal{P},\mathcal{P}^{adv})+c\cdot g(\mathcal{P}+\eta)\\ s.t.\quad& f(\mathcal{P}^{adv})=T\end{split} \tag{5}\] where \(D(.)\) refers to the distance measure (it can be defined using different measures such as an \(\ell_{P}\)-norm, the Chamfer distance, or the Hausdorff distance), \(c\) is a suitably chosen constant, and \(g(\mathcal{P}^{adv})\leq 0\) if and only if \(f(\mathcal{P}^{adv})=T\). By doing so, the distance and penalty terms can be optimized more effectively. Seven objective functions \(g\) were listed by the authors [63]. An effective function according to their experiments, which was also used in other papers, is as follows \[g(\mathcal{P}^{adv})=\max\big{(}\max_{i\neq T}(Z(\mathcal{P}^{adv})_{i})-Z(\mathcal{P}^{adv})_{T},-\kappa\big{)} \tag{6}\] where \(Z\) denotes the Softmax function, and \(\kappa\) represents a constant that controls confidence. In comparison with the FGSM attack, these attacks do not set an explicit constraint on the perturbation; instead, they search for the minimal perturbation that changes the label to the target label. A 3D version of the C&W attack was first developed by Xiang _et al._[46]. According to [46], four types of attacks were proposed, as follows. In Figure 3, the four types of C&W attacks are shown, where the bottle label is misclassified as a result of these attacks. 1. **Adversarial perturbation**: points are shifted negligibly, using the \(\ell_{2}\)-norm (between all points of \(\mathcal{P}\) and \(\mathcal{P}^{adv}\)) as the distance measure to keep the shifted points close to the point cloud's surface. 2. **Adding adversarial independent points** by using two different distance measures. 1. Chamfer distance between the original point cloud and the adversarial point cloud. 2. Hausdorff distance between the original point cloud and the adversarial point cloud. These measures are used to push the independent points toward the point cloud's surface. 3. **Adding adversarial clusters** by the combination of three different distance measures. 1. Chamfer distance between the original point cloud and the adversarial cluster, used to push clusters toward the point cloud's surface. 2. The number of clusters added; using this measure, only 1 to 3 clusters are added, so only a small number of clusters is introduced. 3. Minimizing the farthest distance; here, the distance between the two most distant points in each cluster is minimized to constrain the added points to lie within small regions. 4. **Adding adversarial objects** by the combination of three different distance measures. 1. Chamfer distance between the original point cloud and the adversarial object, used to push adversarial objects toward the point cloud's surface. 2. The number of objects added.
Using this measure, only 1 to 3 objects are added, so \begin{table} \begin{tabular}{c c|c c c c c} \hline \hline \multirow{2}{*}{**Ref**} & \multirow{2}{*}{**Attack Name**} & \multicolumn{4}{c}{**Categories**} \\ & & **Targeted/Non-targeted** & **Shift/Add/Drop/Transform** & **On-/Out-surface** & **Optimized/Gradient** & **Black-/White-box** \\ \hline \multirow{4}{*}{[46]} & Perturbation & Targeted & Shift & Out & Optimized & White \\ & Independent points & Targeted & Add & Out & Optimized & White \\ & Clusters & Targeted & Add & Out & Optimized & White \\ & Objects & Targeted & Add & Out & Optimized & White \\ \hline \multirow{4}{*}{[48]} & Drop100 & Non-Targeted & Drop & On & Gradient & White \\ & Drop200 & Non-Targeted & Drop & On & Gradient & White \\ \hline \multirow{2}{*}{[49]} & Advpc & Targeted & Transform & On & Optimized & White \\ \hline \multirow{2}{*}{[50]} & ShapeAdv & Targeted & Shift & On & Optimized & White \\ \hline \multirow{2}{*}{[51]} & LG-GAN & Targeted & Transform & On & - & White \\ & \(GeoA^{5}\) & Targeted & Shift & On & Optimized & White \\ \hline \multirow{2}{*}{[53]} & KNN & Targeted & Shift & On & Optimized & White \\ \hline \multirow{2}{*}{[54]} & Extended FGSM & Non-Targeted & Shift & Out & Gradient & White \\ \hline \multirow{2}{*}{[55]} & VSA & Non-Targeted & Add & On & Optimized & White \\ \cline{2-6} & Distributional attack & Non-Targeted & Shift & On & Gradient & White \\ \cline{2-6} & Perturbation resampling & Non-Targeted & Add & Out & Gradient & White \\ & Adversarial sticks & Non-Targeted & Add & Out & Gradient & White \\ & Adversarial sinks & Non-Targeted & Add & Out & Gradient & White \\ \hline \multirow{2}{*}{[57]} & Minimal & Non-Targeted & Shift & Out & Optimized & White \\ & Minimal & Non-Targeted & Add & Out & Optimized & White \\ \hline \multirow{2}{*}{[58]} & JGBA & Targeted & Shift & On & Optimized & White \\ \hline \multirow{2}{*}{[59]} & ITA & Targeted & Shift & On & Optimized & Black \\ \hline \multirow{2}{*}{[45]} & FGSM & Non-Targeted & Shift & Out & Gradient & White \\ & MPG & Non-Targeted & Shift & Out & Gradient & White \\ \cline{2-6} & Point-attachment & Non-Targeted & Add & Out & Gradient & White \\ \cline{2-6} & Point-detachment & Non-Targeted & Drop & On & Gradient & White \\ \hline \multirow{2}{*}{[60]} & Wicker _et al._ & Both & Drop & On & Optimized & Both \\ \end{tabular} \end{table} TABLE III: Relationship between adversarial attacks and attack categories. Fig. 2: An example of original point cloud and 3D FGSM adversarial counterpart [45] with Chamfer distances varying from 0.01 to 0.05. there is only a small number of objects added. 3. \(\ell_{2}\)-norm between a real-world object and an adversarial object is used to generate shapes similar to the real-world ones. The first attack is based on shifting points, and three other attacks are based on adding points. Since directly adding points to the unbounded 3D space is not possible due to the vast search space, the last three attacks use the position of critical points as the initial positions of adversarial points (or clusters or objects). Critical points are like key points that are effective in classification results. An example of critical points in PointNet would be calculating the remaining points after max pooling. Tsai _et al._[53] developed a shifting point attack called K-Nearest Neighbor (**KNN**) attack that limits distances between adjacent points by adding an extra distance loss to 5, which calculates K-Nearest Neighbor distance for each point. 
By doing so, adversarial point clouds are restricted so that they can still be realized as physical objects. They use the Chamfer distance to measure the distance between the two point clouds. Wen _et al._[52] considered a new distance measure, named consistency of local curvatures, to guide perturbed points to lean towards object surfaces. Adopting the C&W attack framework, the authors use the combination of Chamfer distance, Hausdorff distance, and local curvature consistency as the distance measure to create a geometry-aware adversarial attack (\(GeoA^{3}\)). The generated \(GeoA^{3}\) attack has smoothness and fairness surface properties, so the difference between it and the original point cloud is imperceptible to the human eye. ### _3D Projected Gradient Descent Method (3D PGD)_ One of the most potent attacks in the 2D literature is Projected Gradient Descent (PGD), which has its roots in the pioneering paper of Madry _et al._[64]. The iterative FGSM is considered a PGD method. Following the iterative FGSM, we can generate the adversarial point cloud as \[\mathcal{P}_{0}^{adv}=\mathcal{P},\quad\mathcal{P}_{t+1}^{adv}=Clip_{\mathcal{P},\epsilon}\big{[}\mathcal{P}_{t}^{adv}+\alpha\,sign\big{(}\nabla_{\mathcal{P}}J(f(\mathcal{P}_{t}^{adv}:\theta),Y)\big{)}\big{]} \tag{7}\] where \(Clip_{\mathcal{P},\epsilon}\) limits the change of the generated adversarial input in each iteration and \(t\) is the iteration index. The PGD attack tries to increase the cost of the correct class \(Y\) without specifying which of the incorrect classes the model should select. It finds the perturbation that maximizes the cost function while keeping \(\eta\) within an \(\epsilon\)-ball: \[\begin{split}\max_{\eta}\quad& J(f(\mathcal{P}^{adv}:\theta),Y)\\ s.t.\quad& D(\mathcal{P},\mathcal{P}^{adv})\leq\epsilon\end{split} \tag{8}\] The 3D PGD attack is similar to the 2D version, but it usually uses different distance measures to calculate perturbations. In particular, Liu _et al._[56] proposed a PGD attack named the **Distributional attack**, which uses the Hausdorff distance between a triangular mesh (the original point cloud surface approximated by a triangular mesh) and the adversarial point cloud as the distance measure to push adversarial points toward the triangular mesh. This method is less sensitive to the density of points in \(\mathcal{P}\) because it uses a mesh instead of a point cloud to measure the perturbation. Figure 4 shows two examples of adversarial point clouds generated by the Distributional attack. Ma _et al._[58] proposed the Joint Gradient Based Attack (**JGBA**). They added an extra term to the optimization function of the PGD attack (8) to defeat SOR (Statistical Outlier Removal), which removes outlier points. The extra term computes the gradient of the model's loss function with respect to the points of \(\mathcal{P}\) that remain after removing outliers, while the first term (the term in (8)) computes the gradient of the loss function with respect to all points in \(\mathcal{P}\). These two terms are combined to solve the optimization problem. The JGBA attack takes the \(\ell_{2}\)-norm as the distance measure to constrain the shifting of points. ### _Shape Attack_ This type of attack attempts to morph the point cloud's shape. The concept of shape attacks can be compared to what are called unrestricted attacks on 2D images [65, 66, 67]. In such attacks, the input data might change significantly while its semantics remain unchanged, so the classifier is fooled without confusing human observers. In this regard, Liu _et al._[56] proposed three shape attacks as follows.
Figure 5 demonstrates these three shape attacks. 1. **Perturbation resampling** This attack resamples the certain number of points with the lowest gradients by farthest point sampling to ensure that all points are distributed approximately uniformly. The algorithm is iterated to generate an adversarial point cloud that deceives the model. The distance measure used to maintain the similarity between \(\mathcal{P}\) and \(\mathcal{P}^{adv}\) is Hausdorff distance. 2. **Adding adversarial sticks** During this attack, the algorithm adds four sticks to the point cloud so that one end of them is attached to the point cloud and the other end has a very small distance from the first end. The algorithm optimizes the two ends of the sticks so that the label of the point cloud be changed. Finally, it adds a few points between the two ends to make them look like sticks. 3. **Adding adversarial sinks** In this case, critical points (remaining points after max pooling in PointNet) selects as sink points, and points pull in the point cloud toward them. The goal of this attack is to minimize global changes to points that are not selected by the max pooling operation. The distance measure used to maintain the similarity between \(\mathcal{P}\) and \(\mathcal{P}^{adv}\) is \(\ell_{2}\)-norm. Lee _et al._[50] also proposed Shape-aware adversarial attacks called **ShapeAdv** that are based on injecting an adversarial perturbation \(\eta\) in the latent space \(z\) of a point cloud autoencoder. To be precise, the original point cloud is processed using an autoencoder to generate an adversarial point cloud, then the adversarial point cloud is fed to the classifier. Accordingly, Lee _et al._[50] generated three attacks with varying distance measures. These measures are used as a term for C&W loss to maintain similarity between the original and the adversarial point clouds. All three attacks calculate gradient C&W loss w.r.t adversarial perturbation in the latent space \(z\). The distance measures are defined as such for three types of attacks: 1. **Shape-aware attack in the latent space.** To make a more meaningful attack, the author minimizes the \(\ell_{2}\)-norm between the latent space \(z\) and the adversarial latent space \(z+\eta\). Using this approach, the generated adversarial point cloud is highly dissimilar from the original counterpart in terms of appearance. 2. **Shape-aware attack in the point space.** In this case, an attempt is being made to resolve the previous attack's problem. In order to maintain similarity between the original point cloud and the adversarial one, the distance measure is replaced by minimizing the Chamfer distance between the two. 3. **Shape-aware attack with auxiliary point clouds.** The attack minimizes the Chamfer distance between the adversarial point cloud and the average of \(k\) nearest neighbor, sampled from the original point cloud category. This attack aims to avoid adversarial perturbation in any direction in the latent space. To guide the direction in the latent space, it employs auxiliary point clouds sampled from the category of the original input. #### 3.2.1 Shape attacks via autoencoders and generative models Hamdi _et al._[49] proposed an attack called **Advpc** by using an autoencoder that could be transferred between networks effectively. This was achieved by introducing a new loss function and pipeline. Minimizing two losses was the goal of the Loss function. 
The first loss is C&W loss when adversarial point clouds are fed into deep models, and the second loss is C&W loss when adversarial point clouds are fed into deep models after reconstruction with a point cloud autoencoder. Using an autoencoder to generate an adversarial point cloud makes perturbations more meaningful. Consequently, their transferability from one network to another will be more promising. Lee _et al._[50] also proposed Shape-aware attacks by injecting adversarial perturbation \(\eta\) in the latent space \(z\) of a point cloud autoencoder. In section III-D, this attack was described in detail. **LG-GAN** attack [51] is proposed to generate an adversarial point cloud based on GAN (Generative Adversarial Network). The GAN is fed with the original point clouds and target labels to learn how to generate adversarial point clouds to fool deep models. In detail, it extracts hierarchical features from original point clouds using one multi-branch adversarial network, then integrates the specified label information into multiple intermediate features using the label encoder. The encoded features will be fed into a reconstruction decoder to generate the adversarial point cloud. This attack is so fast because it only takes one forward pass to generate an adversarial point cloud. Figure 6 shows an instance of the LG-GAN attack. Daier _al._[68] proposed a new type of attack based on GAN, which is created from noise rather than the original point cloud. In fact, the noise vector and the target label as the input are fed into a graph convolutional generator. It outputs the generated adversarial point cloud. The generator uses a loss function containing four parts (the objective loss, the discriminative loss, the outlier loss, and the uniform loss) to achieve a realistic adversarial attack that fools the victim network. The objective loss encourages the victim network to assign the target(incorrect) label to the adversarial point Figure 4: two example of original point clouds and distributional attacks (3D PGD adversarial counterparts) were proposed in [56]. Figure 3: An example of original point cloud and four types of 3D C&W adversarial counterpart were proposed in [46]. cloud while the discriminative loss encourages the auxiliary network to classify the adversarial point cloud correctly. The outlier loss and the uniform loss by removing outliers and generating a more uniform point cloud force the generator to preserve the point cloud shape. Langet _et al._[69] proposed a new type of adversarial attack that alters the reconstructed geometry of a 3D point cloud rather than just the predicted label, using an autoencoder trained on semantic shape classes. Mariani _et al._[70] proposed a method for creating adversarial attacks on surfaces embedded in 3D space, under weak smoothness assumptions on the perceptibility of the attack. ### _Frequency Attack (attack on other domains)_ Liu _et al._[71] have suggested an adversarial attack based on the frequency domain, which aims to enhance the transferability of generated adversarial examples. The author transformed points onto the frequency domain via graph Fourier transform (GFT). Then divide it into low-frequency components and high-frequency components, and apply perturbations to the low-frequency components to create an adversarial point cloud. In a contrasting way, Liu _et al._[72] investigated the geometric structure of 3D point clouds by perturbing each of the three frequency components (low, mid, and high-frequency). 
They found that perturbing low-frequency components of point clouds significantly damaged their rough shape. To preserve the shape of the point cloud, they created an adversarial point cloud with constraints applying perturbations to the low-frequency components and guiding perturbations to the high-frequency components. Huang _et al._[73] proposed a new attack based on applying reversible coordinate transformations to points in the original point cloud, which reduces one degree of freedom and limits their movement on the tangent plane. The best direction is calculated based on the gradients of the transformed point clouds. After that, all points are assigned a score to construct the sensitivity map. Finally, top-scoring points are selected to fool deep models. The authors in [74] suggest that by analyzing the eigenvalues and eigenvectors of the graph Laplacian matrix of a point cloud, it can be determined which areas of the model are particularly sensitive to perturbations. By focusing on these areas, the attack can be crafted more effectively. Figure 5: Two examples of original point clouds and three shape attacks were proposed in [56]. Figure 6: An example of original point cloud and LG-GAN attack were proposed in [51]. ### Minimal level of point manipulations for attractive A special type of adversarial attacks exists in the 2D domain that focuses on perturbing a minimum number of pixels in adversarial attacks [63, 75, 76, 77, 78, 79, 80]. For instance, the one-pixel attack [75], which is the name given to the attack that can fool deep models by changing only one pixel, is a famous attack of this type. Taking inspiration from 2D attacks, Kim _et al._[57] proposed adversarial attacks namely **minimal attack** that manipulate only a minimal number of points. To find an adversarial point cloud, they have modified the optimization function of the PGD attack 5 by adding a term. In this term, the number of changed points is kept to a minimum. Furthermore, they used two different distance measures, Hausdorff and Chamfer distance, to preserve the similarity between \(\mathcal{P}\) and \(\mathcal{P}^{adv}\). Figure 7 illustrates examples of minimal adversarial attack In another attack called Variable Step-size Attack **(VSA)**[55], a hard boundary constraint on the number of modified points is incorporated into the optimization function of a PGD attack 5 to preserve the point cloud's appearance. In more concrete terms, certain points with the highest gradient norms (which have the most impact on classification tasks) are initialized as modified points. By controlling the step-size (large step-size (\(\alpha\)) at the beginning and smaller at the end), this method escapes local optima and finds the most appropriate locations for the modified (adversarial) points. Kim _et al._[81] proposed a class of point cloud perturbation attacks called Nudge attacks that minimize point perturbation to flip 3D DNN results. The researchers generated adversarial point clouds using gradient-based and genetic algorithms with perturbations of up to 150 points in order to deceive DNNs. The attack can fool DNN even with a single point when the point has a large distance from the surface of 3D objects. Yang _et al._[45] provided a point-attachment attack by attaching a few points to the point cloud. A Chamfer distance is used to preserve a small distance between the newly added points and the original point cloud. 
Hard boundary constraints limit the number of points added in the point cloud, making it more difficult to detect. Tan _et al._[82] proposed a new type of attack called **One point attack** in which only a single point in the point cloud needs to be perturbed in order to fool the deep model. The authors also present an explainability method to identify the most important points in the point cloud for the attack Shape Prior Guided Attack [83] is a method that uses a shape prior, or prior knowledge of the structure of the object, to guide the generation of the perturbations, or changes made to the point cloud to create the adversarial point cloud. The goal of this method is to create adversarial point clouds that have minimal perturbations while still being able to fool the target object detection model. ### Attacks with drop points Attacks described in the previous sections mostly revolved around shifting, adding, or transforming points (transforming points into another space and making changes there). This section reviews attacks that drop some points to generate adversarial point clouds. Depending on how points are dropped, these attacks can be made. The authors have provided various algorithms for removing critical points effectively. As an example, Zheng_et al._[48] developed a method that by using a saliency map [84] finds critical points that are important in model decision-making and drops them. The points dropped by the saliency map are illustrated in red points in Figure 8. According to this method, every point is assigned a saliency score that reflects its contribution to the deep model recognition. By shifting high-saliency points towards the point cloud center, these points will not affect the surfaces much and practically operate in the same way as drop points. Consequently, the model can be deceived by shifting high-scoring points in a point cloud, resulting in adversarial point clouds. This method was proposed in two popular dropped attacks, **Drop100** and **Drop200**, which drop 100 and 200 points respectively. An attack described in [47] identifies "adversarial drop points" in a 3D point cloud that, when dropped, significantly reduce a model's accuracy. These points are specified independently of the model by analyzing and combining fourteen-point cloud features and determining which features play key roles in the model's decision-making. In [60], the critical points can be randomly determined and checked for dropping one by one. If a point increases the probability of changing the ground-truth label \(f(\mathcal{P})=Y\) is considered a critical point and, will be dropped. Otherwise, it will not be dropped. This procedure continues iteratively until the minimum critical points are dropped according to the following optimization problem Figure 7: Two examples of original point cloud and minimal adversarial attack were proposed in [57]. \[\min_{\mathcal{P}\subseteq\mathcal{P}^{adv}} (|\mathcal{P}^{adv}|-|\mathcal{P}|) \tag{9}\] \[s.t. f(\mathcal{P}^{adv})\neq f(\mathcal{P})\] where \(|\mathcal{P}^{adv}|\) and \(|\mathcal{P}|\) are number points in the original point cloud and the adversarial one. The adversarial examples are generated by dropping critical points that optimize formula 9. In order to determine the level of effectiveness of a given point in PointNet model decision-making, Yanger _al._[45] introduced a Point-detachment attack that assigned a _class-dependent importance_ to each point. 
A greedy strategy is employed to generate an adversarial point cloud, in which the most important point dependent on the true class are dropped iteratively. The _class-dependent importance_ associated with a given point is determined by multiplying the two terms. The first term uses the PointNet feature matrix before max-pooling aggregation. (In this matrix, each row represents a point in the point cloud and each column represents a special feature). The second term uses from gradient the feature matrix w.r.t. the true class output, which is a sparse matrix with non-zero only at the critical points. If a given point has the largest value in some columns, the first term sums the difference between the first and second largest values in these columns. A bigger difference means more significance for the largest value. This means that a given point that corresponds to the largest value is more effective in the model decision. The second term sums up all values for a given point at a row level in the sparse matrix. ### _Miscellaneous Attacks_ Miao_et al._[85] developed an adversarial point cloud based on rotation by applying an isometry matrix to the original point cloud. To find an appropriate isometry matrix the author used the Thompson Sampling method which can quickly find a suitable isometry matrix with a high attack rate. Liu _et al._[59] proposed an Imperceptible Transfer Attack (**ITA**) that enhances the imperceptibility of adversarial point clouds by shifting each point in the direction of its normal vector. Zhang _et al._[86] proposed a Mesh Attack that directly perturbs the mesh of a 3D object. Tang _et al._[87] presented a method called NormalAttack for generating imperceptible point cloud attacks. The method deforms objects along their normals by considering the object's curvature to make the modification less noticeable. ## IV Defenses Against Adversarial Attacks Adversarial defense methods for 3D point clouds can generally be divided into three categories: input transformation, data optimization, and deep model modification. The following sections discuss defense methods under each of these categories. ### _Input Transformation_ An input transformation is a preprocessing approach that involves applying some transformations to the input point cloud before it is fed into the deep model. This transformation could be designed to reduce the sensitivity of the model to adversarial attacks or to make it more difficult for an attacker to craft an adversarial point cloud. Input transformation methods are listed below. Figure 8: Original point clouds with labels(left), dropped points (red points) associated with highest scores(middle), and adversarial point clouds with estimated labels (right) were proposed in [48]. #### 3.1.1 Simple Random Sampling (SRS) Simple random sampling [46] is a statistical technique commonly known as **SRS** that randomly drops a certain number of points (usually 500) from an input point cloud (with the same probability). #### 3.1.2 Statistical Outlier Removal (SOR) Since there exist outliers in most adversarial attacks, Zhou _et al._[88] proposed a statistical outlier removal (**SOR**) method that trimmed the points in an adversarial point cloud if the average distance a point to its \(k\) nearest neighbors falls outside the (\(\mu+\sigma.\alpha\)), which \(\mu\) is mean and \(\sigma\) is the standard deviation of \(k\) nearest neighbor distance of all points in the original point cloud. 
#### 3.1.3 Salient points removal

This defense method [54] assumes that the adversarial points have fairly large gradient values. Under this assumption, the method calculates the saliency of each point based on the gradient of the output class of the model \(f\) w.r.t. each point, and points with high saliency are discarded.

#### 3.1.4 Denoiser and Upsampler Network (DUP-Net)

The DUP-Net defense method consists of two steps. To remove outliers, it uses SOR as a denoiser in the first step. In the second step, the output of the first step is given to an upsampler network [90] to produce a denser point cloud. Since adversarial perturbations often remove critical points from the original point cloud, this defense recovers them by producing a denser point cloud that tracks the underlying surface with a uniform distribution.

#### 3.1.5 IF-Defense

IF-Defense [91] is a preprocessing technique on the input point cloud. It first employs SOR to remove outliers from the input point cloud. In the next step, two losses are used to optimize the input points' coordinates under geometry- and distribution-aware constraints. The geometry-aware loss tries to push points towards the surface in order to minimize outliers. To estimate the surfaces of objects, the authors train an implicit function network [92, 93] on original point clouds. Because the output of an implicit function is continuous, the predicted surface is locally smooth, which reduces outlier effects. The distribution-aware loss encourages points to have a uniform distribution by maximizing the distance between each point and its k-nearest neighbors. Accordingly, IF-Defense recovers a clean shape for the input point cloud. Figure 9 shows the results of three different defense methods against a Drop100 attack, namely SOR, DUP-Net, and IF-Defense.

#### 3.1.6 Miscellaneous Defenses

Dong _et al._[94] proposed the Gather-Vector Guidance (GvG) method, which is sensitive to changes in local features. If an adversarial perturbation changes the local features, the gather-vector also changes, and the method learns to ignore such noisy local features. Liu _et al._[95] developed PointGuard, a method that creates a number of random subsets of points in the original point cloud, then predicts the label of the original point cloud based on the majority vote among the labels of these random subsets. Sun _et al._[96] proposed a framework for evaluating the robustness of 3D point cloud classification models to adaptive attacks. Ada3Diff [97] is a method for defending against adversarial attacks on 3D point cloud models. It uses an adaptive diffusion process to smooth out perturbations in the point cloud, effectively reducing the impact of the adversarial attack.

### 3.2 Data Optimization

Another category is data optimization for training, which involves optimizing the training data to improve the robustness of the deep model to adversarial attacks.
This could involve techniques such as data augmentation, which involves generating additional training examples by applying transformations to the existing training data, or adversarial training, which involves intentionally introducing adversarial examples into the training data in order to improve the model's robustness to such attacks. The following methods can be used to optimize data. #### 3.2.1 Adversarial Training In terms of modified training sets, adversarial training [61] is an effective defense method, which augments the training set with adversarial examples to increase the model's robustness against attacks. To be precise, in standard training, the model is trained using only the original point clouds, while adversarial training uses both original and adversarial point clouds. The adversarial training for point clouds is described in [54] for the first time. The authors of [54] and [59] trained a deep model by augmenting the FGSM and ITA attacks. As a way to find a stronger adversarial training method, the authors in [98] used adaptive attacks. Using this new adversarial training, different types of attacks are added to the deep model by embedding a perturbation-injection module. This module is utilized to generate the perturbed features for adversarial training. Sun _et al._[99] applied self-supervised learning to adversarial training with 3D point clouds. In different tries, the authors in [45, 100] add Gaussian noise to each point by randomly sampling values from a Gaussian distribution. By doing so, the attacked models can escape from the narrow adversarial subspace. Also, they developed a Quantification Method for converting point cloud coordinates into low numerical precision with multiple quantification levels, which mitigates small variations in coordinates. These noisy point clouds are then used to augment training sets. #### 4.3.2 PointCutMix Zhang _et al._[101] proposed PointCutMix technique that generated a new training set by swapping points between two optimally aligned original point clouds and training a model with this new training set. #### 4.3.3 Low Pass Frequency-Defense (LPF-Defense) In LPF-Defense [102], deep models are trained with the low-frequency version of the original point cloud. More specifically, with the Spherical Harmonic Transform (SHT) [103], original point clouds were transformed from the spatial to the frequency domain. The low-frequency version of the original point cloud is then retrieved back into the spatial domain by filtering the high-frequency input data components. This method is based on the assumption that 3D deep models are overly dependent on features with unnecessary information in the training sets, making them vulnerable to adversarial point clouds. Therefore it discards the unnecessary information from the training data by suppressing the high-frequency contents in the training phase. ### _Deep Model Modification_ Another category is deep model modifications, which refer to modifying the architecture of the deep model itself in order to improve its robustness to adversarial attacks. This could be achieved by making changes to the original deep neural network architecture during training. Examples of this category are given below. #### 4.3.1 Defense-PointNet The authors in [104] have provided a defense method by splitting the PointNet deep model into two parts. The first part is the feature extractor, with a discriminator attached to its last layer enabling it to learn more powerful features. 
The feature extractor takes as input a mini-batch of original point clouds and their adversarial counterparts (generated by the FGSM attack) to extract features and also to fool the discriminator. The second part is the PointNet classifier, which is trained to classify each input correctly. The model parameters are optimized using three different loss functions: a classifier loss, a discriminator loss, and a feature extractor loss. While the discriminator loss attempts to distinguish the original point cloud from the adversarial one, the feature extractor loss misleads the discriminator into labelling every original/adversarial vector as original, and the classifier loss encourages the classifier to give correct predictions for each input.

Figure 9: Results of three different defense methods on the Drop100 attack. Figure taken from [91].

#### 4.4.2 Context-Consistency dynamic graph Network (CCN)

Li _et al._[105] proposed two methodologies to improve the adversarial robustness of 3D point cloud classification models. The first is the introduction of a novel point cloud architecture called the Context-Consistency dynamic graph Network (CCN), which is designed to be more robust to adversarial attacks. The second involves an in-depth analysis of the factors that affect the robustness of point cloud models, and the development of techniques to mitigate these factors. In order to provide a more robust model against adversarial point clouds, the authors integrate the two techniques.

#### 4.4.3 Lattice Point Classifier (LPC)

Li _et al._[106] proposed embedding a declarative node into the network to transform adversarial examples to the clean manifold. The authors proposed an effective instantiation, the Lattice Point Classifier (LPC), which projects each point cloud onto the lattice and generates a 2D image for classification using 2D CNNs (structured sparse coding in the permutohedral lattice is defined as the declarative node in LPC). The declarative nodes defend against adversarial attacks through implicit gradients, leading the attacks to wrong updating directions for the inputs.

## V Taxonomy of datasets and victim models

A variety of 3D point cloud datasets have been collected to evaluate shape classification with DNNs, including ModelNet [107], ShapeNet [108], ScanObjectNN [109], the McGill Benchmark [110], ScanNet [111], and Sydney Urban Objects [112]. A summary of the characteristics of these datasets is provided in Table 4. Among these, four datasets, namely ModelNet10 [107], ModelNet40 [107], ShapeNet [108], and ScanObjectNN [109], have mostly been used to evaluate attack and defense techniques. A taxonomy of the datasets and victim models used in recent studies is given in Table 5.

## VI Challenges and discussions

This section discusses the current challenges around adversarial point clouds, as well as potential solutions. Adversarial point clouds are an interesting problem for both adversaries and researchers: they exploit the vulnerability of deep models and, at the same time, help defenders guard against such attacks. Our discussion focuses on the following questions.

### What factors affect the success of adversarial attacks on 3D point clouds?
There are some general factors that matter most for adversarial attacks on 3D point clouds, including:

- The complexity and robustness of the model being attacked: when a deep model is less complex and less robust, it is less immune to adversarial attacks, and a less sophisticated or weaker attack suffices to fool it.
- The structure of the 3D point cloud: the distribution of points in the point cloud and the presence of outliers can potentially affect the success of most types of adversarial point clouds.

### Comparison of different defense methods

A 3D point cloud's distribution and outliers can significantly impact the effectiveness of defense methods against adversarial point clouds. For example, input transformation techniques are designed to make it more difficult for an attacker to craft adversarial point clouds. These techniques may rely on modifying the distribution of points in the point cloud or dropping outliers. By doing this, the structure of the original point cloud is disrupted, which makes it harder for the attacker to make successful modifications. Other defense methods, such as adversarial training, may not rely as heavily on these factors and may not be as effective. Adversarial training is one of the most powerful defenses among 2D techniques, but it does not perform as well on 3D data. The paper [64] shows that adversarial training maximizes the classifier loss by finding a worst-case example inside a constrained search space. This procedure can change the decision boundaries so that the model becomes more robust to different types of attacks. This argument, however, relies on the regular structure of 2D data: 2D attacks are created by changing pixel values on a regular grid, whereas a point cloud consists of a set of 3D points placed irregularly in space. Furthermore, the point clouds used in the literature are constructed by randomly sampling 1024 points from each 3D object. Therefore, points are not uniformly distributed across the object's surface, and any two point clouds from the same class (e.g., airplane) do not share the same regular structure, as opposed to the 2D case. These structural differences result in different defense behaviors in the adversarial training phase. Therefore, training the model with the worst-case example inside a constrained search space cannot guarantee robustness against other attacks. In other words, due to the irregular structure of point clouds, it is very challenging to model adversarial points so as to eliminate their impact on the defense.

### Comparison of 3D point clouds and image data in terms of attacks and defenses

There are several differences between 3D point clouds and images in terms of adversarial attacks and defenses. An adversarial attack on 3D point clouds can be more complex: typically, an adversarial attack on image data involves adding small perturbations to the pixel values, whereas adversarial attacks on 3D point clouds can involve more complex modifications, such as adding or dropping points, or changing the connectivity of the points in the point cloud. Moreover, the structure of 3D point clouds is different from that of images. Images are typically represented as 2D arrays of pixel values, while 3D point clouds are represented as sets of 3D points. This difference in structure can make it more challenging to apply defense methods that were developed for image data to 3D point clouds.
On the other hand, 3D point clouds can be more sensitive to perturbations. Because 3D point clouds are used to represent physical objects in the real world, even small perturbations to the point cloud can result in significant changes to the shape or appearance of the represented object. This sensitivity can make it more difficult to develop robust defense methods for 3D point clouds. ## VII Conclusion Adversarial attacks on 3D point cloud classifications have become a significant concern in recent years. These attacks can successfully manipulate the classification of 3D point clouds, leading to incorrect decisions with potentially harmful consequences. Adversarial attacks on 3D point clouds can be categorized into several types, including drop attacks, add attacks, shift attacks, and transform attacks. To defend against these attacks, researchers have proposed two main categories of approaches: input transformation and adversarial training. Input transformation methods aim to preprocess the input data in order to make it more robust to adversarial perturbations, while adversarial training involves augmenting the training data with adversarial examples in order to improve the model's robustness. For more robust protection against adversarial attacks, input transformation techniques can be combined with adversarial training. Some potential future directions for research on adversarial attacks on 3D point clouds include optimizing attack methods by targeting only a subset of points in the point cloud and focusing on the local rather than global structure of the point cloud, as well as exploring the robustness of 3D point cloud classifiers to attacks that are specifically designed for 3D data rather than adapted from methods developed for 2D images.
2304.01547
Regularization of the policy updates for stabilizing Mean Field Games
This work studies non-cooperative Multi-Agent Reinforcement Learning (MARL) where multiple agents interact in the same environment and whose goal is to maximize the individual returns. Challenges arise when scaling up the number of agents due to the resultant non-stationarity that the many agents introduce. In order to address this issue, Mean Field Games (MFG) rely on the symmetry and homogeneity assumptions to approximate games with very large populations. Recently, deep Reinforcement Learning has been used to scale MFG to games with larger number of states. Current methods rely on smoothing techniques such as averaging the q-values or the updates on the mean-field distribution. This work presents a different approach to stabilize the learning based on proximal updates on the mean-field policy. We name our algorithm Mean Field Proximal Policy Optimization (MF-PPO), and we empirically show the effectiveness of our method in the OpenSpiel framework.
Talal Algumaei, Ruben Solozabal, Reda Alami, Hakim Hacid, Merouane Debbah, Martin Takac
2023-04-04T05:45:42Z
http://arxiv.org/abs/2304.01547v2
# Regularization of the policy updates for stabilizing Mean Field Games ###### Abstract This work studies non-cooperative Multi-Agent Reinforcement Learning (MARL) where multiple agents interact in the same environment and whose goal is to maximize the individual returns. Challenges arise when scaling up the number of agents due to the resultant non-stationarity that the many agents introduce. In order to address this issue, Mean Field Games (MFG) rely on the symmetry and homogeneity assumptions to approximate games with very large populations. Recently, deep Reinforcement Learning has been used to scale MFG to games with larger number of states. Current methods rely on smoothing techniques such as averaging the q-values or the updates on the mean-field distribution. This work presents a different approach to stabilize the learning based on proximal updates on the mean-field policy. We name our algorithm _Mean Field Proximal Policy Optimization (MF-PPO)_, and we empirically show the effectiveness of our method in the OpenSpiel framework.1 Footnote 1: This preprint has not undergone peer review or any post-submission improvements or corrections. The Version of Record of this contribution is published in PAKDD2023, and will be available online. Keywords:Reinforcement learning mean-field games proximal policy optimization. ## 1 Introduction Despite the recent success of Reinforcement Learning (RL) in learning strategies in games (e.g., the game of Go [1], Chess [2] or Starcraft [3]), learning in games with a large number of players is still challenging. Independent Learning leads to instabilities due to the fact that the environment becomes non-stationary. Alternatively, learning centralised policies can be applied to handle coordination problems and avoid the non-stationarity. However, centralised learning is hard to scale, as the joint action space grows exponentially with the number of agents. Many works in Multi-Agent Reinforcement Learning (MARL) have succeeded in decomposing the objective function into individual contributions [4], although this is also intractable when the number of agents is large. In this sense, mean field theory addresses large population games by approximating the distribution of the players. An infinite population of agents is represented by a continuous distribution of identical players that share the same behaviour. This reduces the learning problem to a representative player interacting with the representation of the whole population. This work in particular focuses on learning in Mean Field Games (MFG), non-cooperative games in which many agents act independently to maximise their individual reward, and the goal is to reach the Mean Field Nash Equilibrium (MFNE). Learning in MFG is not an easy task as most of the problems do not have an analytical solution. Traditionally numerical methods have been used to address these problems [5]; nonetheless, these methods do not scale well. In this sense, numerous game theory approaches have been brought into MFG. A classical algorithm is the Banach-Picard (BP) [6] algorithm, which uses a fixed-point iteration method to interactively update the population's behaviour based on the best response of a single representative agent against the mean-field distribution. However, acting in a best response to other agents might cause the others to actuate in the same way, leading to instabilities in the learning (referred to as the _curse of many agents_ in game-theory [7]). 
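To make the Banach-Picard scheme sketched above concrete, a minimal, illustrative fixed-point loop is shown below. The callables `best_response` and `induced_distribution` are assumed to be supplied by the user (e.g., an RL solver for the induced MDP and an oracle for the population dynamics); they are placeholders, not part of any specific library.

```python
import numpy as np

def banach_picard(mu0, best_response, induced_distribution, n_iters=50, tol=1e-6):
    """Fixed-point iteration for MFG: alternate best response and distribution update.

    mu0: initial mean-field distribution as a NumPy array over states.
    best_response(mu): returns a policy maximizing the return in the MDP induced by mu.
    induced_distribution(policy): returns the distribution induced when all agents follow policy.
    """
    mu = np.asarray(mu0, dtype=float)
    for _ in range(n_iters):
        pi_br = best_response(mu)                 # representative agent: solve the induced MDP
        mu_next = induced_distribution(pi_br)     # population: distribution induced by the best response
        if np.max(np.abs(mu_next - mu)) < tol:    # (approximate) fixed point reached
            return pi_br, mu_next
        mu = mu_next                              # naive update; smoothing/averaging helps in practice
    return pi_br, mu
```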
In practice, smoothing techniques derived from optimization theory are used to guarantee the convergence of these algorithms under reasonable assumptions [8]. More recently, deep RL has been introduced to scale MFG to games with larger state spaces [8]. Nevertheless, traditional approaches cannot be directly applied when using non-linear function approximators as neural networks to represent the objectives in the game. Traditional algorithms average the policy, the mean-field distribution, or both, in order to guarantee a theoretical convergence to the MFNE. This can be done in the case of games with small state spaces under linear or tabular policies, but it is not straightforward when using neural networks. Recent works [9] have derived deep learning algorithms based on value learning suitable for MFG. However, to the best of our knowledge, there is no approach based on policy optimization that addresses this issue. The main contribution of this paper is bringing policy-based optimization into MFG. This is performed through developing an algorithm based on Proximal Policy Optimization (PPO) [10]. We refer to this algorithm as _Mean Field Proximal Policy Optimization (MF-PPO)_. Conducted experiments in the OpenSpiel framework [11] show better convergence performance of MF-PPO compared with current state-of-the-art methods for MFG. This validates our approach and broadens the spectrum of algorithms on MFG to policy-based methods, traditionally dominant in the literature on environments with large or continuous action spaces. The remainder of this paper is organised as follows. In Section 2, we present the state-of-the-art related to solving the mean-field games. In Section 3, we provide a formal description of the problem formulation. Then, in Section 4 we present the designed algorithm MF-PPO, that we validate experimentally in Section 5. Finally, Section 6 concludes the paper. ## 2 Related works In the literature, numerous RL approaches have been designed to address MFG. These can be classified based on the property used to represent the population into (i) mean-field action and (ii) mean-field state distribution. Examples of mean-field action can be found in [12], in these works the interaction within the population is done based on the average behaviour of the neighbours. A more common approach is using the mean-field state distribution [13]. This approach approximates the infinitum of agents by the state distribution or _distribution flow_ of the population. In this case, each player is affected by other players through an aggregate population state. Also, regarding the problem setup, MFG can be classified as (i) stationary or (ii) non-stationary. In the stationary setup, the mean field distribution does not evolve during the episode [14]. A more realistic scenario, and the one discussed in this work, is the non-stationary [15]. In that case the mean-field state is influenced by the agents decisions. The methodology to address MFG in the literature is also diverse. The classical method for learning the MFNE is the (BP) algorithm [6]. BP is a fixed point iteration method that iteratively computes the Best Response (BR) for updating the mean field distribution. The convergence of the BP algorithm is restrictive [16], and in practice, it might appear with oscillations. To address this issue, the Fictitious Play (FP) algorithm [17] averages the mean field distribution over the past iterations. This stabilizes the learning and improves the convergence properties of the algorithm [18]. 
Several attempts have been made in the literature to scale FP. For example, [19] proposed Neural Fictitious Self-play algorithm based on fitted Q-learning that learns from best response behaviours on previous experiences. Also, Deep Average Fictitious Play [9] presents a similar idea in a model-free version of FP in which the BR policy is learned though deep Q-learning. Although learning the best response using deep RL allows scaling this method to games with larger state spaces, in practice learning the BR policy is computationally inefficient. In this sense, algorithms based on policy iteration have been also applied to MFG [20]. These methods have proved to be more efficient [8] as they do not require the computation of the best response but they perform a policy update per evaluation step. An example is Online Mirror Descent (OMD) [21], which averages the evaluation of the Q-function from where it derives the mean-field policy. A deep learning variant of it is the Deep-Munchausen OMD (D-MOMD) [9]. This algorithm uses the Munchausen algorithm [22] to approximate the cumulative Q-function when parameterized using a neural network. Last but not least, oracle-free methods [26] are complete model-free RL methods applied to MFG. Oracle-free algorithms do not require the model dynamics but they estimate the mean-field distribution induced by the population. In [23], the authors propose a two timescale approach with a Q-learning algorithm suitable for both cooperative and non-cooperative games that simultaneously updates the action-value function and the mean-field distribution in a two timescale setting. Regardless of the numerous works on value-based learning, the attention to policy optimization methods in MFG has been limited. Related works cover the linear quadratic regulator setting [27] but not general RL, a summary can be observed in Table 1. Motivated by [28], work that emphasizes the effectiveness of PPO in multi-agent games, this paper brings PPO into MFG by presenting a solution to the stabilization issues based on proximal policy updates. ## 3 Problem formulation In Mean Field Games (MFG) the interaction between multiple agents is reduced to a uniform and homogeneous population represented by the mean-field distribution. This is the distribution over states that the continuum of agents define when following the mean-field policy. The way in which MFG addresses the problem is selecting a _representative player_ that interacts with the mean-field distribution. This simplifies the problem and facilitates the computation of the equilibria. More formally, we consider the non-stationary setting with a finite time horizon in which we denote by \(n\in\{0,1,...,N_{T}\}\) the time steps in an episode. The state and actions of an agent at each time-step are denoted as \(s_{n}\in\mathcal{S}\) and \(a_{n}\in\mathcal{A}\), both finite in our setting. The mean-field state is represented by the distribution of the population states \(\mu_{n}\in\Delta^{|\mathcal{S}|}\), where \(\Delta^{|\mathcal{S}|}\) is the set of state probability distributions on \(\mathcal{S}\). In the non-stationary setting, the mean field distribution \(\mu_{n}\) evolves during the episode and it characterizes the model dynamics \(P:\mathcal{S}\times\mathcal{A}\times\Delta^{|\mathcal{S}|}\rightarrow\Delta^ {|\mathcal{S}|}\) and the reward function \(R:\mathcal{S}\times\mathcal{A}\times\Delta^{|\mathcal{S}|}\rightarrow\mathbb{R}\). The policy of the agents depends on a prior on the mean-field distribution. 
Although, without loss of generality, we can define a time-dependent policy \(\pi_{n}\in\Pi:\mathcal{S}\rightarrow\Delta^{|\mathcal{A}|}\) that independently reacts to the mean-field state at every step. The model dynamics are therefore expressed as \[s_{n+1}\sim P(\cdot|s_{n},a_{n},\mu_{n})\qquad a_{n}\sim\pi_{n}(\cdot|s_{n}). \tag{1}\] We define the policy \(\boldsymbol{\pi}:=(\pi_{n})_{n\geq 0}\) as the aggregated policy for every time-step, similarly the mean-field distribution \(\boldsymbol{\mu}:=(\mu_{n})_{n\geq 0}\). The value function is calculated as \(V^{\boldsymbol{\pi},\boldsymbol{\mu}}(s):=\mathbb{E}[\sum_{n=0}^{N_{T}}\gamma ^{n}r(s_{n},a_{n},\mu_{n})]\). Given a population distribution \(\boldsymbol{\mu}\) the objective for the representative agent is to learn the policy \(\boldsymbol{\pi}\) that maximizes the expected total reward, \begin{table} \begin{tabular}{l c c c c} \hline \hline & Setting & Learning & Requires Oracle & Best Response \\ \hline Heinrich et al. [19] & General RL & Value-based & Yes & Yes \\ Laurière et al. [9] & General RL & Value-based & Yes & No \\ Koppel et al. [23] & General RL & Value-based & No & No \\ Xie et al. [24] & General RL & Value-based & No & No \\ Fu et al. [25] & LQR & Policy-based & No & Yes \\ **Our Approach** & General RL & Policy-based & Yes & No \\ \hline \end{tabular} \end{table} Table 1: Summary on the RL literature for MFG. \[J(\boldsymbol{\pi},\boldsymbol{\mu})=\mathbb{E}_{a_{n}\sim\pi_{n}(\cdot|s_{n}),s_{ n+1}\sim P(\cdot|s_{n},a_{n},\mu_{n})}\left[\sum_{n=0}^{N_{T}}\gamma^{n}R(s_{n},a_{n}, \mu_{n})\mid\mu_{0}\sim m_{0}\right] \tag{2}\] where \(\mu_{0}\) is the initial mean-field state drawn from the initial distribution of the population \(m_{0}\) and \(0<\gamma<1\) denotes the discount factor. **Nash equilibrium in MFG.** The desired solution in games is computing the Nash Equilibrium. This is the set of policies that followed by all players maximize their individual reward such that no agent can unilaterally increase deviating from the Nash policy. Furthermore, in MFG the agents share the same interest and an extension of the Nash equilibrium is needed. Definition 1: A mean-field Nash equilibrium (MFNE) is defined as the pair \((\pi^{*},\mu^{*})\) that satisfies the rationality principle \(V^{\pi^{*},\mu^{*}}(s)\geq V^{\pi,\mu^{*}}(s)\;\forall s,\pi\); and the consistency principle, \(\mu^{*}\) is the mean-field state distribution induced by all agents following optimal policy \(\pi^{*}\). **Mean-field Dynamics.** This work relies on an _oracle_ to derive the mean-field state. Given the initial mean-field distribution \(\mu_{0}=m_{0}\), the oracle uses the transition function \(P\) to compute the mean-field distribution induced by the policy \(\pi_{n}\) at each time step \(n\in\{0,1,...,N_{T}\}\), \[\mu_{n+1}(s^{\prime})=\sum_{s,a\in\mathcal{S}\times\mathcal{A}}\mu_{n}(s)\pi_ {n}(a|s)P(s^{\prime}|s,a,\mu_{n})\quad\forall s^{\prime}\in\mathcal{S}. \tag{3}\] In a similar way, the policy is evaluated analytically by computing the expected total costs of the policy \(\boldsymbol{\pi}\) under the mean field \(\boldsymbol{\mu}\) as follows: \[J(\boldsymbol{\pi},\boldsymbol{\mu})=\sum_{n=0}^{N_{t}}\sum_{s,a\in\mathcal{S }\times\mathcal{A}}\mu_{n}(s)\pi_{n}(a|s)R(s,a,\mu_{n}). \tag{4}\] **Exploitability.** The metric of choice for estimating the MFNE convergence is the exploitability. 
This metric is well known in game theory [29, 30] and characterizes the maximum increase in the expected reward a representative player can obtain by deviating from the policy adopted by the rest of the population. The exploitability is obtained as follows:

\[\phi(\boldsymbol{\pi},\boldsymbol{\mu})=\max_{\boldsymbol{\pi}^{\prime}}J(\boldsymbol{\pi}^{\prime},\boldsymbol{\mu})-J(\boldsymbol{\pi},\boldsymbol{\mu}). \tag{5}\]

An interpretation of the exploitability is to consider it as a measure of how close the learned policy is to the MFNE. Small values of exploitability indicate less incentive for any agent to change its policy.

## 4 Proposal: Proximal policy updates for MFG

Learning in MFG is commonly achieved in the literature via fixed-point iteration [6], where the set \(\{(\pi_{k},\mu_{k})\}_{k\geq 0}\) is recursively updated. Particularly, at iteration \(k\) the best response policy for the MDP induced by \(\mu_{k}\) is computed, and the mean field is updated to \(\mu_{k+1}\) as a result of the many agents following \(\pi_{k}^{BR}\). Under the assumptions discussed in [6], the contraction mapping property holds and the algorithm is proven to converge to a unique fixed point \((\pi^{*},\mu^{*})\). This problem corresponds to finding the optimal policy for an MDP induced by \(\mu\), \(\text{MDP}_{\mu}:=(\mathcal{S},\mathcal{A},P(\mu),R(\mu),\gamma)\), which can be solved using modern RL techniques that in practice allow the method to scale to large games. However, solving the BR is demanding and, in practice, it leads to instabilities in learning. In this paper, we aim to provide a solution to these instabilities by regularizing the updates of the mean-field policy. To this end, we bring the proximal policy updates developed in PPO [10] into MFG. Let us start by defining how PPO can be used to estimate the best response \(\hat{\pi}_{\mu}^{BR}\) to the \(\text{MDP}_{\mu}\). Based on the trajectories collected during iteration \(k\), one can perform policy optimization on the following objective function

\[\mathcal{J}_{\mu}^{PPO}(\theta)=\hat{E}_{n}\left[\min(r_{n}\hat{A}_{n},\;\text{clip}(r_{n}\pm\epsilon)\hat{A}_{n})\right]\qquad r_{n}(\theta)=\frac{\pi(\cdot|s_{n};\theta)}{\pi(\cdot|s_{n};\theta_{\text{old}})} \tag{6}\]

where \(\pi(a_{n}|s_{n};\theta)\) is a stochastic policy, \(\pi(a_{n}|s_{n};\theta_{\text{old}})\) is the policy before the update, and \(\hat{A}_{n}\) is an estimator of the advantage function at timestep \(n\). \(\hat{E}\) is the empirical expectation based on Monte Carlo rollouts. The theory behind PPO suggests relaxing the update on the policy to prevent large destructive updates by using a clip function applied to the ratio between the old policy and the current one. PPO imposes this constraint by forcing the ratio of the policy update \(r_{n}(\theta)\) to stay within a proximal interval, controlled with the clipping hyperparameter \(\epsilon\). In this work, we extend the regularization of the policy updates to successive iterations on the MFG. We call the algorithm _Mean-Field Proximal Policy Optimization (MF-PPO)_; it combines a double proximal policy regularization for the intra- and inter-iteration policy updates. This prevents the mean-field policy from having a large update between iterations, obtaining a smoothing effect that has previously been reported beneficial in value-based algorithms for MFG [9].
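Schematically, the double-clipped surrogate just described can be written as follows (it is formalized in Eqs. (7)-(8) below). This is a hedged PyTorch sketch: the tensor shapes, the way the two frozen reference policies are cached as log-probabilities, and the default clipping values are assumptions for illustration, not the authors' implementation.

```python
import torch

def mf_ppo_surrogate(logp, logp_old_episode, logp_old_iteration, advantages,
                     alpha=0.5, eps_e=0.2, eps_k=0.2):
    """Clipped surrogate combining intra-iteration (episode) and inter-iteration ratios.

    logp*: log-probabilities of the taken actions under the current policy and the two
    frozen reference policies; advantages: advantage estimates. All tensors have shape (T,).
    """
    r_e = torch.exp(logp - logp_old_episode)     # ratio w.r.t. the policy before the episode update
    r_k = torch.exp(logp - logp_old_iteration)   # ratio w.r.t. the policy of the previous iteration
    surr_e = torch.min(r_e * advantages, torch.clamp(r_e, 1 - eps_e, 1 + eps_e) * advantages)
    surr_k = torch.min(r_k * advantages, torch.clamp(r_k, 1 - eps_k, 1 + eps_k) * advantages)
    # Maximize the alpha-weighted surrogate; return its negative as a loss for gradient descent.
    return -(alpha * surr_e + (1.0 - alpha) * surr_k).mean()
```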
We denote the probability ratios for the intra- and inter-iteration policy updates as \[r_{n}^{\mathrm{e}}(\theta)=\frac{\pi_{n}(a_{n}|s_{n};\theta)}{\pi_{n}(a_{n}|s_{n };\theta_{\mathrm{old}}^{e})}\qquad r_{n}^{\mathrm{k}}(\theta)=\frac{\pi_{n}(a _{n}|s_{n};\theta)}{\pi_{n}(a_{n}|s_{n};\theta_{\mathrm{old}}^{k})} \tag{7}\] where the superscript \(k\in[1,K]\) refers to the iteration and the superscript \(e\in[1,E]\) to the episode. In order to derive an appropriate objective function for MFG we extend the objective function of the classical PPO by adding an additional term that limits the policy updates w.r.t. the previous iteration. We can think in this term as a proximal update that limits the divergence between iterations preventing the policy from reaching the BR at iteration \(k\). The MF-PPO objective is therefore expressed as \[\mathcal{L}^{\mathrm{MF\text{-PPO}}}(\theta)=\hat{E}[\alpha\min( r_{n}^{\mathrm{e}}\hat{A}_{n},\mathrm{clip}(r_{n}^{\mathrm{e}}\pm\epsilon_{e}) \hat{A}_{n})\\ +(1-\alpha)\min(r_{n}^{k}\hat{A}_{n},\mathrm{clip}(r_{n}^{ \mathrm{k}}\pm\epsilon_{k})\hat{A}_{n})] \tag{8}\] where \(0<\alpha<1\) balances the proximity of the policy between the inter and intra-iteration updates. ## 5 Experimentation In this section, we describe the experiments conducted to validate the proposed MF-PPO algorithm. We analyze the hyper-parameter selection and finally, we present the numerical results obtained against the state-of-the-art algorithms namely Deep-Munchausen Online mirror decent (D-MOMD) and Deep Average-Network Fictitious Play (D-ANFP) [9]. ### Experimental setup We opted for the OpenSpiel suite [11] to benchmark the proposed algorithm in selected crowd modeling with congestion scenarios. Particularly the scenarios used for evaluation are: **Four-rooms.** A simple setup on a four-room grid with \(10\times 10\) states and a time horizon of 40 steps. The agents receive a reward for navigating close to the goal located in the bottom right room while there also exists an adversion to crowded areas. **Maze.** The maze is a more complex scenario with \(20\times 20\) states and a time horizon of 100 steps. In this setting, the agent must correctly steer through a complex maze to reach the goal while, similar to the previous case, evading congested areas. In both environments the state-space is a two-dimension grid, where the state is represented by the agent's current position. Furthermore, the action space consists of five discrete actions: up, down, left, right, or nothing. Those actions are always valid if the agent is confined within the boundaries. Finally, the reward signal is defined as: \[r(s,a,\mu)=r_{\text{pos}}(s)+r_{\text{move}}(a,\mu(s))+r_{\text{pop}}(\mu(s)) \tag{9}\] where the first term measures the distance to the target, the second penalizes movement, and the last term is a penalty which encourages the agents to avoid crowded areas, and is given by the inverse of the concentration of the distribution at a particular state. ### Numerical results In this section, we present the results MF-PPO achieves in the selected scenarios. We compare our results with Deep-Munchausen Online Mirror Descent (D-MOMD) and Deep Average-Network Fictitious Play (D-ANFP) [9], both state-of-the-art algorithms in the selected settings. We report the exploitability metric, which is used in the literature as a proxy for quantifying convergence to the MFNE. The results are depicted in Fig. 1 and summarized in Table 2. 
Four-rooms.Obtained results show that MF-PPO outperforms D-NAFP and D-MOMD algorithms, not only by converging to a better \(\epsilon\)-MFNE solution but, as depicted in Fig. 1, converging in a significantly fewer number of steps. We speculate that this can be credited to the fact our solution learns the optimal policy directly which in this situation is superior to learning the value function that the other methods use and then extract the optimal policy. Fig. 1(b) shows the learned mean-field distribution learned using MF-PPO. The agents gather as expected around the goal state at the right-bottom room, reaching it by equally distributing over the two symmetric paths. Maze.Similarly, on the Maze environment Fig. 1(c) shows that MF-PPO and D-MOMD converge to a favorable \(\epsilon\)-MFNE solution, whereas D-ANFP does to a sub-optimal solution. Still, the policy learned by MF-PPO is closer to the MFNE, reported by a smaller exploitability. Finally, Fig. 1(d) corroborates that the flow of agents over the maze distribute around the goal located in the lower right part of the maze. In Table 3 we present the CPU execution time of the tested algorithms. In all experiments we used AMD EPYC 7742 64-Core server processor to produce \begin{table} \begin{tabular}{l c c} \hline \hline Environment & Four Rooms & Maze \\ \hline D-MOMD & 64.41 \(\pm\) 24.84 & 153.80 \(\pm\) 93.05 \\ D-ANFP & 127.37 \(\pm\) 15.19 & 929.54 \(\pm\) 46.36 \\ **MF-PPO** & **15.84**\(\pm\) 1.95 & **93.63**\(\pm\) 38.11 \\ \hline \end{tabular} \end{table} Table 2: Comparison of the exploitability metric of the different algorithms. Results are averaged over five different seeds and reported as mean \(\pm\) std. presented results. We note that the official implementation of D-ANFP and D-MOMD was used to reproduce previously presented results. MF-PPO coverages faster than both approaches, more notably, as evidenced by Fig. 1(a) in the four rooms case, MF-PPO converges within roughly 34 minutes compared to hours by the other two methods. We see a similar, although not as remarkable, trend in the maze as well, where MF-PPO converges in roughly five and half hours to a better MFNE point in comparison with the other techniques. \begin{table} \begin{tabular}{l c c} \hline \hline Environment & Four Rooms & Maze \\ \hline D-MOMD & 3H48M \(\pm\) 1.79 Min & 7H35M \(\pm\) 1.53 Min \\ D-ANFP & 8H35M \(\pm\) 56.36 Min & 9H45M \(\pm\) 2.24 Min \\ **MF-PPO** & **33M32S**\(\pm\) 16.58 Sec & **5H36M**\(\pm\) 3.37 Min \\ \hline \end{tabular} \end{table} Table 3: Comparison of the CPU execution time of the different algorithms. Results are averaged over five different seeds and reported as mean \(\pm\) std. Figure 1: On the left, the exploitability results obtained on the (a) four-room and (c) maze environments. Results are averaged over five seeds and the confidence interval corresponds to one standard deviation. On the right, the mean-field distribution of the agents generated by the MF-PPO policy on the (b) four-room and (d) maze environments. ### Analysis on the hyper-parameters This section investigates the influence on the hyper-parameter selection in the learning process. The experiments are conducted on the Maze environment. First, we focus on the configuration where \(\alpha=0\), i.e., we update the iteration policy only and neglect the episode updates entirely. The results are depicted in Fig. 2(a), we see no sign of convergence indicated by high exploitability throughout learning. 
Furthermore, as the value of \(\alpha\) assigned to episode updates increases, we observe a significantly better convergence rate. Nevertheless, it introduces oscillations that impede good convergence on the MFNE. This could be explained by the following dilemma: at each iteration, the representative agent learns an policy far better than what is available to the current population. Hence, agents have the incentive to deviate from the current policy resulting in an increment in the exploitability. Moreover, at the next iteration, the distribution is updated with such policy, resulting in a sharp decline in the exploitability. This phenomenon can be smoothed by reducing the rate at which the agent's policy updates with respect to the mean-field policy. Consequently, this is controlled using the parameter \(\alpha\) as shown by the remaining curves in Fig. 2(a). Then we analyze the impact on both iteration \(\epsilon_{k}\) and episode \(\epsilon_{e}\) clipping factors. We consider two extreme cases for \(\epsilon_{k}\) and different values of \(\epsilon_{e}\). In Fig. 2(b), we set \(\epsilon_{k}=0.2\), and compare for different \(\epsilon_{e}\) values, we observe high variance in exploitability mainly due to more significant policy updates. On the other end, for \(\epsilon_{k}\) = 0.001, the curves look much smoother since the policy update is largely constrained; however, the drawback is a slower convergence rate as shown in Fig. 2(c). Finally, all the hyper-parameters used in the experiments are summarized in Table 4. ## 6 Conclusion In this work, we propose the _Mean Field Proximal Policy Optimization (MF-PPO)_ algorithm for mean field games (MFG). Opposed to current strategies for stabilizing MFG based on averaging the q-values or the mean-field distribution, Figure 2: (a) Study on the impact of the hyper-parameter \(\alpha\) on the learning. Moreover, (b) and (c) show the iteration clipping factor \(\epsilon_{k}\) contribution to the smoothness and convergence of the MF-PPO algorithm. this work constitutes the first attempt for regularizing the mean-field policy updates directly. Particularly, MF-PPO algorithm regularizes the updates between successive iterations in the mean-field policy updates using a proximal policy optimization strategy. Conducted experiments in the OpenSpiel framework show a faster convergence to the MFNE when compared to current state-of-the-art methods for MFG, namely the Deep Munchausen Online Mirror Descent and Deep Average-Network Fictitious Play. As future work, the first track would be the investigation of the mathematical analysis of the MFNE reached by the MF-PPO algorithm. Second, investigating the optimization of the computation time of the proposed approach is of interest. Finally, the application of the approach on large-scale real cases would push the boundaries of the approach.
2307.14354
Learned Gridification for Efficient Point Cloud Processing
Neural operations that rely on neighborhood information are much more expensive when deployed on point clouds than on grid data due to the irregular distances between points in a point cloud. In a grid, on the other hand, we can compute the kernel only once and reuse it for all query positions. As a result, operations that rely on neighborhood information scale much worse for point clouds than for grid data, specially for large inputs and large neighborhoods. In this work, we address the scalability issue of point cloud methods by tackling its root cause: the irregularity of the data. We propose learnable gridification as the first step in a point cloud processing pipeline to transform the point cloud into a compact, regular grid. Thanks to gridification, subsequent layers can use operations defined on regular grids, e.g., Conv3D, which scale much better than native point cloud methods. We then extend gridification to point cloud to point cloud tasks, e.g., segmentation, by adding a learnable de-gridification step at the end of the point cloud processing pipeline to map the compact, regular grid back to its original point cloud form. Through theoretical and empirical analysis, we show that gridified networks scale better in terms of memory and time than networks directly applied on raw point cloud data, while being able to achieve competitive results. Our code is publicly available at https://github.com/computri/gridifier.
Putri A. van der Linden, David W. Romero, Erik J. Bekkers
2023-07-22T19:40:00Z
http://arxiv.org/abs/2307.14354v1
# Learned Gridification for Efficient Point Cloud Processing ###### Abstract Neural operations that rely on neighborhood information are much more expensive when deployed on point clouds than on grid data due to the irregular distances between points in a point cloud. In a grid, on the other hand, we can compute the kernel only once and reuse it for all query positions. As a result, operations that rely on neighborhood information scale much worse for point clouds than for grid data, specially for large inputs and large neighborhoods. In this work, we address the scalability issue of point cloud methods by tackling its root cause: the irregularity of the data. We propose _learnable gridification_ as the first step in a point cloud processing pipeline to transform the point cloud into a compact, regular grid. Thanks to gridification, subsequent layers can use operations defined on regular grids, e.g., Conv3D, which scale much better than native point cloud methods. We then extend gridification to point cloud to point cloud tasks, e.g., segmentation, by adding a _learnable de-gridification_ step at the end of the point cloud processing pipeline to map the compact, regular grid back to its original point cloud form. Through theoretical and empirical analysis, we show that _gridified networks_ scale better in terms of memory and time than networks directly applied on raw point cloud data, while being able to achieve competitive results. Our code is publicly available at [https://github.com/computri/gridifier](https://github.com/computri/gridifier). Machine Learning, Point Cloud Processing, Learning, Point Cloud Processing ## 1 Introduction Point clouds provide sparse geometric representations of objects or surfaces equipped with signals defined over their structure, e.g., the surface normals of an underlying object (Wu et al., 2015; Qi et al., 2017) or the chemical properties of a molecule (Ramakrishnan et al., 2014; Schutt et al., 2017). Several neural operators have been developed that can be applied to such sparse representations provided by point clouds. These methods can be broadly understood as continuous generalizations of neural operators originally defined over regular discrete grids, e.g., convolution (Wu et al., 2019) and self-attention (Zhao et al., 2021). The problem of learning on raw point clouds.Unfortunately, the flexibility required from neural operators to accommodate irregular sparse representations like point clouds brings about important increases in time and memory consumption. This is especially prominent in neural operations that construct feature representations based on neighborhood information, e.g., convolution. In the case of point clouds, the irregular distances between points make these neural operations significantly more computationally demanding compared to regular grid representations like images or text. For instance, for convolution, the convolutional kernel needs to be recalculated for each point in a point cloud to account for irregular distances from the query point to other points in its neighborhood (Fig. 1 left). In contrast, grid representations standardize pairwise distances following a grid structure (Fig. 1 right). As a result, the distances from a point to all other points in its neighborhood are fixed for all points queried in the grid. Therefore, it is possible to compute the kernel once, and reuse it across all query positions. 
This difference illustrates that operations relying on neighborhood information scale much worse in Figure 1: Convolution on point clouds and grids. Due to the irregular nature of point clouds, convolutional kernels –and other operations based on neighborhood information – must be re-rendered for every query point in the point cloud (left). In contrast, grid data is regularly arranged, and thus pairwise distances are equal for any query point in the grid (right). As a result, the convolutional kernel can be computed once and reused for all query points. terms of memory and time for point clouds than for grid data, specially for large inputs and large neighborhoods. **A potential solution: Voxelization.** A potential solution to address the challenges posed by point clouds lies in treating the point cloud as a continuous density that can be sampled on a dense regular grid: a process called _voxelization_(Maturana & Scherer, 2015; Wu et al., 2015). The idea of voxelization is to create a grid that overlaps with the domain of the point cloud (Fig. 3). Although voxelization methods create grid representations on which neural operations defined on grids can act, e.g., Conv3D, the grids resulting from voxelization are oftentimes much larger than the number of points in the original point cloud. This is a consequence of (_i_) the high-resolution grids required to describe fine details from the point cloud, and _(ii)_ the low occupancy of the grid resulting from the sparse nature of point clouds which generally leads to many more points to process in the resulting grid than in the original point cloud. **Our proposed solution: Gridification.** In this paper, we propose an alternative solution to address the memory and computational scalability of point cloud methods by addressing its root cause: _the irregularity of the data_. We propose _learnable gridification_ as the first step in a point cloud processing pipeline to transform the point cloud into a _compact, regular grid_ (Fig. 2). Thanks to gridification, subsequent layers can use operations defined on grids, e.g., Conv3D, which scale much better than native point cloud methods. In a nutshell, gridification can be understood as a _convolutional message passing_ layer acting on a _bipartite graph_ that establishes connections between points in the point cloud to points in the grid given by a _bilateral \(k\)-nearest neighbor connectivity_. The proposed bilateral \(k\)-nearest neighbor connectivity guarantees that all points both in the point cloud and in the grid are connected, therefore allowing for the construction of expressive yet compact grid representations. In contrast to voxelization, gridification produces expressive compact grid representations in which the number of points in the resulting compact regular grid is roughly equal to the number of points in the original point cloud, yet the grid is able to preserve fine geometric details from the original point cloud. For instance, we observe that point clouds with \(\mathrm{N}{=}1000\) points can be effectively mapped to a compact dense \(10\mathrm{x}10\mathrm{x}10\) grid without significant information loss. We show through theoretical and empirical analysis that the resulting grid representations scale much better in terms of memory and time than native point cloud methods. This is verified on several comparison studies for increasing number of points in the point cloud and increasing neighborhood sizes in the construction of convolutional kernels. 
We demonstrate that gridification can also be used for tasks from point clouds to point clouds, e.g., segmentation. To this end, we introduce a _learnable de-gridification_ step at the end of the point cloud processing pipeline, which can be seen as an inverted gridification step that maps the compact, regular grid back to its original point cloud form. This extension allows for the construction of _gridified networks_ -networks that operate on grids- to solve global prediction tasks, e.g., classification, as well as dense prediction tasks, e.g., segmentation and regression, on point cloud data. ## 2 Method ### Point cloud and grid representations **Point cloud.** A point cloud \(\mathcal{P}{=}\{(\mathbf{c}_{i}^{\mathcal{P}},\mathbf{x}_{i}^{\mathcal{P}}) \}_{i=1}^{\mathrm{N}_{\mathcal{P}}}\) is an _unstructured_ set of \(\mathrm{N}_{\mathcal{P}}\) pairs of coordinate-feature values \(\left(\mathbf{c}_{i}^{\mathcal{P}},\mathbf{x}_{i}^{\mathcal{P}}\right)\) scattered in space without any predefined pattern or connec Figure 3: Voxelization of the Stanford Bunny (Turk & Levoy, 1994) for different resolutions. Taken from Karmakar et al. (2011). Figure 2: Gridification. Gridification maps a point cloud \(\mathcal{P}\) onto a compact regular grid \(\mathcal{G}\). The method first constructs a \(\mathrm{D}\)-dimensional grid (left) that overlaps the point cloud. Then, it connects points on the point cloud to points in the grid given by a connectivity scheme \(\mathcal{E}_{\mathcal{P}\to\mathcal{G}}\), i.e., a set of edges from points in the point cloud to points in the grid, determined by _bilateral \(k\)-nearest neighbors connectivity_ (middle). Finally, gridification propagates information from the point cloud onto the grid through a _convolutional message passing_ layer acting over the bipartite graph \(\left(\mathcal{P},\mathcal{G},\mathcal{E}_{\mathcal{P}\to\mathcal{G}}\right)\). By carefully selecting the different components of the gridification module, gridification is able to construct compact rich grid representations that can be subsequently processed with grid operations such as Conv3D. tivity. Point clouds sparsely represent geometric structures through pairs of coordinate vectors \(\mathbf{c}_{i}^{\mathcal{P}}\in\mathbb{R}^{\mathrm{D}}\) and corresponding function values over that geometric structure \(\mathbf{x}_{i}^{\mathcal{P}}\in\mathbb{R}^{\mathrm{F}_{\mathcal{P}}}\), e.g., surface normals, RGB-values, electric potentials, etc. **Grid.** A grid \(\mathcal{G}{=}\{(\mathbf{c}_{i}^{\mathcal{G}},\mathbf{x}_{i}^{\mathcal{G}})\}_{ i=1}^{\mathrm{N}_{\mathcal{G}}}\) can be interpreted as a point cloud on which the coordinate-feature pairs \((\mathbf{c}_{i}^{\mathcal{G}},\mathbf{x}_{i}^{\mathcal{G}})\) are arranged in a regular pattern that form a lattice. In contrast to general point clouds, points in a grid are evenly spaced and align along predefined axes, e.g., \(x\), \(y\), \(z\). The regular spacing between points leads to regular pairwise distances for all query points in the grid. As a result, we can calculate pairwise attributes once, and reuse them for all query points. ### Gridification: From a point cloud to a dense grid We seek to map the sparse point cloud \(\mathcal{P}{=}\{(\mathbf{c}_{i}^{\mathcal{P}},\mathbf{x}_{i}^{\mathcal{P}})\}_ {i=1}^{\mathrm{N}_{\mathcal{P}}}\) onto a compact regular grid \(\mathcal{G}{=}\{(\mathbf{c}_{i}^{\mathcal{G}},\mathbf{x}_{i}^{\mathcal{G}})\}_ {i=1}^{\mathrm{N}_{\mathcal{G}}}\) in \(\mathbb{R}^{\mathrm{D}}\). 
We formalize this process as an operation over a _bipartite graph_ that establishes connections between points in the point cloud \(\mathcal{P}\) to points in the grid \(\mathcal{G}\) given by a _connectivity scheme_\(\mathcal{E}_{\mathcal{P}\to\mathcal{G}}\) defined as a set of edges \(\mathbf{e}_{j\to i}\in\mathcal{E}_{\mathcal{P}\to\mathcal{G}}\). **Learnable gridification as message passing.** We aim to learn a mapping from \(\mathcal{P}\) to \(\mathcal{G}\) such that the grid representation \(\mathcal{G}{=}\{(\mathbf{c}_{i}^{\mathcal{G}},\mathbf{x}_{i}^{\mathcal{G}})\}_ {i=1}^{\mathrm{N}_{\mathcal{G}}}\), \(\mathbf{c}_{i}\in\mathbb{R}^{\mathrm{D}}\), \(\mathbf{x}_{i}^{\mathcal{G}}\in\mathbb{R}^{\mathrm{F}_{\mathcal{G}}}\), adequately represents the source point cloud \(\mathcal{P}\) for the downstream task. Given a source point cloud \(\mathcal{P}\), a target grid \(\mathcal{G}\) and a connectivity scheme \(\mathcal{E}_{\mathcal{P}\to\mathcal{G}}\), we define gridification as a _convolutional message passing_ layer (Gilmer et al., 2017) on the bipartite graph \((\mathcal{P},\mathcal{G},\mathcal{E}_{\mathcal{P}\to\mathcal{G}})\) defined as: \[\mathbf{x}_{i}^{\mathcal{G}}=\phi_{\mathrm{upd}}\left(\bigoplus_{\mathbf{e}_{j \to i}\in\mathcal{E}_{\mathcal{P}\to\mathcal{G}}}\phi_{\mathrm{msg}}\!\left( \phi_{\mathrm{node}}\left(\mathbf{x}_{j}^{\mathcal{P}}\right),\phi_{\mathrm{ pos}}\left(\mathbf{c}_{i}^{\mathcal{G}}-\mathbf{c}_{j}^{\mathcal{P}}\right) \right)\right). \tag{1}\] It consists of a node embedding network \(\phi_{\mathrm{node}}:\mathbb{R}^{\mathrm{F}_{\mathcal{P}}}\to\mathbb{R}^{ \mathrm{H}}\) that processes the point cloud features \(\mathbf{x}_{i}^{\mathcal{G}}\), a positional embedding network \(\phi_{\mathrm{pos}}:\mathbb{R}^{\mathrm{D}}\to\mathbb{R}^{\mathrm{H}}\) that creates feature representations based on the pairwise distances between coordinates in \(\mathcal{G}\) and \(\mathcal{P}\) -thus resembling a convolutional kernel-, a message embedding network \(\phi_{\mathrm{msg}}:\mathbb{R}^{2\mathrm{H}}\to\mathbb{R}^{\mathrm{H}}\) that receives both the node embedding and the relative position embedding to create the so-called _message_. After the messages are created for all nodes described by connectivity of the node, these features are aggregated via the aggregation function \(\bigoplus\), e.g., \(\max\), mean. Finally, the aggregated message is passed through the update network \(\phi_{\mathrm{upd}}:\mathbb{R}^{\mathrm{H}}\to\mathbb{R}^{\mathrm{F}_{\mathcal{G }}}\) to produce the grid feature representations \(\mathbf{x}_{i}^{\mathcal{G}}\in\mathbb{R}^{\mathrm{F}_{\mathcal{P}}}\). ### De-gridification: From a dense grid to a point cloud To extend the use of gridification to tasks from the point cloud \(\mathcal{P}\) to the point cloud \(\mathcal{P}\), e.g., segmentation, regression, we define a _de-gridification_ step that sends a grid representation \(\mathcal{G}\) back to its original point cloud form \(\mathcal{P}\). Formally, the de-gridification step is defined as: \[\mathbf{x}_{i}^{\mathcal{P}}=\phi_{\mathrm{upd}}\left(\bigoplus_{\mathbf{e}_{j \to i}\in\mathcal{E}_{\mathcal{G}\to\mathcal{P}}}\phi_{\mathrm{msg}}\!\left( \phi_{\mathrm{node}}\left(\mathbf{x}_{j}^{\mathcal{G}}\right),\phi_{\mathrm{ pos}}\left(\mathbf{c}_{i}^{\mathcal{P}}-\mathbf{c}_{j}^{\mathcal{G}} \right)\right)\right). 
\tag{2}\] Intuitively, de-gridification can be interpreted as a gridification step from \(\mathcal{G}\) to \(\mathcal{P}\) given by an inverted connectivity scheme \(\mathcal{E}_{\mathcal{G}\to\mathcal{P}}{=}(\mathcal{E}_{\mathcal{P}\to\mathcal{G }})^{-1}\). Note that, it is not necessary to calculate the connectivity scheme for the de-gridification step. Instead, we can obtain it simply by taking the connectivity scheme from the gridification step \(\mathcal{E}_{\mathcal{G}\to\mathcal{P}}\) and inverting the output and input nodes of the edges. ### Requirements and properties of gridification We desire to construct a compute and memory efficient grid representation \(\mathcal{G}\) that captures all aspects of the point cloud \(\mathcal{P}\) as good as possible. That is, a compact, yet rich grid representation \(\mathcal{G}\) that preserves the structure of the point cloud \(\mathcal{P}\) with as low loss of information as possible. With this goal in mind, we identify the following requirements: 1. The number of points in the grid \(\mathrm{N}_{\mathcal{G}}\) should be at least as large as the number of points in the point cloud \(\mathrm{N}_{\mathcal{P}}\). 2. The width of all hidden representations of the node embedding network \(\phi_{\mathrm{node}}\) should be _at least as large_ as the width of the point cloud features \(\mathbf{x}_{i}^{\mathcal{P}}\), i.e., \(\mathrm{F}_{\mathcal{P}}\). 3. The width of all hidden representations of the position embedding network \(\phi_{\mathrm{pos}}\) should be at least as large as the dimension of the domain \(\mathrm{D}\). 4. The width of all hidden representations of the embedding networks \(\phi_{\mathrm{upd}}\), \(\phi_{\mathrm{msg}}\) should be _at least as large_ as the width of the point cloud features \(\mathbf{x}_{i}^{\mathcal{P}}\) plus the dimension of the domain \(\mathrm{D}\). 5. Each point \(\mathbf{c}^{\mathcal{P}}\) in the point cloud should be connected to _at least_ one point \(\mathbf{c}^{\mathcal{G}}\) in the grid. 6. The positional embedding network \(\phi_{\mathrm{pos}}\) should be able to describe high frequencies. 7. Each point \(\mathbf{c}^{\mathcal{G}}\) in the grid should be connected to _at least_ one point \(\mathbf{c}^{\mathcal{P}}\) in the point cloud. **Preventing information loss.** To prevent information loss, we want to avoid any kind of compression either in the grid representation or in any intermediary representation during the gridification process. Consequently, we restrict the number of points as well as the width of all representations to be at least as big as the corresponding dimensions in the source point cloud \(\mathcal{P}\) -items (_i_)-(_iv_)-. In addition, we must make sure that all points in the point cloud are connected to points in the grid to prevent points from being disregarded during gridification -item (_v_)-. Finally, we must also make sure that the positional embedding network \(\phi_{\mathrm{pos}}\) is able to represent high frequencies -item (_vi_)-. This is important as multilayer perceptrons (\(\mathrm{MLPs}\)) with piecewise nonlinearities, e.g., \(\mathrm{ReLU}\), have been shown to have an implicit bias towards smooth functions (Tancik et al., 2020; Sitzmann et al., 2020). In the context of gridification, this means that using conventional \(\mathrm{MLPs}\) for the positional embedding network \(\phi_{\mathrm{pos}}\) could result in over-smooth grid representations unable to represent fine details from the source point cloud. 
We circumvent this issue by using parameterizations for \(\phi_{\mathrm{pos}}\) able to model high frequencies (Sec. 2.5.3). **Encouraging compact representations.** In addition to encouraging no information loss, we also identify requirements that encourage the resulting grid representation to be compact and expressive. First, we note that item (\(v\)) is important for this end as well, as over-smooth representations implicitly require higher resolutions to be able to encode fine-grained details. Additionally, we impose all points in the grid to be connected to points in the point cloud -item (\(vii\))- to prevent the grid representation from having low occupancy. This restriction allows us to make sure that all the spatial capacity of the grid is being used. This in turn allows us to construct compact rich grid representations. ### Materializing the gridification module Based on the previous requirements and properties, we define the components of the gridification module as follows: #### 2.5.1 The grid \(\mathcal{G}\) Let \([a,b]^{\mathrm{D}}\) be the domain of the point cloud \(\mathcal{P}\), i.e., \(\mathbf{c}_{i}^{\mathcal{P}}\in[a,b]^{\mathrm{D}}\), \(\forall\ \mathbf{c}_{i}^{\mathcal{P}}\in\mathcal{P}\). Then, we define the regular grid \(\mathcal{G}\) over the same domain \([a,b]^{\mathrm{D}}\) with \(\sqrt[]{\mathrm{N}^{\mathcal{G}}}\) points along each dimension. By doing so, we guarantee that the grid \(\mathcal{G}\) is uniformly spaced over the domain of the point cloud, therefore (\(i\)) preserving the statistics of the input point cloud, and (\(ii\)) being able to represent the underlying signal in the same range. In practice, point clouds are normalized during the preprocessing steps preceding a point cloud processing pipeline. As a result, we often have that \(a\)= - 1 and \(b\)=1, leading to a point cloud and a grid defined on \([-1,1]^{\mathrm{D}}\). #### 2.5.2 The connectivity scheme \(\mathcal{E}_{\mathcal{P}\rightarrow\mathcal{G}}\) Motivated by the requirements in Sec. 2.4, we opt for _bilateral \(k\)-nearest neighbor connectivity_ over common alternatives such as radius connectivity (Qi et al., 2017, 2018) or one-way \(k\)-nearest neighbor connectivity (Barber et al., 1996; Connor and Kumar, 2010) for the construction of the connectivity scheme \(\mathcal{E}_{\mathcal{P}\rightarrow\mathcal{G}}\) to guarantee that no points either in the grid \(\mathcal{G}\) nor the point cloud \(\mathcal{P}\) are disconnected. Bilateral \(k\)-nearest neighbor connectivity consists of a two-way \(k\)-nearest neighbor approach in which first each point \(\mathbf{c}_{i}^{\mathcal{G}}\) in the grid is linked to the \(k\) nearest points \(\mathbf{c}_{j}^{\mathcal{P}}\) in the point-cloud. Subsequently, connections are established from each point \(\mathbf{c}_{i}^{\mathcal{P}}\) in the point cloud to its nearest \(k\) points \(\mathbf{c}_{j}^{\mathcal{G}}\) in the grid (Fig. 4). By following this procedure, bilateral \(k\)-nearest neighbor connectivity creates a _complete_ connectivity scheme, i.e., with no disconnected points, from \(\mathcal{P}\) to \(\mathcal{G}\) with at least \(k\) and at most \(2k\) connections for each point. 
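A minimal sketch of the bilateral \(k\)-nearest neighbour construction described above, using SciPy's `cKDTree`. The helper name `bilateral_knn_edges` and the toy inputs are illustrative choices rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def bilateral_knn_edges(coords_pc, coords_grid, k):
    """Edges (j, i): point-cloud node j -> grid node i, from a two-way k-NN search,
    so that every point in P and every point in G keeps at least k incident edges."""
    # (1) each grid point i is linked to its k nearest point-cloud points j
    _, nn_pc = cKDTree(coords_pc).query(coords_grid, k=k)      # (N_G, k) indices into P
    edges_from_grid = {(j, i) for i, row in enumerate(nn_pc) for j in row}
    # (2) each point-cloud point j is linked to its k nearest grid points i
    _, nn_grid = cKDTree(coords_grid).query(coords_pc, k=k)    # (N_P, k) indices into G
    edges_from_cloud = {(j, i) for j, row in enumerate(nn_grid) for i in row}
    # union of both directed searches: no disconnected points on either side
    return np.array(sorted(edges_from_grid | edges_from_cloud))

rng = np.random.default_rng(0)
coords_pc = rng.uniform(-1.0, 1.0, size=(200, 3))
g = np.linspace(-1.0, 1.0, 5)
coords_grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)

edges = bilateral_knn_edges(coords_pc, coords_grid, k=4)
senders, receivers = edges[:, 0], edges[:, 1]   # indices into P and G, respectively
print(edges.shape)
```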
#### 2.5.3 The positional embedding network \(\phi_{\mathrm{pos}}\) In literature, the positional embedding network \(\phi_{\mathrm{pos}}\) is often parameterized as an \(\mathrm{MLP}\) with piecewise nonlinearities, e.g., \(\mathrm{ReLU}\), that receives relative positions \(\left(\mathbf{c}_{i}-\mathbf{c}_{j}\right)\) as input and retrieves the value of an spatial function at that position \(\phi_{\mathrm{pos}}(\mathbf{c}_{i}-\mathbf{c}_{j})\)(Schutt et al., 2017; Qi et al., 2017; Wu et al., 2019). However, previous studies have shown that \(\mathrm{MLPs}\) with piecewise nonlinearities suffer from an spectral bias towards low frequencies, which limits their ability to represent functions with high frequencies (Tancik et al., 2020; Sitzmann et al., 2020). In the context of modelling spatial neural operators such as \(\phi_{\mathrm{pos}}\), this implies that using piecewise \(\mathrm{MLPs}\) to parameterize spatial neural operators leads to inherently smooth operators. Consequently, applying such an operator over an input function, e.g., via a convolution operation, would implicitly perform a low-pass filtering of the input, causing the output representations to lack information regarding fine-grained details of the input. To overcome this issue, we rely on the insights from _Continuous Kernel Convolutions_(Romero et al., 2021) and parameterize the positional embedding network as a _Neural Field_(Sitzmann et al., 2020; Tancik et al., 2020). In contrast to piecewise \(\mathrm{MLPs}\), neural fields easily model high frequencies, and thus allow for powerful parameterizations of spatial neural operators that do not perform smoothing. In the context of gridification, using neural fields to parameterize \(\phi_{\mathrm{pos}}\) allows gridification to project fine-grained geometric information from the point cloud onto the grid. ### Gridified networks for global and dense prediction Gridification and de-gridification allow for the construction of _gridified networks_ able to process point clouds both for global and dense prediction tasks (Fig. 5). For global prediction tasks, e.g., classification, we construct a point cloud processing pipeline consisting of gridification, followed by a _grid network_, i.e., a neural network that operates on grid data, designed for global prediction, e.g., a ResNet (He et al., 2016) or a ViT (Dosovitskiy et al., 2020). For dense prediction tasks, e.g., segmentation, our proposed point cloud pipeline consists of gridification, followed by a grid network designed for dense predictions, e.g., a U-Net (Ronneberger et al., 2015) or a CCNN (Knigge et al., 2023). After the processed grid representation is obtained, we utilize the de-gridification step to map back the grid representation to a point cloud with the output node predictions. Figure 4: Bilateral \(k\)-nearest neighbor connectivity for \(k\)=4. ## 3 Related Work Deep learning approaches for point cloud processing can be broadly classified in two main categories: (_i_) native point cloud methods and (_ii_) voxelization methods. **Native point cloud methods.** Native point cloud methods operate directly on the raw, irregular point cloud data without any preprocessing steps such as voxelization. These methods leverage the inherent spatial distribution of the points to extract meaningful features. PointNet (Qi et al., 2017) introduced a pioneering framework for point cloud processing by employing shared multilayer perceptrons and symmetric functions to learn global and local features from unordered point sets. 
PointNet++ (Qi et al., 2017) extended this work with hierarchical neural networks to capture hierarchical structures in point clouds. PointConv (Wu et al., 2019) introduced a convolution operation specifically designed for point clouds, incorporating local coordinate systems to capture local geometric structures. PointGNN (Shi and Rajkumar, 2020) utilized graph neural networks to model interactions between neighboring points in point clouds. Despite the flexibility in handling irregular data that native point cloud methods provide, they suffer from scalability issues due to the increased computational and memory complexity of processing unstructured point sets. **Voxelization methods.** Voxelization methods aim to convert the irregular point cloud data into a regular grid structure, enabling the utilization of neural architectures designed for regular grid data. VoxNet (Maturana and Scherer, 2015) introduced the concept of voxelization for point clouds and employed 3D convolutions on the resulting grid representations. Volumetric CNN (Qi et al., 2016) extended this approach with an occupancy grid representation and achieved impressive performance on 3D shape classification tasks. Other works, such as VoxSegNet (Wang and Lu, 2019) explore variations of voxelization techniques to improve performance on tasks like object detection and segmentation. While voxelization methods offer a well-founded solution to the computational and memory complexity of native point cloud methods, in practice, they suffer from high memory consumption and information loss due to the discretization process. This is due to the inherent trade-off between the need to capture fine geometric details -which requires high resolution grids-, and the need for efficiency -which favors low resolution grids-. As a result, conventional voxelization methods struggle to strike a balance between resolution and speed. In contrast, gridification is able to generate compact yet expressive grid representations able to preserve fine geometric details on a low resolution grid with roughly the same number of points as the source point cloud. **Hybrid methods.** Aside from pure point cloud and voxelization methods, there exist works that attempt combine the advantages of both categories. Their main idea is to combine point-wise and grid-wise operations to perform effective feature extraction while maintaining scalability and efficiency. PointGrid (Le and Duan, 2018) uses a hybrid representation by voxelizing the point cloud and employing a combination of point-wise and grid-wise operations at each layer. Point-Voxel CNN (Liu et al., 2019) combines grid convolutions with point-wise feature extraction. It uses low-resolution voxelization to aggregate neighborhoods with regular 3D convolutions and \(\mathrm{MLPs}\) to generate point-wise features that preserve fine-grained structure. These features are then fused through interpolation. Point-Voxel Transformer (Zhang et al., 2022) follows a similar two-branch structure, but replaces convolutions with windowed self-attention. Although hybrid methods reduce the computational and memory complexity of native point cloud methods, their explicit use of voxelization still leads to a trade-off between information loss and efficiency on that branch. To compensate for the information lost during voxelization, they require a parallel raw point cloud branch, which does not scale well. 
In contrast, gridification does not make use of raw point cloud branches but instead focuses on the creation of descriptive compact grid representations that preserve the geometric information of the source point cloud. Hence, gridification offers a solution with better scalability properties than existing hybrid methods. ## 4 Experiments To evaluate our approach, we first analyze the expressive capacity of gridification and de-gridification on a toy point cloud reconstruction task. Next, we construct gridified networks and evaluate them on classification and segmentation tasks. In addition, we provide empirical analyses on the computational and memory complexity of gridified networks which we then corroborate with theoretical analyses. **Experimental setup.** For the position embedding function \(\phi_{\mathrm{pos}}\) we use an Random Fourier Feature Network (Tancik Figure 5: Point cloud processing pipeline for global prediction (left) and dense prediction tasks (right). et al., 2020), due to explicit control over the smoothness through the initial frequency parameter \(\Omega\). The practical setup and instantiation of the convolution blocks can be found in Appendix A. We train our models without data augmentation using AdamW (Loshchilov and Hutter, 2019) and a cosine scheduler (Loshchilov and Hutter, 2017) with 10 epochs of linear warm-up. We follow the standard procedure and preprocess all objects in the datasets to be centered and normalized. For each dataset, we choose the grid resolution such that its number of points is roughly equal to the size of the original point cloud. For ModelNet40 we use surface normals in addition to positions as node features. Dataset specific hyperparameters can be found in Appendix B. ### Random point cloud reconstruction First, we evaluate the expressivity of our proposed gridification and de-gridification procedure. To this end, we construct a dataset with 1000 synthetic random graphs -800 for training and 200 for validation- consisting of a pre-defined number of nodes \(\mathrm{N}^{\mathcal{P}}\)=\(1000\) randomly sampled on the unit cube, i.e., \(\mathrm{c}_{i}^{\mathcal{P}}\sim\mathcal{U}([-1,1]^{3})\), accompanied with a random scalar feature \(f_{i}^{\mathcal{P}}\sim\mathcal{U}(-1,1)\) at each position. **Experimental setup.** To evaluate the expressiveness of our method, we set up a network consisting only of a gridification and a de-gridification step, i.e., no intermediary layers, in a point cloud reconstruction pipeline. In other words, the task consists of propagating the point cloud into a grid representation, and mapping the grid representation back to the original point cloud (see Fig. 6). Therefore, to successfully reconstruct the original point cloud from the grid representation, the grid representation must be able to retain sufficient information from the input point cloud. **Results.** Fig. 7 shows reconstruction errors for different resolutions and different number of channels in the intermediary grid representation. We observe that it is possible to obtain good reconstructions by increasing the resolution of the grid or its number of channels. From an efficiency perspective, it is preferred to utilize low resolution representations with a larger number of channels due to the exponential growth in computational demands associated with higher grid resolutions, which instead scale linearly with the number of channels of the representation. 
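As noted in the experimental setup above, \(\phi_{\mathrm{pos}}\) is parameterized as a random Fourier feature network with frequency parameter \(\Omega\). The sketch below shows one minimal way such an embedding can be written; the class name, layer sizes, and the value of `omega` are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

class FourierPosEmbedding:
    """Sketch of phi_pos: relative position -> H-dimensional feature via random Fourier features.

    A fixed random projection B ~ N(0, omega^2) lifts (c_i - c_j) to sines and cosines,
    followed by one linear layer; larger omega admits higher spatial frequencies."""
    def __init__(self, dim_in=3, n_freq=32, dim_out=64, omega=10.0, seed=0):
        rng = np.random.default_rng(seed)
        self.B = omega * rng.normal(size=(dim_in, n_freq))
        self.W = rng.normal(size=(2 * n_freq, dim_out)) / np.sqrt(2 * n_freq)

    def __call__(self, rel_pos):                   # rel_pos: (n_edges, dim_in)
        proj = 2.0 * np.pi * rel_pos @ self.B      # (n_edges, n_freq)
        feats = np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)
        return feats @ self.W                      # (n_edges, dim_out)

phi_pos = FourierPosEmbedding()
rel = np.random.default_rng(1).uniform(-0.2, 0.2, size=(5, 3))
print(phi_pos(rel).shape)                          # (5, 64)
```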
Our experiments show that gridification is able to obtain compact grid representations that preserve the structure of the input point cloud. Furthermore, the quality of the grid representations can be efficiently improved by scaling the number of channels used. ### ModelNet40 classification Next, we evaluate gridification on point cloud classification. We deploy gridified networks on ModelNet40 (Wu et al., 2015): a synthetic dataset for 3D shape classification, consisting of 12,311 3D meshes of objects belonging to 40 classes. ModelNet40 is broadly used as a point cloud benchmark in which points are uniformly sampled from the faces of the meshes. **Results.** Our results (Tab. 1) show that gridified networks achieve competitive performance while being significantly more efficient in terms of parameters, compute and memory. Interestingly, and in contrast to voxelization methods, we observe that gridified networks operate well even on extremely low resolution grids. For instance, on a \(3\times 3\times 3\) grid, gridified networks attain an accuracy of \(90.86\%\). ### ShapeNet part segmentation Next, we evaluate gridification on point cloud segmentation. To this end, we deploy gridified networks on ShapeNet (Yi \begin{table} \begin{tabular}{l l l l} \hline \hline Model & Input & Type & Accuracy & Parameters \\ \hline PointNet++ (Qi et al., 2017b) & \(32\times 1000\) & native & 89.64 & 1.5M \\ VoxNet (Maturana and Scherer, 2015) & \(32\times 30^{3}\) & voxelization & 83.00 & 0.92M \\ PointGrid (Le and Duan, 2018) & \(32\times 16^{3}\) & voxelization & 92.00 & - \\ Point Voxel Transformer (Zhang et al., 2022) & \(32\times 1024\) & hybrid & 94.00 & 2.76M \\ Gridified Networks 3x3x3 (Ours) & \(32\times 1000\to 32\times 3^{3}\) & voxelization & 90.86 & 0.28M \\ Gridified Networks 9x9x9 (Ours) & \(32\times 1000\to 32\times 9^{3}\) & voxelization & 92.28 & 0.47M \\ \hline \hline \end{tabular} \end{table} Table 1: Classification performance on ModelNet40 benchmark. Figure 6: Random point clouds with random scalar node features are mapped to a grid representation. From the grid representation the node features need to be reconstructed via de-gridification. Figure 7: Random point cloud reconstruction error for varying grid resolution and number of channels on the grid representation. et al., 2016): a synthetic dataset with 16,000 point clouds of objects from 16 categories, each of which contains 2 to 6 parts. The objective of the task is to segment the point clouds into one of 50 possible part annotations. **Results.** Our results (Tab. 2) demonstrate that gridified networks are also able to achieve competitive performance in segmentation tasks, while being significantly more efficient in terms of parameters, compute and memory. This result validates the ability of gridification to handle dense prediction tasks via gridification and de-gridification. ### Efficiency analysis of gridification Finally, we investigate the scalability properties of gridification. Specifically, we analyze the time and memory consumption of gridified networks during inference on ModelNet40 for point clouds with increasing size, and compare the computation and memory complexity of convolutional operations on grid and point cloud data. **Scaling gridified networks to large point clouds.** Fig 8 shows the average time and memory consumption during inference on ModelNet40 for gridified networks and PointNet++. 
We observe that gridified networks exhibit a much more favorable scalability both in terms of inference time and GPU allocation -linear vs. quadratic- as the input size and number of channels increase. This demonstrates that gridified networks scale much better than native point cloud methods both for larger point clouds and larger networks. **Scaling the receptive field of neural operations.** Furthermore, we analyze the scalability properties of gridified networks relative to the size of its receptive fields. As illustrated in Fig. 1), for native point cloud methods the convolutional kernel must be recomputed for all query points in the point cloud. As a consequence, the construction of the convolutional kernels of size \(\mathrm{K}\) for all query points in a point cloud with \(\mathrm{N}\) points incurs in \(\mathcal{O}(\mathrm{KD})\) memory and time complexity. In contrast, on grid data, we can compute the kernel once and reuse it at all positions. As a result, on a grid, this operation incurs in \(\mathcal{O}(\mathrm{D})\) time and memory complexity. Fig. 9 show the methods' potential to scale up the receptive field of the gridification module without introducing significant computational overhead. ## 5 Limitations and future work **The resolution of gridification depends on the size of the point cloud.** The main limitation of gridification is that the resolution on the grid is directly proportional to the size of point cloud in order to preserve information. This in turn means that the whole gridified architecture must be changed for point clouds of different sizes, even if they represent the same underlying signal. This is in contrast to native point cloud methods, which, due to their continuous nature, are, in principle, able to generalize to point clouds of different sizes as long as these exhibit the same structures. **Towards no information loss.** While gridification aims to produce compact grid representations with minimal information loss, our experiments reveal that some information still gets lost in the process. Loosely speaking, it should be possible to create grid representations that do not lose any information by ensuring that the grid representation has at least as many points and as many channels as the source point cloud representation. Gaining richer theoretical understanding of gridification, could therefore lead to grid representations with no information loss either by imposing other requirements on gridification, or by considering different functional families in the gridification process. **Large scale point clouds and global context.** While we verify the scalability and efficiency of gridified networks for increasing point cloud sizes, we only carry on experiments on relatively small datasets. In future work, we aim to deploy gridification to large scale datasets. Furthermore, recent works have shown that using global receptive fields in convolutional operations consistently leads to better results across several tasks, even outperforming well-established Transformer architectures (Gu et al., 2021; Knigge et al., 2023; Poli et al., 2023). Due to the computational complexity of native point cloud methods, networks with global context have not been explored for point cloud processing. With gridification this ability becomes computationally feasible. Exploring the effect of global context for point cloud processing is an exciting research direction. 
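To make the kernel-reuse argument from the efficiency analysis above concrete, the toy sketch below counts the distinct relative offsets on which a spatial kernel such as \(\phi_{\mathrm{pos}}\) must be evaluated for an irregular point cloud versus a regular grid. The helper name, point counts, and `k` are illustrative, and this is not the authors' benchmark code.

```python
import numpy as np
from scipy.spatial import cKDTree

def unique_kernel_inputs(coords, k):
    """Count distinct relative offsets (query -> neighbour) seen by the spatial kernel."""
    _, idx = cKDTree(coords).query(coords, k=k + 1)        # first neighbour is the point itself
    offsets = coords[idx[:, 1:]] - coords[:, None, :]       # (N, k, D) relative positions
    return np.unique(np.round(offsets, 6).reshape(-1, coords.shape[1]), axis=0).shape[0]

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(512, 3))               # irregular point cloud
g = np.linspace(-1.0, 1.0, 8)
grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)

print(unique_kernel_inputs(cloud, k=8))  # thousands of offsets: kernel re-evaluated per pair
print(unique_kernel_inputs(grid, k=8))   # a small set of lattice offsets: kernel values reusable
```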
**Generative tasks.** Gridification opens up the possibility of performing scalable generative tasks on large point clouds. Gridification can directly be extended to generative tasks if we assume that the point-cloud structure is preserved, i.e., if the coordinates of the output and input point clouds are equal. If this is not the case, e.g., for the generation of molecules (Xu et al., 2019; Hoogeboom et al., 2022), \begin{table} \begin{tabular}{l|l l l} \hline \hline Model & Gridified Networks & PointNet++ & PointGrid \\ \hline Type & voxelization & native & hybrid \\ \hline instance average IoU & 87.07 & 85.1 & 86.4 \\ class average IoU & 81.68 & 81.9 & 82.2 \\ \hline airplane & 88.52 & 82.4 & 85.7 \\ bag & 86.54 & 79.0 & 82.5 \\ cap & 74.09 & 87.7 & 81.8 \\ car & 80.46 & 77.3 & 77.9 \\ chair & 91.44 & 90.8 & 92.1 \\ earphone & 51.81 & 71.8 & 82.4 \\ guitar & 92.61 & 91.0 & 92.7 \\ knife & 89.44 & 85.9 & 85.8 \\ lamp & 82.07 & 83.7 & 84.2 \\ laptop & 96.07 & 95.3 & 95.3 \\ motor & 65.36 & 71.6 & 65.2 \\ mug & 92.99 & 94.1 & 93.4 \\ pistol & 86.72 & 81.3 & 81.7 \\ rocket & 88.57 & 58.7 & 56.9 \\ skateboard & 75.70 & 76.4 & 73.5 \\ table & 85.66 & 82.6 & 84.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Segmentation performance on ShapeNet-part benchmark. de-gridification module must be modified to predict both the features and positions of the new point cloud. We consider this a particularly promising direction for future research. **Equivariant gridification.** In its current form, gridified networks do not respect symmetries which might be important for some applications, e.g., equivariance to 3D rotations for the prediction and generation of molecules (Schutt et al., 2017; Hoogeboom et al., 2022). In future work, we aim to extend gridification to respect these symmetries by taking inspiration from equivariant graph neural networks (Fuchs et al., 2020; Satorras et al., 2021). It is important to note that not only gridification and de-gridification must be equivariant, but also that the grid operations in between should respect these properties. This can be achieved in an efficient yet expressive manner through the use of continuous Monte-Carlo convolutions on the regular representations of the group (Finzi et al., 2020; Romero and Lohit, 2022). ## 6 Conclusion This work presents gridification, a method that strongly reduces the computational requirements of point cloud processing pipelines by mapping input point clouds to a grid representation, and performing neural operations in there. We demonstrate that gridified networks are able to match the accuracy of native point cloud methods, while being much faster and memory efficient. Through empirical and theoretical analyses, we also show that gridified networks scale much more favorably than native point cloud methods to larger point clouds and larger neighborhoods. Figure 8: Average time (left) and GPU allocation (right) during inference on ModelNet40 for a batch size of \(32\). Figure 9: Average time (left) and GPU allocation (right) on ModelNet40 validation set per batch \(B=32\) for various number of neighbors and number of channels \(C\) on the grid representation.
2303.09194
A continuation technique for maximum likelihood estimators in biological models
Estimating model parameters is a crucial step in mathematical modelling and typically involves minimizing the disagreement between model predictions and experimental data. This calibration data can change throughout a study, particularly if modelling is performed simultaneously with the calibration experiments, or during an ongoing public health crisis as in the case of the COVID-19 pandemic. Consequently, the optimal parameter set, or maximum likelihood estimator (MLE), is a function of the experimental data set. Here, we develop a numerical technique to predict the evolution of the MLE as a function of the experimental data. We show that, when considering perturbations from an initial data set, our approach is significantly more computationally efficient than re-fitting model parameters while resulting in acceptable model fits to the updated data. We use the continuation technique to develop an explicit functional relationship between fit model parameters and experimental data that can be used to measure the sensitivity of the MLE to experimental data. We then leverage this inverse sensitivity analysis to select between model fits with similar information criteria, \textit{a priori} determine the experimental measurements to which the MLE is most sensitive, and suggest additional experimental measurements that can resolve parameter uncertainty.
Tyler Cassidy
2023-03-16T10:10:03Z
http://arxiv.org/abs/2303.09194v1
# A continuation technique for maximum likelihood estimators in biological models

###### Abstract

Estimating model parameters is a crucial step in mathematical modelling and typically involves minimizing the disagreement between model predictions and experimental data. This calibration data can change throughout a study, particularly if modelling is performed simultaneously with the calibration experiments, or during an ongoing public health crisis as in the case of the COVID-19 pandemic. Consequently, the optimal parameter set, or maximum likelihood estimator (MLE), is a function of the experimental data set. Here, we develop a numerical technique to predict the evolution of the MLE as a function of the experimental data. We show that, when considering perturbations from an initial data set, our approach is significantly more computationally efficient than re-fitting model parameters while resulting in acceptable model fits to the updated data. We use the continuation technique to develop an explicit functional relationship between fit model parameters and experimental data that can be used to measure the sensitivity of the MLE to experimental data. We then leverage this inverse sensitivity analysis to select between model fits with similar information criteria, _a priori_ determine the experimental measurements to which the MLE is most sensitive, and suggest additional experimental measurements that can resolve parameter uncertainty.

## 1 Introduction

As quantitative modelling becomes more prevalent across biology and medicine (Altrock et al., 2015; Perelson, 2002; Sanche et al., 2020), mathematical models are increasingly being developed during the experimental data collection that will inform model parameters. This cooperation facilitates the use of mathematical modelling to inform experimental design and suggest potential intervention strategies (Cardenas et al., 2022; Luo et al., 2022; Sanche et al., 2020; Zhang et al., 2022). The COVID-19 pandemic is a striking example of the resulting feedback loop, where mathematical models suggest intervention strategies that influence the evolving public health crisis before being re-calibrated to new data (Davies et al., 2020; Holmdahl and Buckee, 2020; Thompson, 2020). Each updated data set requires re-calibration of the model, typically through computationally expensive optimization techniques. To reduce the computational cost of the re-calibration step, it is common to use the existing parameters as a starting point when performing parameter fitting to incoming experimental data sets. This approach recycles optimization work but does not leverage the relationship between the initial and updated experimental data sets. Here, we present a computational method to incorporate information about evolving data sets during the model validation and parameter estimation steps. Specifically, for given model parameters and an initial experimental data set, we develop a method to predict the best-fit parameter set to an updated experimental data set. Our approach can be viewed as a numerical continuation technique (De Souza and Humphries, 2019; Dhooge et al., 2008). However, rather than studying the dynamical properties of the mathematical model as a function of model parameters, we consider the evolution of best-fit model parameters as a function of the experimental data. We use the necessary condition for a local optimum to write the best-fit parameters as an implicit function of the experimental data.
Thus, we predict best-fit parameter sets for evolving experimental data without performing any optimization. Avoiding optimization leads to significant computational savings and we demonstrate these gains via two examples. In both these examples, our prediction method produces comparable model fits to randomly perturbed data sets to optimization techniques without the computational cost of solving the inverse optimization problem. While our approach does lead to increased computational efficiency, the more immediate application of our work may be in experimental design. Specifically, we identify an explicit relationship between individual best-fit parameter values and individual experimental data points through our continuation approach. We can therefore quantify which experimental measurements are the most informative for determining best-fit parameters and measure the sensitivity of parameter estimates to perturbations in data. The role of experimental design in model selection and parameterization has been extensively studied (Cardenas et al., 2022; Li and Vu, 2013, 2015; Silk et al., 2014). In particular, Li and Vu (2015) studied how correlations between best-fit model parameters can impact practical and structural identifiability of model parameters while Cardenas et al. (2022); Silk et al. (2014) explored how experimental design impacts model selection from a class of possible mathematical models. Conversely, our contribution explicitly relates individual experimental measurements with individual best-fit parameter estimates. We explicitly link our continuation technique to the Fisher information matrix commonly used in optimal experimental design (Braniff et al., 2019; Kreutz and Timmer, 2009). Taken together, our approach allows the increased confidence in model parametrization from optimal experimental design to be mapped directly to individual model parameters. Accordingly, we can therefore design experiments to address specific uncertainties in parameter estimates. Furthermore, our work offers a distinct step towards understanding how robust parameter estimates are to evolving data. Many existing computational methods quantify confidence in parameterization; formal parameter sensitivity analyses (Maiwald et al., 2016; Marino et al., 2008; Zi, 2011), virtual population approaches (Allen et al., 2016; Cassidy and Craig, 2019; Jenner et al., 2021), or parameter identifiability analysis (Castro and de Boer, 2020), often via profile likelihood computation (Kreutz et al., 2012; Raue et al., 2014, 2009), quantify how robust model predictions are to parameter variation. In particular, these techniques view the experimental data as fixed up to experimental noise and focus on the relationship between model parameters and model predictions. We offer a complementary approach to existing sensitivity analysis by explicitly studying how the best-fit parameters vary due to changes in calibration data. As we will see, our approach encodes information from local sensitivity analysis when calculating the functional relationship between the best-fit parameters and the calibration data. Consequently, while classical sensitivity analysis quantifies variability in model output due to change in model parameters, our approach considers changes in model parameters, and thus model predictions, as a function of the calibration data. We demonstrate this mapping of experimental data to best-fit parameter via an example drawn from mathematical oncology (Cassidy et al., 2021). 
These results, when combined with existing information criteria like the AIC or BIC (Kass and Raftery, 1995), allow for modellers to quantify the robustness of best-fit parameter estimates when comparing different model fits to experimental data. The remainder of the article is structured as follows. We begin by defining the optimization problem in Section 2.1. We develop the continuation method in Section 2.2, discuss our numerical implementation in 2.3, and explore the connection between our continuation approach and classical profile likelihood in 3.1. We then turn to two examples from mathematical biology to illustrate the utility of our technique in Section 3.2 before finishing with a brief discussion. ## 2 Methods ### Formulation of the optimization problem Here, we introduce the framework of the underlying optimization problem. We focus on ordinary differential equation (ODE) models representing biological processes, as these models are common throughout mathematical biology. However, our approach extends to partial differential equation or delay differential equation models directly. We consider a generic ODE based model throughout the remainder of this work. Let the model states be given by \(x(t)\in\mathbb{R}^{n}\) with model parameters denoted by \(\theta\in\Omega\subset\mathbb{R}^{p}\) where \(\Omega\) is a subset of biologically plausible parameter values. We explicitly allow the initial condition \(x(0)\) to depend explicitly on the model parameters \(\theta\). Taken together, we consider the differential equation model \[\frac{\mathrm{d}}{\mathrm{dt}}x(t)=f(x,\theta);\quad x(0)=x_{0}(\theta) \tag{1}\] where \(f\) is continuously differentiable in \(x\) and \(\theta\). We consider calibration data \(\{\phi_{i}\}_{i=1}^{d\times m}\) representing \(m\) measurements each taken at \(d\) time points \(\{t_{i}\}_{i=1}^{d}\). It is possible that model species are not directly comparable against the calibration data so we define the \(m\) model observables by \[y_{i}(\theta)=h(x(t_{i},\theta),\theta)\in\mathbb{R}^{d\times m}.\] In what follows, we consider \(m=1\) for notational simplicity although the analysis extends for \(m\geqslant 2\). #### Likelihood function and objective function **Remark 2.1**.: _The methods that follow do not assume a specific objective function. However, we do assume that the objective function is twice continuously differentiable as is commonly the case. For simplicity, we present the remainder of our results using the common log-likelihood formulation (Maiwald et al., 2016; Stapor et al., 2018)._ The likelihood describes the probability of observing experimental data \(\phi\) as a function of \(\theta\) and is given by \[\mathcal{L}(y(x(t,\theta)),\phi)=\prod_{i=1}^{d}\frac{1}{\sqrt{2 \pi\sigma_{i}^{2}}}\exp\left[-\frac{(y_{i}(\theta)-\phi_{i}^{*})^{2}}{\sigma_{ i}^{2}}\right] \tag{2}\] The experimental error at each measurement point, \(\sigma_{i}\), can be estimated as an additional model parameter or fixed to a known value. Here, we follow Sharp et al. (2022) and take \(\sigma_{i}\) fixed at a known constant value, although it is possible to include \(\sigma_{i}\) in the vector of unknown parameters \(\theta\). 
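As a concrete illustration of how the observables \(y_i(\theta)\) and the likelihood (2) can be evaluated in practice, the sketch below pairs a toy logistic-growth ODE (a stand-in for model (1)) with the Gaussian likelihood above. The model, parameter values, observation times, and \(\sigma\) are illustrative assumptions, not quantities from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for model (1): logistic growth dx/dt = r*x*(1 - x/K), x(0) = x0(theta).
def simulate(theta, t_obs):
    r, K, x0 = theta
    sol = solve_ivp(lambda t, x: r * x * (1.0 - x / K), (0.0, t_obs[-1]), [x0],
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return sol.y[0]                                  # observable y_i(theta) = x(t_i, theta)

# Log of the likelihood (2), as written, with sigma_i fixed to a known constant.
def log_likelihood(theta, t_obs, phi, sigma=0.1):
    y = simulate(theta, t_obs)
    return np.sum(-np.log(np.sqrt(2.0 * np.pi * sigma**2)) - (y - phi) ** 2 / sigma**2)

t_obs = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
theta_true = np.array([0.8, 10.0, 0.5])
rng = np.random.default_rng(1)
phi = simulate(theta_true, t_obs) + 0.1 * rng.normal(size=t_obs.size)   # synthetic calibration data
print(log_likelihood(theta_true, t_obs, phi))
```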
The maximum likelihood estimator (MLE) \(\theta^{*}\), and thus best-fit model parameters for the given experimental data \(\phi\), is defined by the solution of the inverse problem \[\theta^{*}=\operatorname*{argmax}_{\theta\in\Omega}\mathcal{L}( \theta,\phi^{*}).\] As the differential equations defining \(y(x(t,\theta))\) rarely have explicit solutions, the likelihood (2) is difficult to evaluate analytically. It is therefore standard to minimize the negative log-likelihood \(G(\theta,\phi)=-\log\left(\mathcal{L}(y(x(t,\theta)),\phi^{*})\right)\) given by \[G(\theta,\phi)=\sum_{i=1}^{d}\log\left(\sqrt{2\pi\sigma_{i}^{2}} \right)+\frac{(y_{i}(\theta)-\phi_{i}^{*})^{2}}{\sigma_{i}^{2}}. \tag{3}\] Under the assumption that \(\sigma_{i}=\sigma\) is fixed, the error term \(\log\left(\sqrt{2\pi\sigma^{2}}\right)\) and denominator of \(G(\theta,\phi)\) are constant and do not influence the solution of the optimization problem. The maximum likelihood estimator \(\theta^{*}\) is the parameter set that minimizes \(G(\theta,\phi^{*})\). A number of computational techniques exist to minimize \(G(\theta,\phi)\) and thus calculate \(\theta^{*}\). These optimization techniques typically require simulating the mathematical model (1) at each optimization step. Further complicating the optimization, \(G(\theta,\phi^{*})\) is often non-convex with multiple local minima. ### Continuation of maximal likelihood estimator In (3), we explicitly write the objective function \(G\) as a function of the model parameters \(\theta\) and the experimental data \(\phi\). Accordingly, the MLE \(\theta^{*}\) is an implicit function of the experimental data \(\phi\) defined as the solution of the optimization problem \[\theta^{*}(\phi)=\operatorname*{argmax}_{\theta\in\Omega}\mathcal{L}(\theta, \phi). \tag{4}\] Model fitting is increasingly performed concurrently with experiments (Luo et al., 2022) or obtained from an evolving real-world scenario, as in epidemic modelling (Sanche et al., 2020). In both of these cases, the experimental data is evolving and should not be considered as known and constant. Accordingly, we are interested in the MLE as a function of the experimental data \(\phi\). Most existing optimization techniques consider the experimental data fixed and omit this dependence. Here, we develop a continuation type technique to compute the evolution of \(\theta^{*}\) numerically as a function of \(\phi\) from an initial solution of the optimization problem. Ultimately, we calculate the evolution of \(\theta^{*}(\phi)\) as the calibration data varies to generate a curve of potential MLEs in \((\phi,\theta^{*})\) space using a numerical continuation technique. Numerical continuation methods compute branches of implicitly defined curves. A standard application of these continuation type techniques in mathematical biology is numerical bifurcation analysis (Dhooge et al., 2008; Sanche et al., 2022). In their most common form, numerical bifurcation techniques compute equilibrium systems of a non-linear dynamical system as a function of model parameters but can be used to detect much richer dynamical behaviour (De Souza and Humphries, 2019). Often, these continuation techniques leverage "predictor-corrector" algorithms. Predictor-corrector approaches use the implicit function theorem to predict the solution to the corresponding non-linear system of equations. Then, the predicted solution is used as a starting value to explicitly calculate the solution of the system of equations during the corrector step. 
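The predictor-corrector idea is easiest to see on a scalar example. The sketch below traces the root of a purely illustrative algebraic equation \(g(x,\lambda)=0\) as \(\lambda\) varies, using the implicit-function-theorem prediction followed by a few Newton corrector iterations; the function \(g\) and step sizes are not related to the models in this paper.

```python
import numpy as np

# Toy predictor-corrector continuation for the implicitly defined curve
#   g(x, lam) = x**3 + x - lam = 0,   tracing x*(lam) as lam varies.
def g(x, lam):
    return x**3 + x - lam

def dg_dx(x, lam):
    return 3.0 * x**2 + 1.0

def dg_dlam(x, lam):
    return -1.0

x, lam, dlam = 0.0, 0.0, 0.1            # known solution: g(0, 0) = 0
branch = [(lam, x)]
for _ in range(50):
    # predictor: implicit function theorem, dx/dlam = -g_x^{-1} g_lam
    x_pred = x + (-dg_dlam(x, lam) / dg_dx(x, lam)) * dlam
    lam += dlam
    # corrector: a few Newton iterations on g(., lam) = 0 starting from the prediction
    x = x_pred
    for _ in range(3):
        x -= g(x, lam) / dg_dx(x, lam)
    branch.append((lam, x))
print(branch[-1])                        # approximately (5.0, x) with x**3 + x = 5
```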
Here, we develop a similar "prediction-correction" strategy to predict the behaviour of the solution \(\theta^{*}(\phi)\) of the inverse problem (4) as a function of the data \(\phi\). We focus on the "predictor" step, as the corrector step, if necessary, can utilize existing numerical optimization techniques to calculate the MLE. As the log-likelihood (3) is continuously differentiable, local optimal must satisfy \[\mathrm{D}_{\theta}G(\theta^{*},\phi)=0, \tag{5}\] so we necessarily have \[\theta^{*}(\phi)\in\{\theta\in\Omega|\mathrm{D}_{\theta}G(\theta^{*},\phi)=0\}.\] However, unlike the implicit equation used to determine equilibria of a dynamical system and used in continuation techniques for numerical bifurcation analysis, the optimality condition (5) is a necessary, but not sufficient, condition for \(\theta^{*}\) to be a MLE. Models that are not structurally identifiable (Raue et al., 2014) have manifolds in parameter space on which this optimality constraint holds but are not necessarily MLEs. We discuss the relationship between our approach and profile likelihood classifications of structural identifiability in Section 3.1. Now, let \(\theta_{0}^{*}\) be the MLE for calibration data \(\phi_{0}\). Further, let the Hessian \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\) be invertible at \((\theta_{0}^{*},\phi_{0})\in\mathbb{R}^{p}\times\mathbb{R}^{d}\) and consider the function \[\mathrm{D}_{\theta}G(\theta^{*},\phi):\mathbb{R}^{p}\times\mathbb{R}^{d} \rightarrow\mathbb{R}^{p}.\] Then, the implicit function theorem ensures the existence of a function \(\Psi(\phi)\) such that \[\mathrm{D}_{\theta}G(\Psi(\phi),\phi)=0\] in a neighbourhood of \(\phi_{0}\) with \(\Psi(\phi_{0})=\theta^{*}(\phi_{0})\). It is natural to consider \(\Psi(\phi)\) as the predicted MLE \(\theta^{*}(\phi)\) for \(\phi\) in a neighbourhood of \(\phi_{0}\). The implicit function theorem ensures that \(\Psi\) exists but computing \(\Psi(\phi)\) analytically is functionally impossible. However, the implicit function \(\Psi(\phi)\) is continuously differentiable and we expand \(\Psi\) as a function of the calibration data \(\phi\) using Taylor series \[\Psi(\phi+\Delta\phi)=\Psi(\phi)+\mathrm{D}\Psi(\phi)\Delta\phi+\mathcal{O}( \Delta\phi^{2}). \tag{6}\] where \(\phi+\Delta\phi\) is the updated calibration data. Then, to predict \(\Psi\) starting from a known solution \(\Psi(\phi)=\theta^{*}\) we calculate \(\mathrm{D}\Psi(\phi)\). The implicit function theorem implies that \[\mathrm{D}\Psi=-\left[\mathrm{D}_{\theta}^{2}G(\Psi(\phi),\phi)\right]^{-1} \mathrm{D}_{\theta,\phi}^{2}G(\Psi(\phi),\phi).\] We thus use \(\mathrm{D}\Psi\) to evaluate (6) and thus perform the continuation step. ### Numerical Implementation We now show how to use the objective function (3) to calculate finite difference approximations to the derivatives included in (6). As before, we assume that we are given a point \((\theta_{0}^{*},\phi_{0})\in\mathbb{R}^{p}\times\mathbb{R}^{d}\) such that \[\theta_{0}^{*}=\operatorname{argmin}_{\theta\in\Omega}G(\theta, \phi_{0}).\] For \(\theta_{n}\) denoting the \(n\)-th parameter, we calculate \[\frac{\partial G(\theta,\phi)}{\partial\theta_{n}}=\sum_{i=1}^{d} 2\left(y_{i}(\theta)-\phi_{i}\right)\frac{\partial y_{i}(\theta)}{\partial \theta_{n}}\] and so \[\left[\mathrm{D}_{\theta,\phi}^{2}G(\Psi(\phi),\phi)\right]_{(n,i)}=-2\frac{\partial y_{i}(\theta)}{\partial\theta_{n}}. 
\tag{7}\] The derivatives \(\partial_{\theta_{n}}y_{i}(\theta)\) can be calculated through finite difference schemes (Zi, 2011) \[\frac{\partial y_{i}(\theta)}{\partial\theta_{n}}=\frac{y_{i}(\theta+\Delta\theta_{n})-y_{i}(\theta-\Delta\theta_{n})}{2\Delta\theta_{n}}+\mathcal{O}\left((\Delta\theta_{n})^{2}\right),\] where \(\Delta\theta_{n}\) is a small perturbation in only the \(n\)-th parameter. In practice, it is standard to take \(\Delta\theta_{n}\) to be some small percentage of the initial parameter \(\theta_{n}\) (Li et al., 2011). In this case, computing \(\mathrm{D}_{\theta,\phi}^{2}G(\Psi(\phi),\phi)\) requires \(2p\) model simulations, where \(p\) is the number of model parameters. We note that \(\partial_{\theta_{n}}y_{i}(\theta)\) is commonly used to perform local sensitivity analysis and that higher-order finite difference approximations can be used to calculate \(\mathrm{D}_{\theta,\phi}^{2}G(\Psi(\phi),\phi)\) more accurately.

Calculating the Hessian \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\) via finite differences is simple to implement but computationally expensive due to the number of objective function evaluations. However, the Hessian, or the observed Fisher information, is commonly used throughout parameter optimization algorithms and other techniques such as profile likelihood calculations, estimates of the likelihood function, and classical sensitivity analysis, which has led to recent advances in the development of computationally efficient techniques to calculate \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\) (Stapor et al., 2018) and the ability to recycle these calculations to avoid computational cost. In the following examples, we use a finite difference scheme to calculate \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\). We calculate the diagonal elements of \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\) using forward second order differences and the off-diagonal terms by \[\frac{\partial^{2}G(\theta,\phi)}{\partial\theta_{i}\partial\theta_{j}}=\left(\frac{1}{4(\Delta\theta_{i})(\Delta\theta_{j})}\right)\left[G(\theta+\Delta\theta_{i}+\Delta\theta_{j},\phi)-G(\theta+\Delta\theta_{i}-\Delta\theta_{j},\phi)\right.\] \[\left.-G(\theta-\Delta\theta_{i}+\Delta\theta_{j},\phi)+G(\theta-\Delta\theta_{i}-\Delta\theta_{j},\phi)\right]+\mathcal{O}\left((\Delta\theta_{i})^{2},(\Delta\theta_{j})^{2}\right).\] Thus, our computation of the Hessian requires \(2p(p+1)\) objective function evaluations, although, as mentioned, more efficient implementations are available. In fact, many gradient-based optimization techniques approximate the Hessian \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\) at each iteration (MATLAB, 2017). For example, both fmincon and fminunc in MATLAB (MATLAB, 2017) calculate \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\) at each step and return the pre-computed Hessian as an output of the optimizer. It is therefore possible, and efficient, to recycle this calculation when calculating an update to \(\theta_{0}^{*}\) using (5). All told, this numerical implementation requires \(2p(p+2)\) model simulations to evaluate (5). This computational cost is certainly not optimal but does benefit from re-using calculations performed in local sensitivity analysis and the optimization step.
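A minimal NumPy sketch in the spirit of this numerical implementation is given below: finite-difference approximations of \(\mathrm{D}^{2}_{\theta}G\) and \(\mathrm{D}^{2}_{\theta,\phi}G\), followed by solving the linear system for \(\mathrm{D}\Psi\) and forming the prediction (6). The toy observable, step sizes, and the use of the same symmetric stencil on the Hessian diagonal are simplifying assumptions; this is not the released repository code.

```python
import numpy as np

def objective(theta, phi, simulate, sigma=1.0):
    """Least-squares objective corresponding to (3) with fixed sigma
    (constant log terms are dropped since they do not affect derivatives)."""
    return np.sum((simulate(theta) - phi) ** 2 / sigma**2)

def mixed_second_derivative(theta, simulate, h=1e-4, sigma=1.0):
    """D^2_{theta,phi} G: entry (n, i) = -2 (dy_i/dtheta_n) / sigma^2, cf. (7) with sigma kept."""
    p, d = theta.size, simulate(theta).size
    M = np.zeros((p, d))
    for n in range(p):
        step = np.zeros(p)
        step[n] = h * max(abs(theta[n]), 1.0)
        dy = (simulate(theta + step) - simulate(theta - step)) / (2.0 * step[n])
        M[n] = -2.0 * dy / sigma**2
    return M

def hessian(theta, phi, simulate, h=1e-4, sigma=1.0):
    """Finite-difference Hessian D^2_theta G (the observed Fisher information)."""
    p = theta.size
    steps = h * np.maximum(np.abs(theta), 1.0)
    f = lambda th: objective(th, phi, simulate, sigma)
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.zeros(p); ei[i] = steps[i]
            ej = np.zeros(p); ej[j] = steps[j]
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4.0 * steps[i] * steps[j])
    return H

def predict_mle(theta_star, phi, delta_phi, simulate, sigma=1.0):
    """Predictor step (6): solve D^2_theta G @ DPsi = -D^2_{theta,phi} G, then update theta*."""
    H = hessian(theta_star, phi, simulate, sigma=sigma)
    M = mixed_second_derivative(theta_star, simulate, sigma=sigma)
    DPsi = np.linalg.solve(H, -M)                    # p x d sensitivity of the MLE to the data
    return theta_star + DPsi @ delta_phi, DPsi

# Toy observable standing in for y_i(theta): exponential decay sampled at t_obs.
t_obs = np.array([0.0, 1.0, 2.0, 4.0])
simulate = lambda th: th[0] * np.exp(-th[1] * t_obs)
theta_star = np.array([2.0, 0.5])                    # treated as the MLE for the baseline data
phi = simulate(theta_star)                           # baseline data consistent with theta_star
delta_phi = np.array([0.0, 0.05, 0.0, -0.05])        # perturbation of the calibration data
theta_pred, DPsi = predict_mle(theta_star, phi, delta_phi, simulate)
print(theta_pred, DPsi.shape)
```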
Finally, while we have written (5) with the inverse of \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\), it is computationally more appropriate to solve the linear system of equations \[\mathrm{D}_{\theta}^{2}G(\theta,\phi)\mathrm{D}\Psi=-\mathrm{D}_{\theta,\phi}^ {2}G(\Psi(\phi),\phi)\] for the unknown \(\mathrm{D}\Psi\). Code to implement this continuation technique is available at [https://github.com/ttcassid/MLE_Continuation](https://github.com/ttcassid/MLE_Continuation). ## 3 Results ### Relationship with existing techniques There are a number of existing techniques to study the relationship between model parameters and data. While our continuation technique focuses on the relationship between the MLE and the calibration data, it has many ties to these existing techniques. We therefore discuss how this continuation method relates to parameter identifiability as assessed by the profile likelihood; local sensitivity analysis; and experimental design, with a focus on using the explicit relationship between data and the MLE to suggest additional experimental measurements. #### Parameter identifiability Thus far, we have explicitly written the MLE estimator as a function of the experimental data used to fit a model. Our approach is intrinsically related to parameter identifiability analysis. Identifiability analysis attempts to determine if available experimental observations are capable to uniquely determine model parameters. Accordingly, the practical identifiability of a mathematical model depends on available experimental data. The _profile likelihood_, given by \[PLE_{\theta_{i}}(c)=\min_{\theta_{i}=c,\,\theta\in\mathbb{R}^{p}}G(\theta, \phi),\] and introduced by Raue et al. (2009), is a projection of the likelihood function onto the model parameter \(\theta_{i}=c\). The profile likelihood illustrates the behaviour of the likelihood function as the parameter \(\theta_{i}\) is fixed away from the optimal value \(\theta_{i}^{*}\). The shape of \(PLE_{\theta_{i}}(c)\) illustrates the confidence interval of the parameter estimate \(\theta_{i}^{*}\) for given experimental data. Formally, Raue et al. (2009) define these confidence intervals by \[\mathrm{C.I.}(\theta_{i},\alpha)=\{c|PLE_{\theta_{i}}(c)-PLE_{\theta_{i}}( \theta_{i}^{*})<\Delta_{\alpha}\}\] where \(\Delta_{\alpha}=\chi^{2}(\alpha,df)\) is the \(\chi^{2}\) distribution at significance level \(\alpha\) and \(df\) degrees of freedom (Raue et al., 2009). A parameter is practically identifiable in the sense of Raue et al. (2009) with confidence level \(\alpha\) if \(\mathrm{C.I.}(\theta_{i},\alpha)\) is bounded in parameter space for given experimental data. Conversely, a non-identifiable parameter has a profile likelihood that does not increase past the threshold \(\Delta_{\alpha}\). The profile likelihood is intrinsically linked to the available experimental data \(\phi_{i}\). We view the PLE as a function of both the parameter \(\theta_{i}\) and the experimental data \(\phi\) \[PLE_{\theta_{i}}(c,\phi)=\min_{\theta_{i}=c,\,\theta\in\mathbb{R}^{p}}G(\theta,\phi).\] For practically unidentifiable models, it is natural to ask what perturbations to the experimental data could render the model practically identifiable. Raue et al. (2009) use the profile likelihood of a model parameter to suggest additional experiments to resolve practical non-identifiability. 
They simulate the model for parameter values along \(PLE_{\theta_{i}}\) to suggest additional experimental measurements at times \(t_{s,i}\), where \(t_{s,i}\) represents the \(i-\)th _simulated_ measurement time. In our framework, we define \[\theta^{*}|_{\theta_{i}=c}(\phi)=\mathrm{argmin}_{\theta_{i}=c,\,\theta\in \mathbb{R}^{p}}G(\theta,\phi),\] so that \[PLE_{\theta_{i}}(c,\phi)=G(\theta^{*}|_{\theta=c}(\phi),\phi).\] We note that the definition of \(\theta^{*}|_{\theta=c}(\phi)\) is precisely that of \(\theta^{*}(\phi)\) with the added constraint that \(\theta_{i}=c\). We can calculate \(\mathrm{D}_{\theta}\theta^{*}|_{\theta=c}\) as a function of the experimental data \(\phi\) in precisely the same manner as described previously. Consequently, our continuation approach can complement the experimental design approach suggested by Raue et al. (2009) by incorporating the sensitivity of the MLE to perturbations in the (simulated or experimental) calibration data. #### Sensitivity analysis Local sensitivity analysis quantifies how small perturbations of the best-fit parameters impact model output (Zi, 2011). A standard approach to local sensitivity analysis is using the finite difference approximation of \[S_{n}(t)=\frac{\partial y(\theta)}{\partial\theta_{n}}=\frac{h(t_{i},\theta+ \Delta\theta_{n})-h(t_{i},\theta-\Delta\theta_{n})}{\Delta\theta_{n}}+ \mathcal{O}\left(\Delta\theta_{n}\right)\] to identify which parameter values strongly impact model projections. When \(|S_{n}|\) is small, the model output is considered to be insensitive to \(\theta_{n}\). The \(n\)-th row of \(\mathrm{D}_{\theta,\phi}^{2}G(\Psi(\phi),\phi)\) is precisely \(S_{n}(t_{i})\) for \(t_{i}\) corresponding to calibration data measurements. When implementing (5), the magnitude of the continuation step \(\mathrm{D}\Psi(\phi)\Delta\phi\) in the direction of \(\theta_{n}\) is scaled by \(S_{n}\). This scaling encodes the local sensitivity of model predictions to variations in parameters in the prediction of \(\Psi(\phi)\). Consequently, our continuation method naturally includes the information gained from local sensitivity analysis. #### Experimental design In our derivation of \(\mathrm{D}\Psi\), we assumed that the Hessian matrix \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\) was invertible. The Hessian gives the curvature of the loglikelihood and is known as the observed Fisher information matrix \(\mathcal{I}_{obs}\). The observed Fisher information is a local measurement in data space. Conversely, the expected Fisher information considers the entirety of data space for fixed model parameters \(\theta\). The expected Fisher information is obtained by taking the expectation of \(\mathrm{D}_{\theta}^{2}G(\theta,\phi)\) over all possible experimental measurements \(\phi\) and is defined via \[\mathcal{I}=\mathbb{E}\left[\mathrm{D}_{\theta}^{2}G(\theta,\phi)\right].\] Many existing experimental design methods leverage the expected Fisher information matrix to minimize the covariance in model parameter estimates via the Cramer-Rao inequality. These experimental design techniques typically maximize some aspect, often the determinant, of the Fisher information matrix as a function of possible data to select the most informative calibration data set (Kreutz and Timmer, 2009). From a geometric perspective, maximizing the determinant of the Fisher information matrix corresponds to minimizing the volume of the confidence ellipsoid engendered from the covariance matrix (Braniff et al., 2019b). In particular, Braniff et al. 
(2019a) considered the case of bistable gene regulatory networks where the fold bifurcation and unstable manifold between stable equilibria complicates experimental design and parameter estimation. Sharp et al. (2022) considered an information-geometry perspective to propose the expected Fisher information matrix and resulting Riemannian manifold as a guide for data collection. As is often the case, both Sharp et al. (2022) and Braniff et al. (2019a) used the expected Fisher information, which considers all possible calibration data via the expectation over \(\phi\). Here, we show how our approach complements the classical Fisher information approach to experimental design, albeit through a local measurement, in \((\theta,\phi)\) space. We recall that \[\mathrm{D}\Psi\Delta\phi=-\left[\mathcal{I}_{obs}\right]^{-1}\mathrm{D}^{2}_{\theta,\phi}G(\Psi(\phi),\phi)\Delta\phi,\] so if \(\mathrm{D}^{2}_{\theta,\phi}G(\Psi(\phi),\phi)\) were the identity, then \(\mathrm{D}\Psi\) would correspond to the Fisher information approach to measuring uncertainty in the MLE. In the calculation of \(\mathrm{D}\Psi\Delta\phi\), the matrix \(\mathrm{D}^{2}_{\theta,\phi}G(\Psi(\phi),\phi)\) maps perturbations in the calibration data \(\Delta\phi\) through the curvature of the loglikelihood to changes in the MLE. Consequently, \(\mathrm{D}^{2}_{\theta,\phi}G(\Psi(\phi),\phi)\) acts as a change of basis matrix from the space of calibration data to parameter space. Simply, \(\mathrm{D}^{2}_{\theta,\phi}G(\Psi(\phi),\phi)\Delta\phi\) scales changes in the calibration data to the confidence ellipsoid in parameter space obtained from \(\left[\mathcal{I}_{obs}\right]^{-1}\). Geometrically, if \(\mathrm{D}^{2}_{\theta}G\) has eigenvalues \(\lambda_{i}\) with corresponding eigenvectors \(\nu_{i}\), then choosing \(\Delta\phi\) such that \(\nu_{i}=\mathrm{D}^{2}_{\theta}G\Delta\phi\) translates perturbations in calibration data to the corresponding eigenspace of the covariance matrix. For example, the \(k\)-th column of \(\mathrm{D}\Psi\) maps perturbations of the \(k\)-th data point to changes in the MLE. Specifically, the sum \[\frac{\Delta\theta^{*}}{\Delta\phi_{k}}=\sum_{j=1}^{p}\left|\mathrm{D}\Psi_{j,k}\right|\] measures the sensitivity of the MLE \(\theta^{*}\) to perturbations in the \(k\)-th data point. Thus, \[\left\|\mathrm{D}\Psi\right\|_{1}=\max_{k=1,2,\ldots,d}\frac{\Delta\theta^{*}}{\Delta\phi_{k}}\] and the most informative data point satisfies \[l=\mathrm{argmax}_{k=1,2,\ldots,d}\frac{\Delta\theta^{*}}{\Delta\phi_{k}},\] where informative is understood as the data point inducing the largest sensitivity in the MLE. As an extreme example, if \[\frac{\Delta\theta^{*}}{\Delta\phi_{n}}=0,\] then perturbations in \(\phi_{n}\) do not impact the MLE estimate, which implies complete insensitivity of the model fit to \(\phi_{n}\). This example corresponds to \(\Delta\phi\) belonging to the kernel of the matrix \(\mathrm{D}^{2}_{\theta,\phi}G\) since we have assumed that \(\mathrm{D}^{2}_{\theta}G\) is invertible. We can therefore utilize our analysis to identify which additional experimental measurements could increase confidence in model parameterization. Consider \(k\) additional measurements \(\{\phi_{s,i}=y_{s,i}(\theta^{*})\}_{i=1}^{k}\) taken directly from the model simulation at times \(\{t_{s,i}\}_{i=1}^{k}\), where the subscript \(s\) indicates simulated data.
We can therefore utilize our analysis to identify which additional experimental measurements could increase confidence in model parameterization. Consider \(k\) additional measurements \(\{\phi_{s,i}=y_{s,i}(\theta^{*})\}_{i=1}^{k}\) taken directly from the model simulation at times \(\{t_{s,i}\}_{i=1}^{k}\), where the subscript \(s\) indicates simulated data. Including \(\{\phi_{s,i}\}\) in the objective function (3) does not change the MLE or the objective function value, as these simulated data exactly match the model values. However, \(\left\|\mathrm{D}\Psi(\phi+\Delta\phi_{s,i})\right\|\) quantifies the sensitivity of the MLE to variability in the \(k\) simulated measurements. Accordingly, the measurement that maximizes \(\left\|\mathrm{D}\Psi(\phi+\Delta\phi_{s,i})\right\|\) for a fixed perturbation size \(\Delta\) is a good candidate for an additional experimental measurement to decrease parameter uncertainty.

### Examples

The continuation framework derived earlier is applicable to a large variety of models throughout the mathematical biology literature. To demonstrate the utility of the continuation method, we consider two examples from distinct fields and model formulations. First, we consider a mathematical model of phenotypic heterogeneity in non-small cell lung cancer (NSCLC) (Cassidy et al., 2021). This model is given by a system of two non-local, structured PDEs representing the density of drug-sensitive and drug-tolerant NSCLC cells. The PDE model is equivalent to a system of integral equations following the introduction of two auxiliary variables, which can be further reduced to a system of ODEs (see Cassidy et al. (2021) for details). The parameters of the ODE model were fit to _in vitro_ NSCLC data taken from growth experiments in treated and untreated media (Cassidy et al., 2021). We also consider a classical model of HIV-1 viral dynamics. This model has been used extensively to understand viral dynamics data (Perelson, 2002) and the identifiability of model parameters was considered by Wu et al. (2008). In that work, Wu et al. (2008) used simulated data to validate their identifiability results; we follow Wu et al. (2008) and use simulated data to illustrate our approach.

#### A PDE model of phenotypic switching in mathematical oncology

Non-genetic phenotypic heterogeneity has been increasingly studied as a driver of treatment resistance in solid cancers (Goldman et al., 2015). A number of mathematical models have been derived to study the emergence of phenotypic plasticity in cancer cell lines (Craig et al., 2019; Gunnarsson et al., 2020; Jolly et al., 2018; Sahoo et al., 2021). We consider the Cassidy et al. (2021) model that tracks the density of NSCLC cells with a drug-sensitive (\(A(t,a)\)) or drug-tolerant (\(B(t,a)\)) phenotype at time \(t\) and age \(a\). The total number of cells of each phenotype is given by \[\bar{A}(t)=\int_{0}^{\infty}A(t,a)\mathrm{d}a\quad\text{and}\quad\bar{B}(t)= \int_{0}^{\infty}B(t,a)\mathrm{d}a. \tag{8}\] The total number of NSCLC cells is given by \(N(t)=\bar{A}(t)+\bar{B}(t)\). Cassidy et al. (2021) considered logistic growth with an Allee effect, wherein cooperation between cells of the same phenotype can lead to increased growth rates, given by \[\begin{aligned} R_{A}(\bar{A}(t),\bar{B}(t)) &=r_{A}\left(1-\frac{\bar{A}(t)+\bar{B}(t)}{K}\right)\quad\text{and}\\ R_{B}(\bar{A}(t),\bar{B}(t)) &=r_{B}\left(1-\frac{\bar{A}(t)+\bar{B}(t)}{K}\right)f_{n}(\bar{A}(t),\bar{B}(t)),\end{aligned} \tag{9}\]
where \(r_{A}\) and \(r_{B}\) are phenotype-specific growth rates, the carrying capacity is \(K\), and the strength of the Allee effect is \[f_{n}(\bar{A}(t),\bar{B}(t))=1+\left(\frac{r_{A}-r_{B}}{r_{B}}\right)\left( \frac{\bar{B}(t)^{n}}{\bar{A}(t)^{n}+\bar{B}(t)^{n}}\right).\] Finally, drug-tolerant and drug-sensitive cells have phenotype-specific death rates \(d_{B}\) and \[d_{A}=\left\{\begin{array}{cc}d_{A}&\text{if untreated}\\ d_{A}^{max}&\text{during treatment.}\end{array}\right.\] \(A(t,a)\) and \(B(t,a)\) satisfy the age-structured PDEs \[\begin{aligned}\partial_{t}A(t,a)+\partial_{a}A(t,a) &=-[d_{A}+R_{A}(\bar{A}(t),\bar{B}(t))]A(t,a)\\ \partial_{t}B(t,a)+\partial_{a}B(t,a) &=-[d_{B}+R_{B}(\bar{A}(t),\bar{B}(t))]B(t,a) \end{aligned}\tag{10}\] with boundary conditions corresponding to cellular reproduction given by \[\left.\begin{aligned} A(t,0)&=2\int_{0}^{ \infty}[R_{A}(\bar{A}(t),\bar{B}(t))\beta_{AA}(a)A(t,a)+f_{n}(\bar{A}(t),\bar{B} (t))R_{B}(\bar{A}(t),\bar{B}(t))\beta_{BA}(a)B(t,a)]\,\mathrm{d}a\\ B(t,0)&=2\int_{0}^{\infty}[R_{A}(\bar{A}(t),\bar{B} (t))\beta_{AB}(a)A(t,a)+f_{n}(\bar{A}(t),\bar{B}(t))R_{B}(\bar{A}(t),\bar{B}(t ))\beta_{BB}(a)B(t,a)]\,\mathrm{d}a.\end{aligned}\right\} \tag{11}\] The functions \(\beta_{ij}\) represent the probability of a reproducing mother cell with age \(a\) and phenotype \(i\) giving birth to a daughter cell with phenotype \(j\). The probability of phenotypic inheritance is given by \[\beta_{ii}(a)=P_{ii}^{*}+(P_{ii}^{max}-P_{ii}^{*})\exp\left(-\sigma_{i}a\right),\] where \(\sigma_{i}\) represents the decay rate of intracellular signalling factors that modulate how ageing impacts the probability of daughter cells retaining the mother cell's phenotype, and \[\beta_{AB}(a)=1-\beta_{AA}(a)\quad\text{and}\quad\beta_{BA}(a)=1-\beta_{BB}(a).\] Further details, including a derivation of the initial conditions of (10), model analysis, and reduction of the phenotype switching model (10) to a system of ODEs can be found in Cassidy et al. (2021). The model (10) was fit to _in vitro_ experimental data corresponding to NSCLC cell population growth in untreated and treated environments, where treatment is applied from day 3 onwards. The calibration data consist of 4 data points \(\{\phi_{i}\}_{i=1}^{4}\) collected at times \(t_{i}=0,2,4,6\) days in the control experiment, and two additional data points \(\{\phi_{i}\}_{i=5}^{6}\) collected on days \(t_{i}=4,6\) during the treated experiment. As anti-cancer treatment is applied from day 3 onwards and decreases the cancer cell population, we necessarily have \(\phi_{5}\leqslant\phi_{3}\) and \(\phi_{6}\leqslant\phi_{4}\). We denote the experimental data used to parametrize the model by \(\{\phi_{i}^{0}\}_{i=1}^{6}\). The model output corresponding to the experimental measurements is thus \[y_{i}(\theta)=N(t_{i},\theta),\] and the objective function is the standard sum of squares error given by \[G_{pheno}(\theta,\phi)=\sqrt{\sum_{i=1}^{6}\left(\log_{10}(N(t_{i},\theta))- \log_{10}(\phi_{i})\right)^{2}}.\] Cassidy et al. (2021) fit the model parameters \([r_{A},r_{B},d_{A}=d_{B},d_{A}^{max}]\) to the treated and untreated experimental data simultaneously for a number of cell lines. The MLE found by Cassidy et al. (2021) corresponds to \(\theta^{*}(\phi^{0})=[0.4827,0.3498,0.7025,0.4198]\).
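For readers who want to reproduce this fit, a minimal sketch of the objective \(G_{pheno}\) is given below. The `total_cells` callable standing in for the reduced ODE model \(N(t,\theta)\), the data layout, and the parameter ordering are assumptions for illustration; only the \(\log_{10}\) sum-of-squares structure and the calibration design are taken from the text.

```python
import numpy as np

# Calibration design from the text: four control measurements and two treated
# measurements (treatment applied from day 3 onwards).
T_CONTROL = np.array([0.0, 2.0, 4.0, 6.0])
T_TREATED = np.array([4.0, 6.0])

def g_pheno(theta, phi, total_cells):
    """Objective G_pheno(theta, phi) for the phenotype-switching model.

    theta       : parameter vector, assumed ordered as [r_A, r_B, d_A, d_A_max]
    phi         : six measured cell counts (four control, then two treated)
    total_cells : callable(times, theta, treated) -> model N(t_i, theta)
    """
    y = np.concatenate([
        total_cells(T_CONTROL, theta, treated=False),
        total_cells(T_TREATED, theta, treated=True),
    ])
    residuals = np.log10(y) - np.log10(np.asarray(phi, dtype=float))
    return np.sqrt(np.sum(residuals**2))
```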
We perturbed the experimental data collected by Craig et al. (2019) with increasing amounts of Gaussian noise. We created 10 perturbed data sets \(\{\phi_{i}^{j}\}_{i=1}^{6}\), where the index \(j=1,2,...,10\) denotes the \(j\)-th perturbed data set, and the normally distributed noise with \(\mu=0\) and \(\sigma^{2}=1\) is scaled such that \[\|\log_{10}(\phi_{i}^{j})-\log_{10}(\phi_{i}^{0})\|=\left(0.05+j\,h_{step}\right)\|\log_{10}(\phi_{i}^{0})\|,\qquad h_{step}=(0.75-2\times 0.05)\left(\frac{2}{10(11)}\right),\] where \(h_{step}=0.65/55\) was chosen such that \(\|\log_{10}(\phi_{i}^{10})-\log_{10}(\phi_{i}^{0})\|=0.75\|\log_{10}(\phi_{i}^{0})\|\). We enforce that this randomly perturbed data satisfies \(\phi_{5}\leqslant\phi_{3}\) and \(\phi_{6}\leqslant\phi_{4}\). For each perturbed data set \(\{\phi_{i}^{j}\}\), we used the continuation method described in Section 2.2 to calculate \[\Psi(\phi^{j})=\theta^{*}(\phi^{j-1})+\mathrm{D}\Psi(\phi^{j-1})\Delta\phi+ \mathcal{O}(\Delta\phi^{2}). \tag{12}\] The naive approach to calculate the MLE \(\theta^{*}(\phi^{j})\) for updated data \(\phi^{j}\) would be to use the MLE from the previous data, \(\theta^{*}(\phi^{j-1})\), as an initial starting guess for the parameter fitting step. Hence, to illustrate the utility of our continuation technique, we calculated \(\Psi(\phi^{j})\) using (12) and then calculated \(G_{pheno}(\Psi(\phi^{j}),\phi^{j})\). We also calculated the true MLE \(\theta^{*}(\phi^{j})\) using the Matlab algorithm fmincon from the starting guesses \(\Psi(\phi^{j})\) and \(\theta^{*}(\phi^{j-1})\). In Figure 1 A), we show the objective function value evaluated at the updated data \(\phi^{j}\) and three parameter sets: the naive starting point, \(\theta^{*}(\phi^{j-1})\); the predicted MLE, \(\Psi(\phi^{j})\); and the true MLE, \(\theta^{*}(\phi^{j})\). We note that the non-monotonic profile of the objective function \(G_{pheno}\) in Figure 1 A) is to be expected as we are adding noise to experimental data. This noise may perturb the existing data away from dynamics that can be well-described by the mathematical model. Accordingly, the important information from Figure 1 A) is the comparison \[G_{pheno}(\theta^{*}(\phi^{i}),\phi^{i})\leqslant G_{pheno}(\Psi(\phi^{i}), \phi^{i})<G_{pheno}(\theta^{*}(\phi^{i-1}),\phi^{i}),\] which demonstrates the accuracy of the continuation step (5) in driving a relative decrease in \(G_{pheno}\). Further, in Figure 1 B), we show the cumulative number of objective function evaluations when calculating \(\theta^{*}(\phi^{j})\) for \(j=1,2,...,10\) when starting the optimization from \(\theta^{*}(\phi^{j-1})\) and \(\Psi(\phi^{j})\). The total number of function evaluations used is lower when starting the optimization from the predicted MLE \(\Psi(\phi^{j})\) than when starting from \(\theta^{*}(\phi^{j-1})\). More strikingly, the predicted-MLE objective value \(G(\Psi(\phi^{j}),\phi^{j})\) is comparable to \(G(\theta^{*}(\phi^{j}),\phi^{j})\) in Figure 1 A), and there is a computational benefit to only calculating the predicted MLE \(\Psi(\phi^{j})\) rather than re-fitting the parameters. Taken together, the results shown in Figure 1 demonstrate the accuracy and computational efficiency gained by calculating \(\Psi(\phi^{j})\).

Figure 1: Comparison between MLE estimates obtained using the naive and continuation approaches. Panel **A** shows a comparison of the objective function value for the naive and continuation guesses as well as the true minimal objective function value as a function of the perturbation of the experimental data from the initial data. Panel **B** shows a comparison of the number of objective value evaluations required to obtain the minimal value when starting from the naive or predicted MLE with the number of function evaluations required to calculate \(\Psi(\phi^{j})\).
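A compact sketch of the predictor step (12) and the warm-started refit it enables is shown below. The use of `scipy.optimize.minimize` in place of fmincon, and the names in the commented usage example, are assumptions of the sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def predict_mle(theta_prev, dpsi_prev, phi_new, phi_prev):
    """First-order continuation predictor: Psi(phi^j) = theta*(phi^{j-1}) + DPsi * dphi."""
    dphi = np.asarray(phi_new, dtype=float) - np.asarray(phi_prev, dtype=float)
    return np.asarray(theta_prev, dtype=float) + dpsi_prev @ dphi

def refit(objective, phi_new, theta_start, bounds=None):
    """Warm-started refit of the true MLE, here with scipy instead of fmincon."""
    res = minimize(lambda th: objective(th, phi_new), theta_start,
                   method="L-BFGS-B", bounds=bounds)
    return res.x, res.fun, res.nfev

# Hypothetical usage for a sequence of perturbed data sets phi_list = [phi0, phi1, ...]:
#   theta, dpsi = theta_star_phi0, dpsi_at_phi0          # from an initial fit
#   for phi_prev, phi_new in zip(phi_list[:-1], phi_list[1:]):
#       theta_pred = predict_mle(theta, dpsi, phi_new, phi_prev)
#       theta, g_val, n_eval = refit(objective_fn, phi_new, theta_pred)
```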
We now demonstrate how to utilize the continuation framework to identify additional time points to increase confidence in model parameters. We focus on the treated environment and consider additional time points \(t_{s,i}=3.1,3.2,3.3,3.4,3.5,5,7\) days with corresponding simulated measurements \(\{\phi_{i,s}\}_{i=1}^{7}=N(t_{s,i})\). We perturb each of these simulated measurements by a fixed amount, \(\Delta\phi=\pm 0.3N(3.1)\), to give 14 additional, perturbed measurements. We appended each of these 14 measurements to the experimental data and predicted the MLE for these appended data sets. We calculated the relative change in the MLE for each model parameter and each of the 14 appended data sets. We note that each of the simulated data points occurs following the beginning of therapy. The immediate decrease observed in \(N(t)\) following the beginning of treatment is due to the death of sensitive cells following treatment administration and is controlled by the parameter \(d_{A}^{max}\). From the biological interpretation of the parameters, we expect \(d_{A}^{max}\) to be highly sensitive to perturbations in these data points. As expected, \(d_{A}^{max}\) was the most sensitive model parameter to perturbations of the simulated data, and we show the percent relative change in \(d_{A}^{max}\) from the unperturbed data in Figure 2 **B**). As expected, the maximal death rate of sensitive cells increased when the simulated data point was decreased from the true value and decreased when the simulated data point was increased. The treatment-sensitive population rapidly shrinks during therapy. The stabilization and rebound of the population during therapy is due to the expansion of the drug-resistant population. This stabilization occurs once the drug-sensitive population has been maximally suppressed by the drug effect. The most informative simulated data point, as measured by the magnitude of the relative change in the parameter \(d_{A}^{max}\), was at time \(t_{s,i}=3.4\). At \(t=3.4\), drug-sensitive cells are no longer dominant due to drug pressure. The depth of the population response to treatment, as measured by \(N(3.4)\), is thus highly sensitive to the death rate of these drug-sensitive cells under treatment. In Figure 2 **A**), we show the simulated experimental measurements and predicted model dynamics for the most informative time point. The predicted model simulations capture the perturbed data point while retaining good fits to the true experimental data.

Figure 2: Evaluating additional time points to identify \(d_{A}^{max}\) in an _in vitro_ model of NSCLC. Panel **A** shows a selection of predicted model dynamics when fit to experimental data with a single additional time point \(\phi_{i,s}^{*}\) that is perturbed by \(\Delta\phi\) from the true simulated value. For figure clarity, only the model trajectories corresponding to the perturbation of \(\{\phi_{4,s}\}\) are shown. Panel **B** shows a tornado plot of the predicted relative change in the best-fit parameter \(d_{A}^{max}\) for each additional simulated data point \(\{\phi_{i,s}\}_{i=1}^{7}\). The left side of the tornado plot, in blue, shows the relative change when the perturbed value \(\phi_{i,s}=\phi_{i,s}^{*}+\Delta\phi\) is larger than the simulated value \(\phi_{i,s}^{*}\). The right-hand side, in orange, shows the relative change in \(d_{A}^{max}\) when \(\phi_{i,s}=\phi_{i,s}^{*}+\Delta\phi\) is smaller than the simulated value \(\phi_{i,s}^{*}\).

#### Parameter continuation in a viral dynamics model

The standard viral dynamics model has been extensively used to understand the dynamics of viral infection in HIV-1 (Perelson, 2002). The model tracks the concentration of uninfected target cells, \(T(t)\), infected cells \(I(t)\), and free infectious virus \(V(t)\). Here, we follow Wu et al. (2008) and consider a model of HIV-1 dynamics where the target cells are CD4\({}^{+}\) T-cells. These cells are produced at a constant rate \(\lambda\) and cleared linearly at rate \(d\). Infection occurs at a rate \(\beta\) following contact between a target cell and an infectious viral particle, and infected cells are cleared at rate \(\delta\).
Upon lysis, infected cells release \(N\) viral particles into the circulation, and free virus is cleared at a constant rate \(c\). The viral dynamics model is given by \[\left.\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}T(t)& =\lambda-\beta T(t)V(t)-dT(t)\\ \frac{\mathrm{d}}{\mathrm{d}t}I(t)&=\beta T(t)V(t)- \delta I(t)\\ \frac{\mathrm{d}}{\mathrm{d}t}V(t)&=\delta NI(t)-cV (t).\end{aligned}\right\} \tag{13}\] It is common to set \(p=\delta N\) so that the final equation for \(V(t)\) becomes \[\frac{\mathrm{d}}{\mathrm{d}t}V(t)=pI(t)-cV(t),\] and the system (13) is equipped with initial conditions \(T(0)=T_{0},I(0)=I_{0},\) and \(V(0)=V_{0}\). In typical clinical studies, temporal data is only collected for circulating free virus, so the model output corresponding to the calibration measurements is \[y_{i}(\theta)=\log_{10}(V(t_{i},\theta)),\] where using \(\log_{10}\) measurements of viral load is standard in HIV studies. During antiretroviral therapy (ART), the viral load may fall below the limit of detection of standard assays. While there are a number of techniques to account for this censored data, we do not consider data collected during ART, so the objective function is given by the sum of squares error \[G_{HIV}(\theta,\phi)=\sqrt{\sum_{i=1}^{n}\left(\log_{10}(V(t_{i},\theta))-\log_{ 10}(\phi_{i})\right)^{2}}. \tag{14}\] Wu et al. (2008) characterized the identifiability of this model using a higher-order derivative method. They found that, if the initial conditions of the model \(T_{0},I_{0},\) and \(V_{0}\) are known, then all six model parameters \(\theta=\{\beta,d,\delta,c,N,\lambda\}\) are identifiable. To illustrate their results, they fixed \(\theta=(2\times 10^{-5},0.15,0.55,5.5,900,80)\) and simulated the ODE model (13). They sampled the simulated viral load at 37 distinct time points and added noise \(\epsilon_{i}\) sampled from a Gaussian distribution with \(\mu=0\) and \(\sigma^{2}=1\) (Wu et al., 2008). In Section 3.2, we demonstrated the effectiveness of our continuation technique by focusing on the objective function value and the computational efficiency in calculating the MLE. Here, we illustrate how model dynamics evolve during MLE continuation.
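To make the example concrete, here is a minimal sketch of the viral dynamics system (13) and the objective (14) using `scipy`. The parameter ordering, the solver tolerances, and the initial conditions are illustrative assumptions; only the model structure and the \(\log_{10}\) least-squares form come from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def viral_dynamics(t, state, lam, beta, d, delta, p, c):
    """Right-hand side of the standard HIV-1 viral dynamics model (13), with p = delta*N."""
    T, I, V = state
    dT = lam - beta * T * V - d * T
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

def log10_viral_load(times, theta, init=(1e4, 0.0, 50.0)):
    """Model output y_i(theta) = log10 V(t_i, theta); theta = (beta, d, delta, c, N, lam)."""
    beta, d, delta, c, N, lam = theta
    sol = solve_ivp(viral_dynamics, (0.0, max(times)), init, t_eval=times,
                    args=(lam, beta, d, delta, delta * N, c), rtol=1e-8, atol=1e-10)
    return np.log10(np.clip(sol.y[2], 1e-12, None))  # guard against log10(0)

def g_hiv(theta, times, phi):
    """Objective (14): root of the sum of squared log10 viral-load residuals."""
    residuals = log10_viral_load(times, theta) - np.log10(np.asarray(phi, dtype=float))
    return np.sqrt(np.sum(residuals**2))
```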
We follow Wu et al. (2008) but consider a smaller subset of calibration data collected at times \(t_{i}=\{0.4,1,8,14,20,36,46,58\}\). We add noise \(\epsilon_{i}^{0}\) sampled from a Gaussian distribution with \(\mu=0\) and \(\sigma^{2}=0.15\), so the initial calibration data is \[\phi_{i}^{0}=\log_{10}(V(t_{i},\theta))+\epsilon_{i}^{0}.\] We first fit the model to the simulated data \(\phi_{i}^{0}\) to obtain an initial MLE. We then generate 4 additional viral load time courses \(\{\phi_{i}^{j}\}_{j=1}^{4}\) by \[\phi_{i}^{j}=\phi_{i}^{0}+h_{step}|\epsilon_{i}^{j}|\] for \(\epsilon_{i}^{j}\) sampled from a Gaussian distribution with \(\mu=0\) and \(\sigma^{2}=1\) and \(h_{step}=\pm 0.1,\pm 0.2\). This collection of 4 data sets could feasibly represent experimental data measured from an increasingly large sample drawn from a population of HIV-1 positive individuals with population viral dynamic parameters given by \(\theta=(2\times 10^{-5},0.15,0.55,5.5,900,80)\). Here, we test the ability of our continuation technique to predict reasonable viral dynamic curves _without_ refitting the data. In Figure 3 A), we compute the predicted \(\Psi(\phi^{j})\) and plot the predicted model dynamics obtained from \(\Psi(\phi^{j})\) against the perturbed data \(\phi^{j}\). In Figure 3 B), we show the fit model predictions to the perturbed data. In each case, the viral dynamics show comparable model predictions for the fit and predicted model parameters, demonstrating that our continuation method can successfully predict reasonable model simulations. In fact, the Bayesian Information Criterion (Kass and Raftery, 1995) indicates no significant differences between the predicted and true MLE for all 4 data sets. However, Figure 3 C) shows the significant computational improvement obtained by only calculating the continuation step rather than fitting all model parameters at each step. The predicted model dynamics track the true viral load trajectory.

Figure 3: Comparison of predicted model fits to randomly perturbed data. Panels **A** and **B** show model trajectories plotted against the simulated experimental data perturbed by \(\phi_{i}^{j}=\phi_{i}^{0}+h_{step}|\epsilon_{i}^{j}|\). Panel **A** shows the predicted model fits to the experimental data, while **B** shows the model fits to data resulting from the true MLE. Panel **C** shows the number of objective value evaluations required to predict the MLE using this continuation technique or fit the model parameters to the perturbed data using the known parameters as a starting guess.

It is common to find numerous local minima of (14) when fitting (13) to simulated data. As measured by the value of the log-likelihood function or information criteria, these local minima can produce comparable fits to a given data set despite different dynamics. We perturbed the initial data set \(\phi^{0}\) by \[\log(\phi_{i}^{1})=\log(\phi_{i}^{0})+0.8\epsilon_{i}\] for \(\epsilon_{i}\) sampled from a Gaussian distribution with \(\mu=0\) and \(\sigma^{2}=1\). We fit this perturbed data from 10 distinct initial guesses using fmincon (MATLAB, 2017). These 10 starting initial guesses converged to two local minima. We denote the corresponding parameter estimates by \(\hat{\theta}_{1}\) and \(\hat{\theta}_{2}\) and plot the resulting model trajectories in Fig 4. These fits are indistinguishable by BIC and both appear to accurately describe the viral load data. Consequently, it is not obvious which of \(\hat{\theta}_{1}\) and \(\hat{\theta}_{2}\) best describes the data. However, it is reasonable to expect that the MLE should be robust to small perturbations of the calibration data.
We measure the robustness of each of these minima by calculating \(\|\mathrm{D}\Psi(\phi^{1})\|\) at \(\hat{\theta}_{1}\) and \(\hat{\theta}_{2}\). A smaller norm \(\|\mathrm{D}\Psi(\phi^{1})\|\) implies less sensitivity of the MLE to perturbations of the calibration data. For the example shown in Fig 4, there is a 16-fold difference in sensitivity to the calibration data. In this way, \(\mathrm{D}\Psi\) can be used to distinguish between otherwise similar fits. We suggest that, when choosing between multiple fits with similar BIC values, the parameter estimate with the smaller sensitivity to the data is a more robust, and thus preferable, fit.

Figure 4: Comparison of two potential fits to randomly perturbed viral dynamics models. Model trajectories obtained from two local minima from fitting 10 initial guesses to viral load data shown in black. Both trajectories accurately describe the viral load dynamics as evidenced by a small difference in BIC. However, the parameter estimate corresponding to the oscillatory trajectory is much more robust as measured by \(\|\mathrm{D}\Psi(\phi^{1})\|\).

## 4 Discussion

Parameter fitting is a crucial step when using mathematical models to predict novel treatment strategies, extrapolate from clinical trials, identify new drug targets or schedules, or propose non-pharmaceutical interventions (Brady and Enderling, 2019; Cassidy and Craig, 2019; Cassidy et al., 2020). However, parameter fitting can be difficult and computationally expensive. A large variety of fitting techniques have therefore been developed to calibrate model predictions against data (Horbelt et al., 2002; Kreutz et al., 2013; Lauss et al., 2018; Toni et al., 2009). Moreover, mathematical modeling is increasingly applied to understand emerging data and make real-time predictions. In this case, as new data emerges, the model parameters must be refit at a potential computational cost. Here, we developed a continuation-type technique to quantify how updates to experimental data will impact the MLE and predict the evolution of the MLE as a function of the experimental data used to calibrate the model. We used the implicit function theorem to calculate the trajectory of the MLE through parameter space. As the implicit function theorem only guarantees the existence of a differentiable trajectory \(\Psi\) through calibration data-parameter space, we utilized the first-order Taylor expansion of \(\Psi\) to extrapolate the evolution of the MLE due to changes in experimental data. We showed how this calculation is intrinsically linked to local sensitivity analysis and the curvature of the objective function. In two examples drawn from mathematical biology, we showed how this continuation technique can predict acceptable model fits to experimental data while significantly reducing computational overhead. In fact, in most applications, our continuation technique requires no dedicated computational overhead, as the Hessian of the objective function is calculated at each step when using common optimization algorithms, such as fmincon (MATLAB, 2017), and local sensitivity analysis is a standard step in model fitting. Perhaps more important than gains in computational efficiency, our approach explicitly identifies relationships between individual experimental measurements and parameter estimates. Our approach addresses similar questions to local sensitivity analysis from a distinct perspective.
Rather than using simulations to understand how small perturbations in model parameters from the best-fit parameters change model outputs as in standard sensitivity analysis, we quantify how changes in the training data impact the best-fit parameters and measure the sensitivity of the best-fit parameters to variations in this calibration data. As we showed in Section 3.2, this perspective can be used to suggest additional experimental measurements to increase confidence in model parameterization. Further, we showed how to use \(D\Psi\) to understand which experimental measurements are most informative for model parameterizations and identify redundant measurements that do not provide additional information for parameter estimation. Our technique is a type of local analysis that explores the functional dependence of the MLE on experimental data starting from a pre-identified MLE. Specifically, we assume that the Hessian of the objective function is invertible at the MLE and our results are necessarily local in parameter space as we are extrapolating from a pre-identified MLE. Nevertheless, our examples show the utility of our continuation approach for even large perturbations of the experimental data. Despite these limitations, we developed a continuation-type technique to predict the functional dependence of a MLE on the experimental data used to train a mathematical model. While we have focused on applications in mathematical biology, our approach is immediately portable to other domains. As our method is independent of the number of data points, our approach could be particularly useful in big-data applications. Ultimately, our results offer a unified approach to quantify the relationship between training data and best-fit model parameters and to leverage this understanding to suggest additional experiments to increase confidence in model parameterization. ## Data access statement The code and data underlying the results in this manuscript is available at [https://github.com/ttcassid/MLE_Continuation](https://github.com/ttcassid/MLE_Continuation).
2301.09935
Time-evolving Impact of Trees on Street Canyon Microclimate
Nowadays, cities are frequently exposed to heatwaves, worsening the outdoor thermal comfort and increasing cooling energy demand in summer. Urban forestry is seen as one of the viable and preferable solutions to combating extreme heat events and urban heat island (UHI) in times of climate change. While many cities have initiated tree-planting programmes in recent years, the evolving impact of trees on street microclimate, in a time span of up to several decades, remains unclear. We investigate the cooling effects of linden trees in five groups, i.e., 10-20, 20-30, 30-40, 40-60, and 60-100 years old. The leaf area index (LAI) and leaf area density (LAD) vary nonlinearly as the trees grow, peaking at different ages. Computational fluid dynamics (CFD) simulations solving microclimate are performed for an idealized street canyon with trees of varied age groups. Turbulent airflow, heat and moisture transport, shortwave and longwave radiation, shading and transpiration are fully coupled and solved in OpenFOAM. The meteorological data, including air temperature, wind speed, moisture, and shortwave radiation of the heatwave in Zurich (June 2019), are applied as boundary conditions. The results show that young trees in the age group of 10-20 years old provide little heat mitigation at the pedestrian level in an extreme heat event. Optimal heat mitigation by trees is observed for the group of 30-60 years old trees. Finally, the potential impact of growing trees as a heat mitigation measure on air ventilation is evaluated.
Haiwei Li, Yongling Zhao, Ronita Bardhan, Aytac Kubilay, Dominique Derome, Jan Carmeliet
2023-01-24T11:38:44Z
http://arxiv.org/abs/2301.09935v1
# Time-evolving Impact of Trees on Street Canyon Microclimate ###### Abstract Nowadays, cities are frequently exposed to heatwaves, worsening the outdoor thermal comfort and increasing cooling energy demand in summer. Urban forestry is seen as one of the viable and preferable solutions to combating extreme heat events and urban heat island (UHI) in times of climate change. While many cities have initiated tree-planting programmes in recent years, the evolving impact of trees on street microclimate, in a time span of up to several decades, remains unclear. We investigate the cooling effects of linden trees in five groups, i.e., 10-20, 20-30, 30-40, 40-60, and 60-100 years old. The leaf area index (LAI) and leaf area density (LAD) vary nonlinearly as the trees grow, peaking at different ages. Computational fluid dynamics (CFD) simulations solving microclimate are performed for an idealized street canyon with trees of varied age groups. Turbulent airflow, heat and moisture transport, shortwave and longwave radiation, shading and transpiration are fully coupled and solved in OpenFOAM. The meteorological data, including air temperature, wind speed, moisture, and shortwave radiation of the heatwave in Zurich (June 2019), are applied as boundary conditions. The results show that young trees in the age group of 10-20 years old provide little heat mitigation at the pedestrian level in an extreme heat event. Optimal heat mitigation by trees is observed for the group of 30-60 years old trees. Finally, the potential impact of growing trees as a heat mitigation measure on air ventilation is evaluated. ## 1 Introduction The percentage of inhabitants living in cities is expected to rise from about 50% in 2010 to nearly 70% in 2050 [1]. Increasing urbanization, population densification and climate change raise many problems for urban microclimate. Air temperature in urban areas is usually higher than the air temperature in surrounding rural areas, which is defined as the urban heat island (UHI) effect [2]. The rise in urban temperature increases cooling energy demand, the concentration of airborne pollutants, and heat-related illness and mortality. Urban forestry is recognized as one of the vital sustainable measures for UHI mitigation [3], as it has enormous environmental benefits, such as reducing air and surface temperature [4], reducing air pollution and flood damage [5, 6], buffering traffic noises [7], and increasing urban biodiversity [8]. Furthermore, the implementation of greenery also adds aesthetic advantages to cities and improves mental and physical health of social communities. Vegetation alters the thermal and wind condition of the urban microclimate through several coupled multi-physical mechanisms, including shading, radiation trapping, evapotranspiration, and aerodynamic influence. As trees grow with time, these mechanisms vary and play different roles as a consequence of the changing vegetation properties, for instance, tree size, height, leaf area density (LAD), and leaf area index (LAI). In terms of radiation and transpiration, the effects of trees often follow a diurnal cycle. During the daytime, the foliage of street trees provides shading and absorbs radiation but decreases the sky view factor of the canyon. At night, the trees may trap the longwave radiation under the tree canopy. LAD, LAI, and the dimensions of the tree foliage, especially the crown width, are significant factors that may influence the tree shades and the surrounding heat balance. 
A study using thermal satellite imagery in Terre Haute, Indiana, USA, showed that for every unit increase of LAI, the surface temperature is reduced by 1.2 \({}^{\circ}\)C [9]. Another study in the city of Dresden, Germany, showed that the surface temperature is reduced as the leaf area density (LAD) increases [10]. However, the study of Hien and Jusuf [11] shows that mature and bulky trees may lead to nighttime warming as the longwave radiation trapping is increased. High LAI and LAD values can also lead to a high transpirative cooling potential of trees [12]. A field study in Munich, Germany, observed three times higher transpiration in Tilia cordata trees compared to Robinia pseudoacacia [13]. The LAI of the former species is 30% higher than that of the latter. In terms of the aerodynamic influences, the shape, porosity, and drag of the trees determine the aerodynamic properties and the airflow around the trees, altering the flow structures and the turbulence mixing. The structure of the foliage increases the aerodynamic resistance and forms a thick stagnant layer around the leaves [14]. Plants with high foliage density are seen to reduce the air ventilation in street canyons and, therefore, affect the pollutant dispersion below the urban canopy [15, 16]. Understanding the effects of tree growth is essential for optimizing their environmental benefits. However, few studies investigate the time-evolving impacts of trees' properties during decade-long growth, such as the height, absolute and relative size, LAI and LAD of the foliage. This study models the transpirative cooling, shading and aerodynamic influences of two rows of linden trees in a street canyon during three days of the 2019 heatwave in Zurich. Six scenarios are simulated using the in-house code urbanMicroclimateFoam in OpenFOAM, where the growth of the trees' crown dimensions and height and the change of LAI and LAD are characterized by a series of tree age ranges, i.e., 0, 10-20, 20-30, 30-40, 40-60, and 60-100 years old.

## 2 Methodology

### Numerical method

The study is carried out with a numerical urban microclimate computational fluid dynamics (CFD) simulation model to obtain high-resolution aerodynamic and thermal data of the local microclimate in OpenFOAM. The airflow is solved in the air subdomain using the Reynolds-averaged Navier-Stokes (RANS) equations with the k-\(\varepsilon\) turbulence model, with the heat and moisture transport in air, taking into account buoyancy. The model also takes into account the shortwave and longwave radiative exchanges in a radiosity model based on the view factor approach [17]. In the solid subdomains that model urban materials, such as buildings and streets, the coupled heat and moisture transport (HAM) equations are solved. The solid subdomains are coupled with the air subdomain at the boundaries, enabling the modeling of the temperature and moisture storage and transport at the surfaces [18]. Trees are modeled as a porous medium in the air subdomain, using sink and source terms for momentum, moisture, temperature, and turbulence quantities. The LAD value of the trees defines the aerodynamic influence. The buoyancy is calculated with the Boussinesq approximation. The leaf energy balance is solved by discretizing the foliage of trees into small volumes, where the radiative, latent and sensible heat fluxes are calculated [12]. It is assumed that the stomata always have enough water uptake from the moist soil.
The stationary energy balance of a single leaf, with the heat fluxes at the leaf surface, is defined by equations (1-3): \[q_{rad,l}-q_{lat,l}-q_{sen,l}\ =0 \tag{1}\] \[q_{lat,l}=L_{\nu}\ h_{c,m}\ (p_{\nu,l}-p_{\nu}) \tag{2}\] \[q_{sen,l}=h_{c,h}\ (T_{l}-T) \tag{3}\] where \(q_{rad,l}\) (W/m\({}^{2}\)) represents the radiative flux, \(q_{lat,l}\) (W/m\({}^{2}\)) represents the latent heat flux and \(q_{sen,l}\) (W/m\({}^{2}\)) represents the sensible heat flux. \(L_{\nu}\) (2.5 \(\times\) 10\({}^{6}\) J/kg) is the constant latent heat of vaporization. \(h_{c,m}\) (s/m) is the convective mass transfer coefficient (CMTC), \(p_{\nu,l}\) (Pa) is the vapor pressure inside the leaf, and \(p_{\nu}\) (Pa) is the ambient vapor pressure. \(p_{\nu,l}\) is assumed to be the saturated vapor pressure at leaf temperature. \(h_{c,h}\) (W/m\({}^{2}\)K) is the convective heat transfer coefficient (CHTC) at the leaf surface, \(T_{l}\) (K) is the leaf surface temperature, and \(T\) (K) is the air temperature. Finally, the leaf energy balance can be combined and solved iteratively for \(T_{l}\) using equation (4): \[T_{l}\ =T+\frac{q_{rad,l}-q_{lat,l}}{h_{c,h}} \tag{4}\]

### Description of the case study

The case setup in the computational domain has a dimension of 480 m (streamwise, \(x\)) \(\times\) 480 m (spanwise, \(y\)) \(\times\) 122 m (height, \(z\)). A single stand-alone street canyon (aspect ratio = 1) with two identical buildings, a street, and two rows of trees is modeled, as shown in figure 1. A computational grid is generated following a sensitivity analysis in order to select the optimal cell refinement for a high-resolution and accurate simulation. The cells of the air subdomain are refined from 4 m at a distance of 60 m away from the buildings to under 0.4 m within a distance of 10 m from the buildings. The cells are further refined near and in the vegetation. Prism layers, smaller than 0.2 m, are added to the building and street surfaces. The total cell number in the air domain is approximately 1.9 million. The boundary conditions follow the actual meteorological data in the city of Zurich, Switzerland, during the heatwave on 25-27 June 2019. The hourly ambient temperature, humidity ratio, solar radiation parameters, and wind velocity magnitude data are used in the simulation. The incoming wind is in the \(x\) direction, and the wind direction is assumed to remain the same during the three days of simulation for simplicity. The profile of wind speed at the inlet follows a fully-developed atmospheric boundary layer, with the assumption of neutral stratification, as defined in equations (5-7): \[U(z)=\frac{u_{ABL}^{*}}{\kappa}\ln\left(\frac{z+z_{0}}{z_{0}}\right) \tag{5}\] \[k(z)=\frac{u_{ABL}^{*2}}{\sqrt{C_{\mu}}} \tag{6}\] \[\epsilon(z)=\frac{u_{ABL}^{*3}}{\kappa(z+z_{0})} \tag{7}\] where \(U(z)\) represents the horizontal wind speed at height \(z\), \(u_{ABL}^{*}\) represents the atmospheric boundary layer friction velocity, \(z_{0}\) represents the aerodynamic roughness length, which is chosen as 1 m, \(\kappa\) represents the von Karman constant and \(C_{\mu}\) is a model constant, which equals 0.09.
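As an illustration of equations (5-7), the short sketch below evaluates the neutral atmospheric-boundary-layer inlet profiles. The value of the von Kármán constant and the use of the canyon roof height as the reference height for backing out \(u_{ABL}^{*}\) are assumptions of the sketch; \(C_{\mu}=0.09\) and \(z_{0}=1\) m follow the text.

```python
import numpy as np

KAPPA = 0.41   # von Karman constant (assumed value)
C_MU = 0.09    # k-epsilon model constant, as in the text
Z0 = 1.0       # aerodynamic roughness length (m), as in the text

def friction_velocity(u_ref, z_ref, z0=Z0, kappa=KAPPA):
    """Back out u*_ABL from a reference wind speed measured at height z_ref."""
    return kappa * u_ref / np.log((z_ref + z0) / z0)

def abl_inlet_profiles(z, u_star, z0=Z0, kappa=KAPPA, c_mu=C_MU):
    """Neutral ABL inlet profiles U(z), k(z), epsilon(z) from equations (5-7)."""
    z = np.asarray(z, dtype=float)
    U = u_star / kappa * np.log((z + z0) / z0)
    k = u_star**2 / np.sqrt(c_mu) * np.ones_like(z)
    eps = u_star**3 / (kappa * (z + z0))
    return U, k, eps

# Example: profiles up to the 122 m domain height, assuming 1.45 m/s at the
# roof height of roughly 20.5 m quoted in the case description.
z = np.linspace(0.0, 122.0, 50)
U, k, eps = abl_inlet_profiles(z, friction_velocity(1.45, 20.48))
```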
The vegetation properties change as the trees grow. A common tree species in Europe, the small-leaved linden, is modeled, which is also named Tilia cordata Mill. As the linden trees grow, their height can reach 30-35 m. LAD and LAI are not seen to have a linear growth with the trees' age [19]. The height, crown diameter, LAD and LAI of the trees, among other properties, are provided in Table 1, following the literature [20, 21, 22, 23].

Table 1: Detailed tree vegetation properties in 6 scenarios of age groups. H\({}_{\text{tree}}\)/H\({}_{1}\) denotes the relative height of the trees to the height of the canyon, H\({}_{\text{tree}}\) denotes the height of the trees, D\({}_{\text{crown}}\) denotes the diameter of the crown, LAD is the leaf area density, LAI is the leaf area index, a\({}_{\text{tree}}\) is the albedo of the leaves, and r\({}_{\text{s,min}}\) is the minimal stomatal resistance of the leaves.

| Scenario | Age (years) | H\({}_{\text{tree}}\)/H\({}_{1}\) | H\({}_{\text{tree}}\) (m) | D\({}_{\text{crown}}\) (m) | LAD (m\({}^{2}\)/m\({}^{3}\)) | LAI (m\({}^{2}\)/m\({}^{2}\)) | a\({}_{\text{tree}}\) | r\({}_{\text{s,min}}\) (s/m) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 150 |
| 2 | 10-20 | 0.25 | 5.12 | 2.3 | 0.7 | 1.68 | 0.15 | 150 |
| 3 | 20-30 | 0.5 | 10.24 | 4.7 | 1.4 | 6.91 | 0.15 | 150 |
| 4 | 30-40 | 0.75 | 15.36 | 7 | 1.6 | 11.65 | 0.15 | 150 |
| 5 | 40-60 | 1 | 20.48 | 8.5 | 1.4 | 13.48 | 0.15 | 150 |
| 6 | 60-100 | 1.25 | 25.6 | 10 | 1.2 | 12.91 | 0.15 | 150 |

Figure 1: Case study setup in the CFD simulation. (a) and (b) demonstrate the implementation of street trees in the canyon. The y-axis direction is towards the north. The spacing of the trees (S\({}_{\text{tree}}\)) is 10.24 m. The thickness of the pavement (d\({}_{\text{pavement}}\)) and the thickness of the soil (d\({}_{\text{soil}}\)) are 0.1 m and 1 m under the trees. The surface roughness length (z\({}_{0}\)) is 1 m, which corresponds to a uniformly-built town. The horizontal wind velocity at approximate roof height (z\({}_{\text{ref}}\)) is 1.45 m/s. (c) demonstrates the sizes of trees in 6 different scenarios.

## 3 Results and discussion

The spatial-averaged air temperature and humidity ratio at pedestrian level (1.8 m height) in the street canyon are shown in figure 2. The scenario without trees has the highest air temperature at all times. The peak temperature is observed at 15:00 on the second day of the heatwave. With the implementation of street trees, the air temperature is overall largely reduced, especially during the daytime, when the shading and transpirative cooling collectively provide the cooling effects to the canyon. The highest air temperature reduction reaches 4 \({}^{\circ}\)C at the location under the trees, and 1.1 \({}^{\circ}\)C for the pedestrian-level spatial-averaged reduction, at the peak temperature, when the highest trees (60-100 years old) are present. For cases with trees, the spatial-averaged humidity ratio is also higher than in the case without trees during the daytime due to the transpiration of leaves. Figure 3 presents the spatial profiles of the Universal Thermal Climate Index (UTCI, \({}^{\circ}\)C) on two vertical spanwise planes (\(y-z\) planes) in the canyon at the peak temperature time, 15:00 on the second day of the heatwave, where figure 3(a\({}_{1}\)-f\({}_{1}\)) are obtained in the center of the trees near the windward walls (plane 1) and figure 3(a\({}_{2}\)-f\({}_{2}\)) are in the center of the canyon (plane 2).
UTCI is an assessment of the thermophysiological effect of the atmospheric environment on the human body, which is calculated based on the metabolic rate, clothing insulation, air temperature, mean radiant temperature, air speed, and relative humidity [24]. The mean radiant temperature takes into account the shortwave and longwave radiation received by the human body. The results show that trees of different ages bring a distinctly different level of cooling and influence on UTCI around the trees. In plane 1, at the locations above the trees, the UTCI in the youngest-trees scenario (figure 3b\({}_{1}\)) is slightly higher than the UTCI in the no-tree scenario (figure 3a\({}_{1}\)). The UTCI reduction is also seen around the trees as the trees grow up. For plane 1, UTCI is overall reduced at the locations around and under the trees. On plane 2 (figure 3a\({}_{2}\)-f\({}_{2}\)), the UTCI reduction at the pedestrian level is limited for tree age groups 10-20 and 20-30 (figure 3b\({}_{2}\)-c\({}_{2}\)), and the reduction becomes visible as the trees are older than 30 years old (figure 3d\({}_{2}\)-f\({}_{2}\)).

Figure 2: Pedestrian level spatial-averaged (a) air temperature (\(T\), \({}^{\circ}\)C) and (b) humidity ratio (\(w\), kg/kg) during three days of the heatwave. The peak temperature is observed at 15:00 on the second day of the heatwave. Six different lines represent six scenarios of tree age, from 0 to 100 years old.

The presence of street trees may also influence the air ventilation of the street canyon. Figure 4 shows the air ventilation rate at the peak temperature hour, 15:00, on the second day of the heatwave for all the age group scenarios. The analysis of the ventilation rate at the canyon openings has been used in the literature [14, 25], where the air ventilation rate consists of air removal and air entrainment. It is seen in figure 4 that the air ventilation of the street canyon gets smaller as the trees grow. The larger foliage of trees causes a larger aerodynamic resistance in the canyon. The reduction in air removal is particularly higher than the reduction of air entrainment. When the largest trees are present, the ventilation can be reduced to less than half of the case without trees.

Figure 3: The Universal Thermal Climate Index (UTCI) contour distribution on vertical spanwise (\(y-z\)) planes, (a\({}_{1}\)-f\({}_{1}\)) plane 1, the center of the trees near the windward wall, and (a\({}_{2}\)-f\({}_{2}\)) plane 2, the center of the canyon, at the peak temperature time, 15:00 on the second day of the heatwave. The dashed line represents the pedestrian level height at 1.8 m.

Figure 4: Normalized air ventilation rate from all openings (air removal and air entrainment) at the peak temperature time, 15:00 on the second day of the heatwave. The normalized air ventilation rate is calculated as the air ventilation rate normalized by \(\mathrm{H_{1}}\times\mathrm{H_{2}}\times\mathrm{U(z_{ref})}\), following the literature [14, 25].
Our results show that: * As the trees grow taller and larger, the LAI and LAD are changed, and the cooling effects of the trees become more pronounced. * Overall, the maximum air temperature reduction can reach 4 \({}^{\circ}\)C directly under the trees and 1.1 \({}^{\circ}\)C for the whole pedestrian-level spatial-averaged temperature reduction. * At the pedestrian level, trees under 30 years old provide overall very limited cooling potential, while the shading and transpirative cooling effects of trees only become beneficial after reaching 30 years old. * The presence of the largest trees, e.g., aged 60-100, significantly reduces the air ventilation of the canyon, which may cause adverse effects on pollutant dispersion and heat removal from the canyon by the wind. * Urban planners should carefully select and manage street trees, in terms of their size, height, LAI, and LAD, to benefit the most from the trees. Proper simulations may carry out when planning the implementation of trees. The simulation is simplified to center on the time-evolving impact of trees. A single street canyon morphology with an aspect ratio of 1 is studied. Moreover, the boundary condition of wind is simplified to flow always from west to east. In future work, we will investigate the time-evolving impact of trees in a realistic neighborhood model. The impacts of different tree properties will be studied systematically.
2306.13712
Neutrino-Driven Winds in Three-Dimensional Core-Collapse Supernova Simulations
In this paper, we analyze the neutrino-driven winds that emerge in twelve unprecedentedly long-duration 3D core-collapse supernova simulations done using the code Fornax. The twelve models cover progenitors with ZAMS mass between 9 and 60 solar masses. In all our models, we see transonic outflows that are at least two times as fast as the surrounding ejecta and that originate generically from a PNS surface atmosphere that is turbulent and rotating. We find that winds are common features of 3D simulations, even if there is anisotropic early fallback. We find that the basic dynamical properties of 3D winds behave qualitatively similarly to those inferred in the past using simpler 1D models, but that the shape of the emergent wind can be deformed, very aspherical, and channeled by its environment. The thermal properties of winds for less massive progenitors very approximately recapitulate the 1D stationary solutions, while for more massive progenitors they deviate significantly due to aspherical fallback. The $Y_e$ temporal evolution in winds is stochastic, and there can be some neutron-rich phases. Though no strong r-process is seen in any model, a weak r-process can be produced and isotopes up to $^{90}$Zr are synthesized in some models. Finally, we find that there is at most a few percent of a solar mass in the integrated wind component, while the energy carried by the wind itself can be as much as 10-20% of the total explosion energy.
Tianshu Wang, Adam Burrows
2023-06-23T18:00:06Z
http://arxiv.org/abs/2306.13712v2
# Neutrino-Driven Winds in Three-Dimensional Core-Collapse Supernova Simulations

###### Abstract

In this paper, we analyze the neutrino-driven winds that emerge in twelve unprecedentedly long-duration 3D core-collapse supernova simulations done using the code Fornax. The twelve models cover progenitors with ZAMS mass between 9 and 60 solar masses. In all our models, we see transonic outflows that are at least two times as fast as the surrounding ejecta and that originate generically from a PNS surface atmosphere that is turbulent and rotating. We find that winds are common features of 3D simulations, even if there is anisotropic early infall. We find that the basic dynamical properties of 3D winds behave qualitatively similarly to those inferred in the past using simpler 1D models, but that the shape of the emergent wind can be deformed, very aspherical, and channeled by its environment. The thermal properties of winds for less massive progenitors very approximately recapitulate the 1D stationary solutions, while for more massive progenitors they deviate significantly due to aspherical accretion. The \(Y_{e}\) temporal evolution in winds is stochastic, and there can be some neutron-rich phases. Though no strong r-process is seen in any model, a weak r-process can be produced and isotopes up to \({}^{90}\)Zr are synthesized in some models. Finally, we find that there is at most a few percent of a solar mass in the integrated wind component, while the energy carried by the wind itself can be as much as \(10-20\%\) of the total explosion energy.

Keywords: Supernova, Neutrino-Driven Wind, R-process

Tianshu Wang, Adam Burrows

## 1 Introduction

The successful explosion of a core-collapse supernova (CCSN) sweeps away a large fraction of the infalling matter enveloping the proto-neutron star (PNS) (Muller et al., 2017; Stockinger et al., 2020; Burrows et al., 2020; Bollig et al., 2021). Therefore, the ram pressure exterior to the PNS decreases progressively with time. This enables the post-explosion emergence of a neutrino-driven wind that expands into the outer layers and eventually catches up with the primary supernova ejecta. Similar to the original model for the Parker solar wind (Parker, 1958) and to analytic predictions by Duncan et al. (1986), the atmosphere of the PNS becomes unstable when its bounding pressure subsides to a level (dependent upon the coeval core neutrino luminosities) sufficient to generate such a secondary outflow powered by neutrino heating via charged-current absorption in the PNS atmosphere (Burrows, 1987; Burrows et al., 1995). The wind accelerates and becomes transonic, but when it emerges its detailed properties are functions of model and progenitor specifics. The neutrino-driven winds have usually been studied using stationary transonic wind models in spherical symmetry, given a PNS mass and neutrino luminosity (Duncan et al., 1986; Qian and Woosley, 1996; Otsuki et al., 2000; Wanajo et al., 2001; Thompson et al., 2001). These models are found to agree approximately with long-term one-dimensional core-collapse simulations (Hudepohl et al., 2010; Fischer et al., 2012; Roberts, 2012). Nevertheless, realistic multidimensional CCSN simulations suggest more complex behavior. In 2D, Navo et al. (2022) see cone-like winds only towards the southern pole in one of their simulations. In 3D, while Stockinger et al. (2020) witness spherical winds for relatively low-mass ZAMS progenitors (e.g., 8.8, 9.0 and 9.6 solar masses), Muller et al.
(2017) (18 solar masses) and Bollig et al. (2021) (17 solar masses), looking for spherically-symmetric outflows, fail to identify them. We find that this is not due to the absence of the wind, but due to the fact that 1) the wind can emerge seconds after bounce, beyond the simulation time of most researchers, and 2) asymmetrical post-explosion accretion or fallback (later referred to collectively as infall) can interfere with the wind's emergence in some patches of solid-angle around the PNS. This does not mean that the PNS wind does not emerge; rather, it emerges aspherically, and this is what we see universally using our long-term detailed 3D simulations. This paper presents and details these findings. Long-lasting post-explosion accretion is commonly witnessed in many different CCSN models for a variety of progenitors (Muller et al., 2017; Burrows et al., 2020; Bollig et al., 2021). However, the overall consequences of such long-term infall for PNS winds have to date been unclear. In addition to breaking the spherical symmetry and obstructing some directions, these downflows onto the PNS boost the emergent neutrino luminosity, thus increasing the wind strength along other directions. They also can slightly increase the PNS mass, and some of this mass is later ejected in the wind. But the infalling matter can also interact with the expanding wind before the latter reaches a sonic point, thereby thwarting the production of a classic transonic wind. Therefore, it is sometimes hard to determine what the net effect of the asymmetric accretion may be on the total wind intensity without performing time-consuming long-term three-dimensional CCSN simulations. The electron fraction (\(Y_{e}\)) of the wind material depends on the interaction history with electron-type neutrinos (\(\nu_{e}\)) and their anti-particles (\(\bar{\nu}_{e}\)). If the wind is neutron-rich (\(Y_{e}<0.5\)), it is possible that the rapid neutron-capture process (r-process) can take place. In earlier work (Meyer et al., 1992; Woosley et al., 1994), the wind was thought to have a very high entropy, above 300 \(k_{b}\) per nucleon (\(k_{b}\) is the Boltzmann constant), and a strong r-process could take place. However, such high entropies were not produced in subsequent investigations (Takahashi et al., 1994; Qian and Woosley, 1996; Otsuki et al., 2000; Thompson et al., 2001). Later work showed that PNS winds were able to produce at most only some of the lightest r-process nuclei (Wanajo, 2013; Arcones and Thielemann, 2013; Wanajo, 2023), unless other mechanisms are introduced to increase the entropy (e.g., Nevins and Roberts (2023)) or decrease the electron fraction \(Y_{e}\) (e.g., Roberts (2012)). All these studies were done in one dimension, and multi-dimensional effects were ignored. We note that for the same explosion energy, an asymmetrical explosion can manifest faster expansion speeds along the direction of explosion and that matter from slightly deeper PNS layers can be ejected.
We also determine the morphology of the wind regions and the angular mass distribution of the wind ejecta. This paper is arranged as follows: In Section 2, we describe the methods used in this work and summarize the general features of all 12 simulations. In Section 3.2, we prove the existence of winds in multiple ways. In Section 3.3, we study the time-dependent behavior of the winds, and in Section 3.4 we summarize the nucleosynthesis results. In Section 3.5, we discuss the morphology and direction of the complicated wind structures. Finally, in Section 4, we summarize our results and provide further insights into the 3D PNS wind phenomenon in the core-collapse supernova context.

## 2 Method

For this study, we have used simulations generated by the multi-group multi-dimensional radiation/hydrodynamics code Fornax (Skinner et al., 2019; Vartanyan et al., 2019; Burrows et al., 2019, 2020). The ZAMS masses of the 12 progenitors are 9 (two models), 9.25, 9.5, 11, 15.01, 17, 18, 20, 23, 25, and 60 \(M_{\odot}\). The 9, 9.25, 9.5, 11 and 60 \(M_{\odot}\) progenitors come from Sukhbold et al. (2016), while all the others come from Sukhbold et al. (2018). The initial progenitor density profiles and the evolution of the mean shock radii are shown in Figure 1, while some basic properties of the models are summarized in Table 1. None of these models has initial perturbations except for 9(a), which has an initial velocity perturbation between 200-1000 km with \(l=10\) and \(v_{max}=100\) km/s. We use the SFHo equation of state (EOS) of Steiner et al. (2013), consistent with most known laboratory nuclear physics constraints (Tews et al., 2017). All of the models are run with a 1024\(\times\)128\(\times\)256 grid, with outer boundary radii varying from 30000 to 10000 km. We use 12 logarithmically-distributed energy groups for each of our three neutrino species (electron-type, anti-electron-type, and the rest bundled as "\(\mu\)-type"). To follow the movement of wind material, we add 270,000-300,000 post-processed tracer particles to each simulation. Tracers are passive mass elements that are advected according to the fluid velocity. We use the backward integration method, in which the tracers start at their final positions and the equations of motion are integrated backward in time. Sieverding et al. (2022) show that this method leads to more accurate tracer trajectories and thermal histories after the tracers leave the chaotic convection region near the proto-neutron star (PNS). This advantage is essential for studying winds, because wind matter emerges from and through such chaotic regions; using forward integration could lead to wrong ejection times (and, thus, to wrong mass loss rates). The fluid velocity fields of the simulations are saved every millisecond, while the hydrodynamic timesteps of the simulations are around one microsecond. To get better time resolution and to avoid allowing the tracer particles to bypass multiple grid cells in a timestep, we divide the 1-ms time interval into \(N_{sub}\) equal-length substeps. The velocity field is linearly interpolated in time and space to the position of the tracer particle at each substep. We use an adaptive \(N_{sub}\) to ensure that each tracer moves no more than half the grid cell size along each direction per substep. This leads to \(N_{sub}>100\) when the tracer occupies the chaotic convective regions and \(N_{sub}\sim 10\) when the tracer is more than a few hundreds of kilometers above the PNS.
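A schematic of this adaptive-substep tracer update is sketched below. The velocity interpolator, the grid-spacing lookup, and the sign convention for backward integration are placeholders; the sketch only illustrates how the substep count could be chosen so that a tracer moves less than half a cell per substep within each 1-ms snapshot interval.

```python
import numpy as np

def advect_tracer(pos, t, dt_snapshot, velocity_at, cell_size_at, backward=True):
    """Advance one tracer across a single snapshot interval with adaptive substeps.

    velocity_at(pos, t) -> velocity vector interpolated in space and time (placeholder)
    cell_size_at(pos)   -> local grid spacing per direction (placeholder)
    backward=True       -> integrate the equations of motion backward in time
    """
    sign = -1.0 if backward else 1.0
    v = np.asarray(velocity_at(pos, t))
    dx = np.asarray(cell_size_at(pos))

    # Choose N_sub so that |v| * dt stays below half a cell size in every direction.
    n_sub = max(1, int(np.ceil(2.0 * dt_snapshot * np.max(np.abs(v) / dx))))
    dt = dt_snapshot / n_sub

    for k in range(n_sub):
        v = np.asarray(velocity_at(pos, t + sign * k * dt))
        pos = pos + sign * dt * v
        # A full implementation would re-adapt n_sub here as the tracer crosses refinement levels.
    return pos, n_sub
```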
This adaptive method saves a lot of time taken by the post-processed tracers, because in our long-term simulations the tracers spend most of the time at large radii moving in an almost homologous way. We do not make any assumption in advance concerning the distribution of wind matter, so the tracers are placed logarithmically along the r-direction above 1000 km and uniformly along the \(\theta\)- and \(\phi\)-directions at the end of each simulation1. Footnote 1: We can do this because we are integrating backward. The nucleosynthesis calculations for the wind matter are done using SkyNet (Lippuner & Roberts, 2017), including 1540 isotopes and the JINA Reaclib (Cyburt et al., 2010) database. We include neutrino interactions with protons and neutrons, but reactions for the \(\nu-\)process are not included. The detailed neutrino spectra extracted from Fornax are then fitted to Fermi-Dirac functions whose parameters are fed into SkyNet, which requires spectra to be in this simplified format. The nuclear statistical equilibrium (NSE) temperature is set at 0.6 MeV (\(\sim\)7 GK). SkyNet will switch to the NSE evolution mode if the temperature is above this threshold and the strong interaction timescale is shorter than the timescale of density changes (Lippuner & Roberts, 2017). We use the \(Y_{e}\) calculated by Fornax when the temperature is above this threshold, because the neutrino spectra can actually be non-thermal. \(Y_{e}\) evolution below this temperature is handled by SkyNet so that the \(\nu p\)-process is included. ## 3 Results ### Definition of "Wind" Most previous studies on neutrino-driven winds were done in spherical symmetry (Duncan et al., 1986; Qian & Woosley, 1996; Otsuki et al., 2000; Wanajo et al., 2001; Thompson et al., 2001; Hudepohl et al., 2010; Fischer et al., 2012; Roberts, 2012; Wanajo, 2013; Nevins & Roberts, 2023) and didn't have multi-dimensional effects. Examples of these 3D effects include convection and turbulent velocity fields, the simultaneous explosion and accretion, and the rotation of the proto-neutron star. These multi-dimensional effects introduce additional complexity and new features into the picture of PNS winds. Therefore, it is essential to clarify the definitions and notations used in this work, since this is the first detailed study of winds using state-of-the-art 3D simulations. In this paper, the neutrino-driven wind is defined as a transonic outflow originating from below 80 km, powered by neutrino heating. This definition is based upon the expected dynamical properties of neutrino-driven winds. We have varied the 80-km threshold between 30 and 100 km, and the associated outflows do not much vary. A main consideration is the wind start time, because we wait until the PNS radius falls below this threshold. Compared with the winds traditionally studied in 1D, we don't require the wind to be spherical or to be in a (quasi-)steady state. The early explosive ejecta lifted by the shock wave (and also driven by neutrino heating) has also experienced a transonic phase, but such ejecta are not considered winds because their interaction with infalling matter leads to very different dynamical and thermal histories. Therefore, it is important to distinguish these two types of outflow. One major difference is the formation time, as the early material is ejected just after the explosion directly by the supernova blast wave, while the winds emerge later on. This time difference is discussed in detail in Section 3.3. 
Another major difference between winds and early ejecta is the velocity: winds are usually at least two times faster than the earlier ejecta. This results in wind-termination shocks (also called secondary shocks) behind the explosive shock wave, which can be seen in Figure 2. In this figure, we show the radial velocity fields of the 17 \(M_{\odot}\) and the 23 \(M_{\odot}\) 3D models. In both models, there are regions with significantly higher velocities than found in the surrounding ejecta, and the interface regions are wind-termination shocks. Such high-velocity regions and wind-termination shocks are general features in our simulations, which means that the wind phenomenon is a common aspect of long-term CCSN simulations. However, there is also some variation. First, the termination velocities in different models vary by a factor of three, ranging from around 15000 km s\({}^{-1}\) to above 45000 km s\({}^{-1}\). Even the slowest wind termination velocity is still about two times faster than the surrounding ejecta. Second, the sizes and shapes of the wind regions vary significantly. The 17 \(M_{\odot}\) model is one of our most energetic explosions, and the explosion sweeps away the envelope materials more easily. Thus, its winds are less influenced by accretion and they form a cone-like region along the explosion direction, as indicated in Figure 2. However, there is more infall in other, weaker explosions, and the winds can become multiple thin, twisted tubes, like those in the 23 \(M_{\odot}\) model. The shapes we describe here are on larger scales (\(\sim\)10000 km) and are the combined result of winds, infall, and the more slowly-moving ejecta. Further discussion of the morphology of the winds can be found in Section 3.5.

### Existence

In addition to Figure 2, in this subsection we provide several different methods to demonstrate the existence of the winds. The simplest method is to look at the angle-averaged behavior of the model. Figure 3 depicts the radii of constant mass coordinate layers as a function of time. In the 9, 9.25, 11 and 23 \(M_{\odot}\) models, it is clear that the individual layers fall onto the PNS and reside there for a while, but then after a delay move out. This indicates that there are indeed later outflows from the PNS. Models not manifesting such clear behavior nevertheless experience later-time outflows from the PNS; however, these are weaker and can't be seen so easily in the angle-averaged plots. Even so, the entropy background on these plots indicates that all models have high-entropy outflows from the PNS; this is a clear indication of the universal presence of winds. A more rigorous way to demonstrate the presence of winds is to follow the matter motion using Lagrangian tracers. We select a "wind tracer" based on the following criteria:

1. The final radius of the tracer is at least 3000 km.
2. The tracer has reached a radius below 80 km before its ejection.
3. The maximum Mach number of the tracer is greater than 1.
4. The minimum outer shock radius when the specific tracer reaches its smallest radius is at least 3000 km.

The first three conditions ensure that the tracer represents matter in the transonic outflows originating from the PNS. We assume and find that the main acceleration phase of the winds occurs between 100 and 3000 km. This proves to be a good assumption when we check the hydrodynamic histories of the selected tracers. We varied the values used in this assumption and obtained similar results.
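For concreteness, these four criteria can be applied to each tracer history with a few lines of code. The sketch below is illustrative only; the array and helper names are assumptions and this is not the actual post-processing pipeline:

```python
import numpy as np

def is_wind_tracer(r_km, mach, min_shock_radius_km_at, r_launch=80.0, r_far=3000.0):
    """Apply the four wind-tracer criteria to a single tracer history.

    r_km                   : tracer radius versus time (km), ordered forward in time
    mach                   : tracer Mach number versus time
    min_shock_radius_km_at : callable giving the minimum outer shock radius (km)
                             at a given time index (assumed helper)
    """
    r = np.asarray(r_km)
    i_deepest = int(np.argmin(r))                       # time of closest approach to the PNS
    cond1 = r[-1] >= r_far                              # 1) final radius at least 3000 km
    cond2 = r[i_deepest] <= r_launch                    # 2) dipped below 80 km before ejection
    cond3 = float(np.max(mach)) > 1.0                   # 3) becomes supersonic (transonic outflow)
    cond4 = min_shock_radius_km_at(i_deepest) >= r_far  # 4) shock already beyond 3000 km at launch
    return cond1 and cond2 and cond3 and cond4
```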
The last condition ensures that the acceleration is not caused by the precursor explosive shock wave itself. As shown in Section 3.3, this condition is important to technically distinguish the early ejecta and the wind. However, we note that this condition may miss the very earliest phase of the wind. About 5-10% (15k-30k) of the tracers we typically employ in each simulation satisfy the above wind conditions. This indicates a wind mass of around a few percent of a solar mass, while the energy carried by this component can be 10-20% of the total supernova explosion energy. Figure 4 depicts the relation between velocity and radius for the selected tracers in models 9(b), 9.25, 9.5, 11, 17, and 23 \(M_{\odot}\). The winds are launched from small radii below 80 km and are accelerated to 15000-50000 km s\({}^{-1}\) at \(\sim\)3000 km. There is a second branch around zero velocity in some models; this is because some of the wind matter hits the accreta and stays there for a short period of time. These fluid elements are then pushed away by follow-on winds. Models that explode more weakly seem to have a stronger second branch. Figure 5 shows the wind Mach number as a function of radius, and all models clearly show the transonic behavior characteristic of winds. Apart from the common transonic features, there are certainly differences between models. First, the wind termination velocities vary from 15000 to above 40000 km s\({}^{-1}\). The termination velocity depends not only on the wind strength, but also on the interactions with other debris and continuing infall. Some wind matter exits the acceleration phase earlier because it hits the late-time accreta or more slowly-moving ejecta, and experiences lower termination velocities. This occurs more often in models with stronger accretion and weaker explosions, such as the 23 \(M_{\odot}\) model. Second, the widths of the bands in Figures 4 and 5 vary from model to model. This width reflects the velocity variation in wind matter. The turbulent velocity field at smaller radii sets the initial velocity of the wind almost randomly, which sets the width of the band at small radii. Driven by neutrino heating, the turbulent Mach number in this region varies between 0.1 and 1 for different models, and the turbulence extends from the PNS surface to a radius of up to 100 km. At larger radii, interaction with accreta or slowly-moving blast ejecta also contributes to the depicted variation in the wind velocity. As a result, more massive progenitor models (or models with higher compactness) tend to have greater wind velocity variation because they experience higher initial accretion rates (\(\dot{M}\)) and stronger turbulence.

### Evolution

In this subsection, we study the temporal evolution of the winds. The left two panels in Figure 6 show the PNS mass and radius as a function of time for all twelve models studied in this paper. The PNS radii of different models decrease at a similar rate, which slows down when the radii are below 20 km, and the accumulated PNS masses stop changing in most models before 1.5 seconds after bounce. Therefore, PNS properties, other than the emergent neutrino luminosities, are only secondary factors in the temporal evolution of winds. The right two panels in Figure 6 show the neutrino luminosity (\(\nu_{e}+\bar{\nu}_{e}\)) measured at 10000 km and the mass flow rate of the inflows measured at 100 km. Note that this is not the net accretion rate (which is the mass flow rate difference between inflow and outflow).
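For reference, the inflow/outflow decomposition at a fixed radius, and the \(\dot{M}_{wind}\)-\(L\) scaling test used later in this subsection, can be sketched as follows. This is an illustrative Python sketch assuming shell data and time series are available as plain arrays; it is not the actual Fornax diagnostic code.

```python
import numpy as np

def shell_mass_flow_rates(rho, v_r, theta, phi, r_shell):
    """Inflow, outflow, and net mass flow rates (g/s) through a spherical shell.

    rho, v_r : density (g cm^-3) and radial velocity (cm s^-1) interpolated onto the
               shell of radius r_shell (cm), arrays of shape (n_theta, n_phi)
    theta    : polar-angle cell centers (rad); phi : azimuthal cell centers (rad)
    """
    dtheta = np.gradient(theta)                                 # approximate angular cell widths
    dphi = np.gradient(phi)
    dA = r_shell**2 * np.outer(np.sin(theta) * dtheta, dphi)    # surface-area elements
    flux = rho * v_r * dA                                       # per-cell mass flux; negative means infall
    mdot_in = -flux[flux < 0.0].sum()                           # inflow rate, quoted as a positive number
    mdot_out = flux[flux > 0.0].sum()                           # outflow rate
    return mdot_in, mdot_out, mdot_out - mdot_in

def fit_mdot_luminosity_slope(L_nu, mdot_wind):
    """Log-log slope of the wind mass-loss rate against the nu_e + nubar_e luminosity;
    a slope near 2.5 corresponds to the scaling expected from 1D steady-wind theory."""
    slope, intercept = np.polyfit(np.log10(L_nu), np.log10(mdot_wind), 1)
    return slope, intercept
```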
The inflow rates in less massive progenitor models decrease faster, while infall in more massive models generally lasts longer. In the initially non-rotating context, the luminosity and mass accretion rate are the main factors that influence the evolution of winds. The left panel of Figure 7 shows the angle-averaged temperatures of the 9(b), 11, 17 and the 23 \(M_{\odot}\) models at 1.0, 1.5, and 2.0 seconds after bounce. At early times, the temperature profile of the PNS has a peak around 10 km. This peak moves inward and diminishes with time. Eventually, the thermal profile becomes similar to the functional form often used to describe the PNS (Kaplan et al., 2014). However, achieving this state takes more than a few seconds, and the early phase of the neutrino-driven wind is certainly influenced by the presence of this thermal spike. The right panel of Figure 7 depicts the angle-averaged net neutrino heating rate profiles of the same models. The gain region (where matter gains energy from neutrinos) can be clearly seen. In general, the inner boundary of the gain region is about two times the PNS radius, and moves inward as the PNS shrinks. This means that the origination points of the winds also move inward. However, the effect of the velocity spread of the wind matter shown in Figure 4 and 5 is more pronounced than this start-point variation. Moreover, on the velocity-radius and Ma-radius plots we don't see a clear temporal change in the band widths and magnitude. This means we can assume that the winds experience roughly the same acceleration phase in the first few seconds. Therefore, the dynamical history of the wind matter is determined only by the wind-termination time (or equivalently, the wind termination radius). Figure 8 shows the temporal evolution of the wind mass flow rates and the relation between the wind mass flow rate and the neutrino luminosity for all the twelve models. The wind mass flow rate is measured at 3000 km using the wind tracers. Despite the very different accretion rates (see Figure 6), wind mass flux in all models seems to decay at a roughly similar rate. The peak mass flux of the winds is between \(5\times 10^{-3}\) and \(5\times 10^{-2}M_{\odot}\)s\({}^{-1}\). In addition, this mass flux in all models follows the \(\dot{M}_{wind}\propto L^{2.5}\) relation (black dashed lines). This \(\dot{M}_{wind}\propto L^{2.5}\) power law relation is predicted by the spherical stationary wind solutions (Duncan et al., 1986; Burrows, 1987; Qian & Woosley, 1996; Thompson et al., 2001) (assuming "\(L\propto T^{4}\)," see these papers). However, the predicted dependence on the PNS mass (\(\dot{M}_{wind}\propto M_{PNS}^{-2}\)) is not seen here, since models with higher PNS mass (like the 17 \(M_{\odot}\) model) don't show weaker winds. It is possible that models with higher PNS masses also have longer-lasting accretion which provides more matter into the wind region and thereby enhances the wind mass loss flux. It is worth mentioning that the wind mass flow rate in some models is only a small fraction of the total mass outflow rate. Figure 9 depicts the evolution of the entropy of all matter ejected from below 100 km. Matter included here encompasses both the early ejecta and the winds. In the 9(b) \(M_{\odot}\) model, there is a clear transition between two phases of entropy evolution. The first phase in which the entropy grows faster is associated with the early ejecta, while the second phase tracks the predictions of 1D wind solutions (e.g., Figure 14 in Wanajo (2023)). 
Other models also show the two-phase structure, but the entropy evolution in the second phase can be different. The vertical white dashed line indicates the time when the minimum shock radius has reached 3000 km (the fourth wind condition we use in Section 3.2), and matter to the right of this vertical line is identified with winds. We can see that the time cut we use ensures that the selected tracers represent the wind instead of the early ejecta, but this is a conservative criterion and the early wind phase might be missed in some models. In Figure 9, we see that the wind entropy in the less massive models increases with time. This is because accretion in less massive models terminates earlier, resulting in lower densities in the wind region. However, more massive models generally have longer-lasting late-time accretion, and in these cases the density in the wind regions doesn't drop as quickly, leading to more complex behavior in the evolution of the entropy. None of our simulations has a wind entropy above 80 \(k_{b}\) per baryon, which is significantly lower than what is required for the rapid neutron capture process (r-process). The evolution of the electron fraction \(Y_{e}\) is shown in Figure 10. Similar to Figure 9, the matter upon which we focus here includes both the early ejecta and the winds, and they are separated by the vertical white dashed line. The \(Y_{e}\) is measured when the material freezes out from nuclear statistical equilibrium (NSE), i.e., when the temperature drops below 0.6 MeV (\(\sim\)7 GK). Most wind matter is proton-rich, but during some time periods it can be a bit neutron-rich. We don't yet find a clear progenitor-dependent trend in the \(Y_{e}\) temporal evolution. However, it is interesting that the neutron-rich phases in the 11 and 17 \(M_{\odot}\) models also manifest a fast decrease in the entropy. Strong late-time infall might result in mixing in the outer PNS. Such mixing may inject neutron-rich matter into the wind-forming region, thereby increasing the density there and resulting in slightly lower \(Y_{e}\)s at the wind base.

### Nucleosynthesis

Figure 11 shows the entropy and \(Y_{e}\) of the wind. As mentioned in Section 3.3, the entropy never grows above 80 \(k_{b}\) per baryon and the \(Y_{e}\) can only be slightly below 0.5. This rules out the possibility of a strong r-process in the winds. However, it is still possible to produce some of the lightest r-process elements, as discussed in Wanajo (2013); Arcones and Thielemann (2013); Wanajo (2023). But the yield of such isotopes can vary a lot due to the large variation in the \(Y_{e}\) evolution of the winds. Figure 12 portrays the bulk nucleosynthesis in the context of our 3D CCSN models as calculated using SkyNet (Lippuner and Roberts, 2017). The production factor is calculated based on the solar system abundances in Lodders (2021). In these calculations, we don't distinguish winds that terminate at different radii. As a result, the nucleosynthesis results shown here reflect a range of termination times. The termination time determines how long the material stays above the minimum temperature (\(\sim\)0.2 MeV) for which most nucleosynthesis occurs. Therefore, winds that terminate earlier create isotopes up to the iron group via \(\alpha\)-rich freeze-out and show similar behavior to that seen during classical explosive nucleosynthesis, while the almost freely-expanding matter can have higher neutron-to-seed ratios with which to build elements up to Zr.
This component is uniquely associated with winds. In this figure, we see that winds generally have higher helium fractions. For models with more neutron-rich matter (such as the 11 \(M_{\odot}\) model and 9(a)/9(b) models), isotopes up to \({}^{90}\)Zr can be produced via a weak r-process. The \(\nu p\)-process (which we include in this study) can also help produce heavy elements, but a strong \(\nu p\)-process occurs only if the wind terminates neither too early nor too late (Arcones and Thielemann, 2013), which is a condition probably hard to satisfy in realistic simulations. This means that the \(\nu p\)-process will be sensitive to the morphology of the winds and the explosion, and that it is hard to predict its yield without doing actual 3D simulations. The 9(a), 9(b), 9.25, 11 and 17 \(M_{\odot}\) models show some production of heavier elements, while the 9.5 and 23 \(M_{\odot}\) models don't. This is in part a direct result of the chaotic \(Y_{e}\) evolution shown in Figure 10, and it also indicates that the nucleosynthesis results may depend on the morphology of the winds and the explosion. Although our simulations automatically include sound waves from the PNS and the heating and momentum flux due to them, we don't see strong r-process predicted in Nevins and Roberts (2023). There are some possible reasons. First, the entropy in winds is significantly lower than in those 1D simulations. Even if an extra energy source is included, it's unlikely to increase the entropy from below 80 to above a few hundreds of \(k_{b}\) per baryon. Second, the assumed \(Y_{e}=0.48\) in Nevins and Roberts (2023) is rarely achieved by most of our models, except the 11 and 17 \(M_{\odot}\) models. In addition, our simulation resolution may not be high enough to fully capture the non-linear sound-wave damping. It is worth noting that the mass of wind matter is much less than the mass of the total ejected matter, even if we don't include the outer envelope swept away by the explosion shock wave. In all our models, the peak mass flow rate in winds is at most a few \(10^{-2}\)\(M_{\odot}\)s\({}^{-1}\), and the mass in winds is no more than a few percent of a solar mass (See Table 1). Another important point is that the transition in thermal properties between the early ejecta and the later neutrino-driven wind is smooth, so there is no sudden change in the nucleosynthesis. For the purposes of this paper, we applied a time cut (the fourth condition in Section 3.2) to distinguish the early ejecta from the wind, but they should be considered jointly in a more detailed nucleosynthetic analysis. We leave such a nuanced study to a future paper. ### Morphology The morphology of the wind regions is determined jointly by the winds, the late-time infall, and the earlier matter blasted outward by the supernova shock. Figure 2 depicts the morphology of high-velocity regions on a 10000-km scale. The larger-scale wind regions are located along directions where the shock radii are larger. Actually, the wind direction coincides with the center of the higher-velocity bubbles (green bubbles in Figure 2), but not all higher-velocity bubbles have wind buried inside. This is either because some higher-velocity bubbles are not strong enough to clear out the infall and develop a wind, or because the winds inside them have decelerated and merged into the general flow during the expansion. In addition, there seems to be a correlation between the open angle of the wind region and the explosion energy. 
Winds in more energetic explosions (like the 17 \(M_{\odot}\) model) tend to have larger opening angles. These two correlations can be explained in a similar way. In directions where there exist relatively higher shock velocities and more energetic ejecta, the winds tend to sweep away the outer matter earlier and more efficiently; as a consequence, the winds develop more easily. After hitting the wind-termination shock, the wind matter enters the slowly-moving region and is mixed with other matter. Therefore, the wind region itself doesn't necessarily follow the distribution of wind matter. Figure 13 compares the angular mass distribution of the wind matter and all matter with a positive binding energy (all ejecta) at the end of the simulations of the 9.25, 11 and 17 \(M_{\odot}\) models. On larger scales (e.g., in the dipolar structures), the wind and ejecta distributions are correlated, since both components emerge more easily along the larger shock radius directions. On smaller scales, the winds and ejecta can be anti-correlated, with winds more distributed in the low-mass directions. This is because the wind matter is found more often in the high-velocity, high-entropy, low-density explosion bubbles. ## 4 Conclusions In this paper, we have analyzed the properties of early-phase neutrino-driven winds using twelve long-duration 3D state-of-the-art core-collapse simulations covering a large progenitor mass range from 9 to 60 solar masses. This is the first comprehensive paper on winds in the context of sophisticated 3D CCSN simulations. We define the wind to be the transonic outflow that originates below 80 km, and we use a time cut to distinguish the wind and early ejecta launched by the supernova shock wave itself (see Section 3.1 and 3.2). This is a technically simple definition which captures most of the central wind features. We find that the winds emerge naturally after the successful explosion, and that they universally generate wind-termination shocks (also known as secondary shocks) behind the primary explosion shock wave. The winds are seen in all simulations, indicating that they are a common phenomenon. However, the winds are generally aspherical, and in more massive (higher compactness) progenitor models they are distorted and channeled by long-lasting infall. The velocity profiles of the winds clearly show the transonic feature characteristic of winds. While all models experience similar inaugurating blast phases, winds in more massive models with higher compactness experience larger velocity variations due both to interaction with the primary blast ejecta and to the turbulent velocity field just above the PNS in its atmosphere. We find that there is at most a few percent of a solar mass in the wind component, while the energy carried by the wind to infinity can be as much as \(10-20\%\) of the total explosion energy. Neutrino-driven winds in 3D simulations approximately follow the same \(\dot{M}_{wind}\propto L^{2.5}\) relation predicted by the 1D stationary wind solutions. However, the \(\dot{M}_{wind}\propto M_{pns}^{-2}\) relation is not seen. Models with higher PNS masses also have stronger wind mass loss rates. This is in part because the higher-PNS mass models have higher neutrino luminosities and longer-lasting infall, which itself brings more matter into the wind-forming region. 
The entropy evolution of the 9 \(M_{\odot}\) model follows the 1D predictions very well, while the wind entropy in more massive models increases more slowly and has a greater spread in values. Our models with high PNS masses don't manifest the high entropies often predicted in 1D studies (e.g., Wanajo (2023)), and none of our models has an entropy above 80 \(k_{b}\) per baryon at the termination of the simulation. The electron fraction (\(Y_{e}\)) evolution in all our models is stochastic, but some wind matter can have \(Y_{e}<0.5\). This allows a weak r-process to occur. In our calculations, the first peak of the r-process up to Zirconium can be synthesized. Neutrino-driven winds are more likely to emerge along directions with the highest blast shock velocities, because in those directions the outer matter has been more efficiently cleared away. After hitting the wind-termination shock, wind matter decelerates. Most wind matter resides in the relative high-velocity, high-entropy, and low-density bubbles. Therefore, on larger angular scales the winds are distributed along the same directions as the primary blast ejecta, while on smaller scales the wind matter is not so tightly correlated with those directions, instead concentrating in low-density pockets. We note that our study, though it encompasses 3D simulations of unprecedented duration after bounce, is still limited to the first few seconds of the neutrino-driven winds; we have yet to capture the entire PNS wind phase. Moreover, it is technically difficult to distinguish completely the wind from the tail end of the earlier explosion ejecta. We have applied a time cut, but this might eliminate a small fraction of the early phase of the wind. Because the mass flow rate in winds decays quickly with time, missing the early phases may lead to non-negligible differences in the inferred properties of the wind vis a vis the summed ejecta. In addition, the tracers we employed in this study were post-processed based on the fluid velocity field saved every millisecond. Although a sub-iteration method was used to increase the temporal resolution, the tracer trajectory may still deviate from the true trajectory in the turbulent region around the PNS. However, this has little influence on the nucleosynthetic yields calculated and how they may be partitioned between the wind and earlier blast components. This is due in part to the fact that the temperature in the turbulence region is always above our NSE criteria (\(\sim\)7 GK) and no nucleosynthetic process has then yet started. But since we find that the atmosphere from which the PNS wind emerges is actually turbulent, the simple traditional picture found in the literature of a spherical wind emerging from a quiescent atmosphere is challenged by our new 3D insights into its true character. ## Acknowledgments We thank Matt Coleman and David Vartanyan for their technical help and advice during the conduct of this investigation. We also acknowledge support from the U. S. Department of Energy Office of Science and the Office of Advanced Scientific Computing Research via the Scientific Discovery through Advanced Computing (SciDAC4) program and Grant DE-SC0018297 (subaward 00009650), support from the U. S. National Science Foundation (NSF) under Grants AST-1714267 and PHY-1804048 (the latter via the Max-Planck/Princeton Center (MPPC) for Plasma Physics), and support from NASA under award JWST-GO-01947.011-A. 
A generous award of computer time was provided by the INCITE program, using resources of the Argonne Leadership Computing Facility, a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. We also acknowledge access to the Frontera cluster (under awards AST20020 and AST21003); this research is part of the Frontera computing project at the Texas Advanced Computing Center (Stanzione et al., 2020) under NSF award OAC-1818253. In addition, one earlier simulation was performed on Blue Waters under the sustained-petascale computing project, which was supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters was a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. Finally, the authors acknowledge computational resources provided by the high-performance computer center at Princeton University, which is jointly supported by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Princeton University Office of Information Technology, and our continuing allocation at the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U. S. Department of Energy under contract DE-AC03-76SF00098.
2307.04147
A Survey and Approach to Chart Classification
Charts represent an essential source of visual information in documents and facilitate a deep understanding and interpretation of information typically conveyed numerically. In the scientific literature, there are many charts, each with its stylistic differences. Recently the document understanding community has begun to address the problem of automatic chart understanding, which begins with chart classification. In this paper, we present a survey of the current state-of-the-art techniques for chart classification and discuss the available datasets and their supported chart types. We broadly classify these contributions as traditional approaches based on ML, CNN, and Transformers. Furthermore, we carry out an extensive comparative performance analysis of CNN-based and transformer-based approaches on the recently published CHARTINFO UB-UNITECH PMC dataset for the CHART-Infographics competition at ICPR 2022. The data set includes 15 different chart categories, including 22,923 training images and 13,260 test images. We have implemented a vision-based transformer model that produces state-of-the-art results in chart classification.
Anurag Dhote, Mohammed Javed, David S Doermann
2023-07-09T10:35:19Z
http://arxiv.org/abs/2307.04147v1
# A Survey and Approach to Chart Classification ###### Abstract Charts represent an essential source of visual information in documents and facilitate a deep understanding and interpretation of information typically conveyed numerically. In the scientific literature, there are many charts, each with its stylistic differences. Recently the document understanding community has begun to address the problem of automatic chart understanding, which begins with chart classification. In this paper, we present a survey of the current state-of-the-art techniques for chart classification and discuss the available datasets and their supported chart types. We broadly classify these contributions as traditional approaches based on ML, CNN, and Transformers. Furthermore, we carry out an extensive comparative performance analysis of CNN-based and transformer-based approaches on the recently published CHARTINFO UB-UNITECH PMC dataset for the CHART-Infographics competition at ICPR 2022. The data set includes 15 different chart categories, including 22,923 training images and 13,260 test images. We have implemented a vision-based transformer model that produces state-of-the-art results in chart classification. Keywords:Chart Classification Deep Learning Chart Mining ## 1 Introduction Charts provide a compact summary of important information or research findings in technical documents and are a powerful visualization tool widely used by the scientific and business communities. In the recent literature, the problem of chart mining has attracted increased attention due to numerous advantages, as suggested in the comprehensive survey published by Davila et al. in 2019 [11]. The term Chart mining refers to the process of extracting information represented by charts. Another motivating factor in the increased attention paid to this problem is a series of competitions held in conjunction with significant conferences to address the critical challenges in the chart mining pipeline[10, 12, 13]. Since a variety of charts are possible, chart classification is often the first step in chart mining. The task of chart image classification can be formalized as, given a chart image extracted from a document, classifying the image into one of \(N\) defined categories. The wide variety of chart types in the literature adds to the complexity of the task[6, 11, 34]. Some additional problems include interclass similarity, noise in authentic chart images, and more state-of-the-art datasets that cover multiple chart types and incorporate 2.5 or 3D charts and noise into the training samples[34]. The rise of robust deep learning models has contributed significantly to the success of chart classification. Deep learning approaches have outperformed traditional machine learning approaches regarding robustness and performance. Yet there need to be more state-of-the-art solutions that can provide stable results and are robust enough to address noise in some data sets. In this paper, we provide a performance comparison of several deep learning models that are state-of-the-art in the ImageNet[28] classification task. In addition, we report the performances of several popular vision transformers, which, to the best of our knowledge, have yet to be used for chart classification, except for the recent ICPR 2022 CHART-Infographics competition[13]. This paper is organized as follows. 
Section 2 summarizes the existing chart classification literature covering traditional and deep learning-based methods, including a brief discussion on transformer-based chart classification. Section 3 reports and summarizes publicly available datasets. Section 4 briefly highlights the popular ImageNet pre-trained deep learning-based models that will be used for our comparative study. Section 5 describes the latest edition of the UB PMC dataset, the training and testing protocols, and a discussion on their performance for chart classification. Section 6 provides information on possible improvements and suggestions for future research. Finally, Section 7 concludes with a summary of the paper. ## 2 Chart Classification Techniques Based on the type of approaches used to implement the chart classification task in the literature, they can be grouped into traditional ML, CNN-based deep learning, and Transformer-based deep learning. Each type of approach is described briefly below. ### Traditional ML approaches Traditional approaches rely on feature extraction methods that are often manual and general-purpose. Features are extracted and then represented in mathematical form for direct processing by machine learning classifiers. Savva et al.[29] present a system that automatically reformats visualizations to increase visual comprehension. The authors use low-level image features for classification in conjunction with text-level features. The system uses a multiclass SVM classifier trained on a corpus containing 2601 chart images labeled with ten categories, following Gao et al.'s manual extraction approach. In [14], researchers propose VIEW, a system that automatically extracts information from raster-format charts. The authors used an SVM to separate the textual and graphical components and classify the chart images based on the graphic elements extracted from the visual components. The text is typically found in three chart categories - bar charts, pie charts, and line graphs, with 100 images for each category collected from various real-world digital resources. Instead of taking an image as input, Karthikeyani and Nagarajan[19] present a system to recognize chart images from PDF documents using eleven texture features that are part of a Gray Level Co-Occurrence Matrix. A chart image is located in the PDF Document database, and the features are extracted and fed to the learning model. SVM, KNN, and MLP are the classifiers used for classification. Cheng et al.[7] employ a multimodal approach that uses text and image features. These features are provided as input to an MLP. The output is characterized as a fuzzy set to get the final result. The corpus contains 1707 charts with three categories and a 96.1% classification result. ### CNN-based Deep Learning Approaches Liu et al.[22] used a combination of Convolutional Neural Networks (CNNs) and Deep Belief networks (DBNs) to capture high-level information present in deep hidden layers. Fully Connected Layers of Deep CNN are used to extract deeply hidden features. A DBN is then used to predict the image class using the deep hidden features. The authors use transfer learning and perform fine-tuning to prevent overfitting. They use a data set that includes more than \(5,000\) images of charts, including pie, scatter, line, bar, and flow classes. Deep features are useful over primitive features to provide better stability and scalability to the proposed framework. 
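As a generic illustration of this "deep features fed to a separate classifier" recipe, a pre-trained backbone can be used as a fixed feature extractor. The sketch below assumes torchvision and a ResNet-50 backbone purely for illustration; it is not the specific pipeline of Liu et al.[22].

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained backbone with the classification head removed: the remaining
# globally pooled activations serve as "deep features" for a downstream
# classifier (SVM, DBN, MLP, ...).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # drop the 1000-way ImageNet head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(image_path: str) -> torch.Tensor:
    """Return a 2048-dimensional feature vector for one chart image."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)     # shape (1, 3, 224, 224)
    return backbone(x).squeeze(0)
```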
Liu et al.'s proposed method achieves an average accuracy of 75.4%, which is 2.8% more than the method that uses only deep ConvNets. Given the results of CNNs in the classification of natural images, Siegel et al.[30] used two CNN-based architectures for chart classification. They evaluated AlexNet and ResNet-50, which are pre-trained on the ImageNet data set and then fine-tuned for chart classification. This transfer learning approach is prevalent in subsequent works addressing this particular problem. The proposed frameworks outperformed state-of-the-art models at the time, such as ReVision, by a significant margin. ResNet-50 achieved the best classification accuracy of 86% on a data set that contained more than 60000 images spread over seven categories. Amara et al.[1] proposed a CNN based on LeNet to classify images from their corpus of 3377 images into 11 categories. The model comprises eight layers: one input layer, five hidden layers, one fully connected layer, and one output layer. The fully connected layer is used as a classifier, while the hidden layers are convolution and pooling layers designed to extract features automatically. The fully connected layer employs softmax activation to classify images into the defined classes. For evaluation of the model's performance, an 80-20 split is performed on the data set for training and assessment. The proposed model performs better than the LeNet and pretrained LeNet architectures with an accuracy of 89.5%. Jung et al. [18] present a classification method using the deep learning framework Caffe and evaluate its efficacy by comparing it with ReVision[29]. The authors use GoogLeNet[32] for classification and compare its results with shallower networks like LeNet-1 and AlexNet[20]. GoogLeNet outperforms LeNet-1 and AlexNet with an accuracy of \(91.3\%\). Five-fold cross-validation is used for calculating the accuracy on an image corpus with \(737\) - \(901\) images for each chart type. The study concludes that ChartSense provides higher classification accuracy for all chart types than ReVision. With studies adopting the deep learning approach for chart image classification, a comparative study of traditional vs. CNN architectures was required. Chagas et al.[6] provide a comparative analysis of conventional vs. CNN techniques. The authors evaluated CNN architectures (VGG19[31], Resnet-50[15], and Inception-V3[33]) for chart image classification for ten classes of charts. The performance is compared with conventional machine learning classifiers: Naive Bayes, HOG features combined with KNN, Support Vector Machines, and Random Forests. Pre-trained CNN models with fine-tuned last convolutional layers were used. The authors concluded that CNN models surpass traditional methods with an accuracy of \(77.76\%\) (Resnet-50) and \(76.77\%\) (Inception-V3) compared to \(45.03\%\) (HOG + SVM). Dai et al.[9] employ four deep learning models on a corpus of \(11,174\) chart images of five categories. Of AlexNet[20], VGG16[31], GoogLeNet[32] and ResNet[15], the authors get the best accuracy of \(99.55\%\) for the VGG16 model. VGG16 outperforms the models used in the ChartSense paper by a large margin. A significant roadblock to chart mining research is that current chart data sets are neither large nor diverse enough to support deep learning. To address this problem, Jobin et al.[21] presented DocFigure, a chart classification data set with \(33,000\) charts in \(28\) different classes.
To classify charts, the authors' proposed techniques utilize deep features, deep texture features, and a combination of both. Among these baseline classification techniques, the authors observed that combining deep features and deep texture features classifies images more efficiently than individual features. The average classification accuracy improved by \(3.94\%\) and \(2.10\%\) by concatenating FC-CNN and FV-CNN over the individual use of FC-CNN and FV-CNN, respectively. The overall accuracy of the combined feature methods turned out to be \(92.90\%\). Luo et al. proposed a unified method to handle various chart styles[26], showing that generalization can be obtained by combining deep learning frameworks with rule-based methods. The experiments were performed on three different datasets of over \(300,000\) images with three chart categories. In addition to the framework, an evaluation metric for bar, line, and pie charts is also introduced. The authors concluded that the proposed framework performs better than traditional rule-based and pure deep learning methods. Araujo et al.[2] implemented four classic CNN models that performed well on computer vision tasks, including Xception[8], VGG19[31], ResNet152[15] and MobileNet[16]. The weights of these models were pre-trained on the ImageNet dataset, and the authors further performed hyperparameter tuning to obtain a stable learning rate and weight decay. These models were employed on a self-aggregated chart image corpus of 21,099 images with 13 different chart categories. Xception outperforms the other models, achieving an accuracy of 95%. The problem of small datasets has been prevalent since the problem of chart mining was first introduced. Most work tries to increase the size of the dataset. However, Bajic and Job[4] use a Siamese CNN to work with smaller datasets. The authors show that an accuracy of 100% can be achieved with 50 images per class, which is significantly better than using a vanilla CNN. With the increase in datasets for chart images and the rise of deep learning models being employed on said datasets, an empirical study of these deep learning models was due. Thiyam et al.[35] compared 15 different deep-learning models on a self-aggregated dataset of 110,182 images spanning 24 different chart categories. In addition, the authors tested the performance of these models on several preexisting test sets. They concluded that Xception (90.25%) and DenseNet121 (90.12%) provide the most consistent and stable performance of all the deep learning models. The authors arrived at this decision by employing a five-fold cross-validation technique and calculating the standard deviation for each model across all datasets. Davila et al.[10] summarized the work of the participants in the first edition of the Competition on Harvesting Raw Tables from Infographics, which provided data and tools for the chart recognition community. Two data sets were provided for the classification task. One was a synthetically generated AdobeSynth dataset, and the other, the UB-PMC data set, was gathered from the PubMedCentral open-access library. The highest average F1-measure achieved for the synthetic data set was 99.81%, and the highest F1-measure achieved for the PMC data set was 88.29%. In the second edition of the competition, the PMC set was improved and included in the training phase. An ensemble of ResNet152 and DenseNet121 achieved the highest F1-score of 92.8%. The third edition of the competition was recently held at ICPR 2022.
The corpus of real chart images was made up of 36,183 chart images. The winning team achieved an F1 score of 91% with a base Swin transformer model with a progressive resizing technique. We summarize the competition details in Table 1 \begin{table} \begin{tabular}{|l|l|c|c|c|l|c|} \hline **Competition** & **Dataset** & **\#Classes** & **Train** & **Test** & **Top performing** & **F1-measure** \\ & & & **Size** & **Size** & **Model** & \\ \hline \hline ICDAR 2019 [10] & AdobeSynth & 10 & 198,010 & 4540 & ResNet-101 & 99.81\% \\ & PMC & 7 & & 4242 & & 88.29\% \\ \hline ICPR 2020 [12] & AdobeSynth & 12 & 14,400 & 2,999 & DenseNet-121 + & 100\% \\ & UB PMC & 15 & 15,636 & 7,287 & ResNet-152 & 92.8\% \\ \hline ICPR 2022 [13] & UB PMC & 15 & 22,923 & 13,620 & Swin Transformer & 91\% \\ \hline \end{tabular} \end{table} Table 1: Competition on Harvesting Raw Tables from Infographics (CHART-Infographics) ### Transformer-based Deep Learning Approaches Since the inception of Vision Transformer, there has been a lot of development in various computer vision tasks such as image classification, object detection, and image segmentation. Vision transformer has outperformed CNN-based models in these tasks on the ImageNet dataset. However, there has not been widespread application of vision transformers to chart image classification. To our knowledge, only the Swin transformer[24] has been used for chart classification as reported in [13], which won the CHART-Infor graphics challenge ICPR2022. The authors applied a Swin Transformer Base Model with a progressive resizing technique. The models were initially trained on a scale (input size) of 224 followed by 384[13]. The existing models in the literature are summarised in Table 2. \begin{table} \begin{tabular}{|l|l|l|l|c|} \hline **Authors** & **Dataset** & **Model** & **Metric** & **Performance** \\ \hline \hline Savva et al.[29] & Self-acquired & SVM & Accuracy & 96.00\% \\ \hline Gao et al.[14] & Self-acquired & SVM & Accuracy & 97.00\% \\ \hline Kartikeyani & & MLP & Accuracy & 69.68\% \\ and & Self-acquired & KNN & & 78.06\% \\ Nagarajan[19] & & SVM & & 76.77\% \\ \hline Cheng et al.[7] & Self-acquired & MLP & Accuracy & 96.10\% \\ \hline Liu et al.[22] & DeepChart & CNN + DBN & Accuracy & 75.40\% \\ \hline Siegel et al.[30] & ChartSeer & AlexNet & Accuracy & 84.00\% \\ & & ResNet-50 & & 86.00\% \\ \hline Amara et al.[1] & Self-acquired & CNN & Accuracy & 89.50\% \\ \hline Jung et al.[18] & Chart-Sense & GoogleNet & Accuracy & 91.30\% \\ \hline Balaji et al.[5] & Self-acquired & CNN & Accuracy & 99.72\% \\ \hline Chagas et al.[6] & Chart-Vega & ResNet-50 & Accuracy & 76.76\% \\ & & Inception-V3 & & 76.77\% \\ \hline Dai et al.[9] & Self-acquired & ResNet & Accuracy & 98.89\% \\ & & GoogleNet & & 99.07\% \\ & & AlexNet & & 99.48\% \\ & & VGG-16 & & 99.55\% \\ \hline Liu et al.[23] & Self-acquired & VGG-16 & Accuracy & 96.35\% \\ \hline Davila et al.[10] & Synthetic & ResNet-101 & F1-measure & 99.81\% \\ & UB-PMC & ResNet-101 & & 88.29\% \\ \hline Jobin et al.[21] & DocFigure & FC-CNN + FV-CNN & Accuracy & 91.30\% \\ \hline Bajic et al.[3] & Self-acquired & VGG-16 & Accuracy & 89.00\% \\ \hline Araujo et al.[2] & Self-acquired & Xception & Accuracy & 95.00\% \\ \hline Luo et al.[26] & Chart-OCR & CNN & Custom(Bar) & 91.90\% \\ & & & Custom(Pie) & 91.80\% \\ & & & Custom(Line) & 96.20\% \\ \hline Davila et al.[12] & UB-PMC & DenseNet-121 + ResNet-152 & F1-measure & 92.80\% \\ \hline Bajic and Job[4] & Self-acquired & Siamese CNN & Accuracy & 100\% \\ \hline 
Thiyam et al.[35] & Self-acquired & Xception & Accuracy & 90.25\% \\ & & DenseNet121 & & 90.12\% \\ & & DenseNet201 & & 90.53\% \\ \hline Davila et al.[13] & UB-PMC & Swin Transformer & F1-measure & 91.00\% \\ \hline \end{tabular} \end{table} Table 2: Published Literature on Chart Classification ## 3 Chart Classification Datasets There has been a significant increase in the size of datasets both in terms of the number of samples and the number of chart types. The Revision dataset[29] had only 2,601 images and 10 chart types. The recent publicly available dataset[13] comprises around 33,000 chart images of 15 different categories. The details of several publicly available datasets are discussed in this section. _ChartSense [18]:_ The ChartSense dataset was put together using the ReVision dataset, and the authors manually added some additional charts. The corpus has 5659 chart images that cover ten chart categories. \begin{table} \begin{tabular}{|l|l|l|c|c|} \hline **Dataset** & **Year** & **\#Samples** & **\#Category** & **Public** \\ & & & & **(Y/N)** \\ \hline \hline ReVision[29] & 2011 & 2601 & 10 & Y \\ \hline View[14] & 2012 & 300 & 3 & N \\ \hline Self[19] & 2012 & 155 & 8 & N \\ \hline Self[7] & 2014 & 1707 & 3 & N \\ \hline DeepChart[22] & 2015 & 5000 & 5 & Y \\ \hline ChartSeer[30] & 2016 & 60000 & 7 & N \\ \hline Self[1] & 2017 & 3377 & 11 & N \\ \hline Chart-Sense[18] & 2017 & 6997 & 10 & Y \\ \hline Chart-Text[5] & 2018 & 6000 & 2 & N \\ \hline Chart-Vega[6] & 2018 & 14471 & 10 & Y \\ \hline Chart decoder[9] & 2018 & 11,174 & 5 & N \\ \hline Self[23] & 2019 & 2500 & 2 & N \\ \hline Synthetic[10] & 2019 & 202550 & 10 & Y \\ UB-PMC [10] & & 4242 & 7 & Y \\ \hline DocFigure[21] & 2019 & 33000 & 28 & Y \\ \hline Self[3] & 2020 & 2702 & 10 & N \\ \hline Self[2] & 2020 & 21099 & 13 & N \\ \hline Chart-OCR[26] & 2021 & 386966 & 3 & N \\ \hline UB-PMC[12] & 2021 & 22924 & 15 & Y \\ \hline Self[4] & 2021 & 3002 & 10 & N \\ \hline Self[35] & 2021 & 110182 & 24 & N \\ \hline UB-PMC[13] & 2022 & 33186 & 15 & Y \\ \hline \end{tabular} \end{table} Table 3: Chart Classification Datasets ChartVega [6]:This dataset has ten chart types and was created due to a need for a benchmark dataset for chart image classification[6]. The dataset contains both synthetic and real chart images. The set contains 14,471 chart images, of which 12059 are for training and 2412 are for testing. In addition, a validation set of 2683 real chart images is provided. No separate annotations are provided, as chart images are separated according to their types. DocFigure [21]:This corpus consists of 28 categories of annotated figure images. There are 33,000 images that include non-chart categories like natural images, tables, 3D objects, and medical images. The train set consists of 19,797 images, and the test set contains 13173 images. The labels are provided in a text document. ChartOCR [26]:The dataset contains 386,966 chart images created by the authors by crawling public excel sheets online. The dataset contains only three classes of chart images. The dataset is divided into the train, validation, and test sets. The training corpus contains 363,078 images, the validation set contains 11,932 images, and the test set contains 11,965 images. The annotations for the chart images are provided in JSON format. UB-PMC CHART-Infographics:This dataset was introduced in the first edition of Competition on Harvesting Raw Tables from Infographics (ICPR 2019 CHART Infographics)[10]. 
This dataset has synthetic images created using matplotlib. For testing, a large set of synthetic data and a small set of real chart images harvested from PubMedCentral\({}^{3}\) were used. The training set has 198,010 images, whereas the synthetic test set has 4540 images, and the real test set has 4242 images. The dataset has ten different chart categories. Footnote 3: https://www.ncbi.nlm.nih.gov/pmc/ The second edition of the competition[12] provided a dataset containing 22923 real chart images of 15 different chart categories in both training and testing sets. The training set has 15636 images, while the test set has 7287 images. The annotations for the chart image samples are provided in both JSON and XML formats. The dataset presented as a part of the third and most recent competition comprises 36183 images of 15 different chart categories. The training set contains 22,923 images, while the test set contains 13,260 images. Similar to the previous edition, the annotations are provided in JSON and XML formats. To the best of our knowledge, this is the largest publicly available dataset for chart image classification. The existing classification data sets for charts are summarized in Table 3, and the composition of the publicly available datasets is reported in Table 4.

## 4 Deep Learning Models

We consider two categories of deep learning models - CNN-based and Transformer-based - for the comparative study. For CNN-based models, we have considered the proven state-of-the-art models for image classification on the large-scale benchmark dataset ImageNet[28] over the years. For vision transformer models, we have chosen the models that have been proven to outperform CNN-based models in computer vision tasks.

### ResNet[15]

The Deep Residual Network was introduced in 2015 and was significantly deeper than the previous deep learning networks. The motivation behind the model was to address the degradation problem: degrading training accuracy with increasing model depth. The authors added shortcut connections, also known as skip connections, that perform the proposed identity mapping and are significantly easier to optimize than unreferenced mappings. Despite being deeper than the previous models, ResNet still has lower complexity. It achieved a top-5 error of 3.57% and claimed the top position in the 2015 ILSVRC classification competition[28]. We use a 152-layer version of this Deep Residual Network called ResNet-152 for our classification problem.
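A typical fine-tuning setup for such a pre-trained backbone on the 15 chart categories might look like the following sketch (assuming torchvision). Only the learning rate and the label-smoothing loss mirror the protocol described later in Section 5.2; the optimizer choice and smoothing factor are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CHART_CLASSES = 15   # UB-PMC chart categories

# Start from ImageNet weights and replace the final fully connected layer so the
# network predicts chart categories instead of the 1000 ImageNet classes.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CHART_CLASSES)

# Loss and optimizer roughly matching the reported protocol (label-smoothing
# cross entropy, learning rate 1e-4); the smoothing value and Adam are assumed.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One fine-tuning step on a batch of chart images (tensors on the same device)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```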
\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline **Chart Type** & **UB-PMC** & **DocFigure** & **Chart-Sense** & **Chart-OCR** & **Chart-Vega** \\ & **[13]** & **[21]** & **[18]** & **[26]** & **[6]** \\ \hline \hline Arc & - & - & - & - & 1440 \\ Area & 308 & 318 & 509 & - & 1440 \\ Block & - & 1024 & - & - & - \\ Bubble & - & 339 & - & - & - \\ Flowchart & - & 1074 & - & - & - \\ Heatmap & 377 & 1073 & - & - & - \\ Horizontal Bar & 1421 & - & - & - & - \\ Horizontal Interval & 586 & - & - & - & - \\ Line & 13956 & 9022 & 619 & 122890 & 1440 \\ Manhattan & 256 & - & - & - & - \\ Map & 906 & 1078 & 567 & - & - \\ Parallel Coordinate & - & - & - & - & 1339 \\ Pareto & - & 311 & 391 & - & - \\ Pie & 433 & 440 & 568 & 76922 & 1440 \\ Polar & - & 338 & - & - & - \\ Radar & - & 309 & 465 & - & - \\ Re-orderable Matrix & - & - & - & - & 1440 \\ Scatter & 2597 & 1138 & 696 & & 1640 \\ Scatter-Line & 3446 & - & - & - & - \\ Sunburst & - & - & - & - & 1440 \\ Surface & 283 & 395 & - & - & - \\ Table & - & 1899 & 594 & - & - \\ Treemap & - & - & - & - & 1440 \\ Venn & 206 & 889 & 693 & - & - \\ Vertical Bar & 9199 & 1196 & 557 & 187154 & 1512 \\ Vertical Box & 1538 & 605 & - & - & - \\ Vertical Interval & 671 & - & - & - & - \\ \hline Total & 36183 & 33071 & 5659 & 386966 & 14471 \\ \hline \end{tabular} \end{table} Table 4: Composition of publicly available datasets ### Xception[8] Xception is a re-interpretation of the inception module. The said inception module is replaced with depth-wise separable convolutions. The number of parameters in both Inception V3 and Xception is the same, so the slight performance improvement is due to the more efficient use of parameters. Xception shows a better performance improvement than Inception V3 on the JFT dataset on the ImageNet dataset. It achieves the top five accuracy of 94.5%. Xception also shows promising results in the chart classification literature, as reported by [2] and [35]. ### DenseNet[17] The Dense Convolutional Network, introduced in 2018, connects each layer in the network architecture to all other layers. This allows for the exchange of feature maps at every level and considers the same input as input gathered from all the previous layers rather than just one preceding layer. The difference between DenseNet and Resnet lies in the way that they combine features. ResNet combines features through summation, whereas DenseNet combines them through concatenation. DenseNet is easier to train due to the improved flow of gradients and other information through the network. The vanilla DenseNet has fewer parameters than the vanilla ResNet network. We used DenseNet-121 for our classification task as it was one of the best models for the chart image dataset as reported in [35]. ### ConvNeXt[25] ConvNeXt model was introduced as a response to hierarchical transformers outperforming convnets in image classification tasks. Starting with a standard ResNet architecture, the model is carefully modified to adapt the specific characteristics of a typical hierarchical transformer. This resulted in a CNN-based model that matches the transformers in robustness and scalability across all benchmarks. ConvNeXt achieves a top-1 accuracy of 87.8% on ImageNet. ### DeIT Transformer[36] The authors proposed the Data Efficient Image Transformer(DeIT) with 86M parameters to make the existing vision transformer more adoptable. This convolution-free approach achieves competitive results against the existing state-of-the-art models on ImageNet. 
The proposed vision transformer achieved a top-1 accuracy of 85.2% on the ImageNet classification task. We use the base DeIT transformer for the chart classification task. ### Swin Transformer[24] A hierarchical transformer that employs shifted windows to obtain representations for vision tasks. The authors note that the hierarchical architecture provides linear computational complexity with respect to image size. Self-attention is computed within non-overlapping local windows, while the shifted windowing scheme provides connections across windows. These qualities contribute to the Swin transformer's excellent performance across computer vision tasks. It achieves 87.3% top-1 accuracy on the ImageNet-1k dataset. We perform experiments with all 13 available Swin Transformer models and report their performance in Table 5. Furthermore, we refer to the best-performing Swin Transformer model as Swin-Chart in Table 6. ## 5 Experimental Protocol ### Dataset We use the ICPR2022 CHARTINFO UB PMC[13] dataset to perform our comparative study of deep learning models. The dataset is divided into training and testing sets. The number of chart images in the training and test sets is 22,923 and 11,388, respectively. The ground truth values are annotated in JSON and XML formats. We further divide the provided training set into training and validation sets with an 80/20 ratio. The dataset contains charts of 15 categories: area, map, heatmap, horizontal bar, Manhattan, horizontal interval, line, pie, scatter, scatter-line, surface, Venn, vertical bar, vertical box, and vertical interval. Samples of each chart type present in the dataset are shown in Figure 1. Figure 1: Sample of chart images used in this study from the UB-PMC[13] dataset. ### Training and Testing Setup We choose the ResNet152, DenseNet121, Xception, and ConvNeXt CNN-based models and the DeIT and Swin Transformer-based models for chart image classification. The CNN-based models were selected based on their performance in the existing literature on the ImageNet image classification task. The transformer-based models are chosen because they have been reported to outperform CNN-based models on image classification benchmarks. We use the pre-trained ImageNet weights of these models and fine-tune them for our chart classification task. The models are trained on a computer with an RTX 3090 video card with 24 GB of memory. PyTorch[27] was used as the engine for our experiments. We use a batch size of 64 for CNN-based models and a batch size of 16 for transformer-based models. A learning rate of \(10^{-4}\) is used to train each model for 100 epochs. Label Smoothing Cross Entropy Loss is used as the loss function. The evaluation measures the average over all classes and reports precision, recall, and F1-score.
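For illustration, a minimal PyTorch sketch of this fine-tuning setup is shown below; the optimizer choice and the label-smoothing factor are assumptions not stated above, and `model` and `train_loader` stand for any of the selected backbones and the UB-PMC training split.
```
# Sketch of the fine-tuning loop described above (lr 1e-4, 100 epochs,
# label-smoothing cross entropy). Optimizer and smoothing factor are assumptions.
import torch
import torch.nn as nn

def finetune(model, train_loader, epochs=100, lr=1e-4, device="cuda"):
    model = model.to(device)
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```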
### Comparative Results The models were trained following the steps mentioned in the previous section and were tested on the UB-PMC test data set. We calculate all deep learning models' average precision, recall, and F1 score. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Model & Precision & Recall & F1-measure \\ \hline \hline SwinT & 0.929 & 0.924 & 0.922 \\ SwinT\_s3 & 0.931 & 0.923 & 0.922 \\ SwinS & 0.931 & 0.926 & 0.925 \\ SwinS\_s3 & 0.928 & 0.922 & 0.919 \\ SwinB\_224 & 0.933 & 0.926 & 0.925 \\ SwinB\_384 & 0.936 & 0.932 & 0.931 \\ SwinB\_224\_in22k\_ft1k & 0.934 & 0.930 & 0.929 \\ SwinB\_384\_in22k\_ft1k & 0.933 & 0.929 & 0.927 \\ SwinB\_s3 & 0.927 & 0.923 & 0.921 \\ **SwinL\_224** & **0.937** & **0.933** & **0.932** \\ SwinL\_384 & 0.937 & 0.931 & 0.929 \\ SwinL\_224\_in22k\_ft1k & 0.937 & 0.933 & 0.932 \\ SwinL\_384\_in22k\_ft1k & 0.934 & 0.930 & 0.929 \\ \hline \end{tabular} \end{table} Table 5: Comparative Performance of all the 13 Pre-trained Swin Transformer Models on the ICPR2022 CHARTINFO UB PMC dataset Among CNN-based models, ResNet-152 and ConvNeXt provide the best results across all evaluation metrics. The ResNet-152 result is consistent with the results in [13] for CNN-based models. For the Swin Transformer, we perform experiments on 13 models consisting of Swin Tiny (SwinT), Swin Small (SwinS), Swin Base (SwinB), and Swin Large (SwinL) and their variants. SwinL with input image dimension 224 performs best, with an F1-score of 0.932, so the **SwinL** model is further referred to as **Swin-Chart**. The scores of all the Swin Transformer models are summarized in Table 5. The best-performing CNN-based models fail to compete with Swin-Chart for the chart classification task, as it outperforms the other five models with an average F1-score of 0.932. The scores for the deep learning models are summarized in Table 6. Furthermore, we compare our best-performing model (Swin-Chart) with the models reported in [13]. This comparison is summarized in Table 7. We note that Swin-Chart surpasses the winner of the ICPR 2022 CHART-Infographics competition with an average F1-score of 0.931. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Model & Precision & Recall & F1-score \\ \hline \hline Resnet-152[15] & 0.905 & 0.899 & 0.897 \\ Xception[8] & 0.882 & 0.870 & 0.866 \\ DenseNet-121[17] & 0.887 & 0.879 & 0.875 \\ ConvNeXt[25] & 0.906 & 0.898 & 0.896 \\ DeIT[36] & 0.888 & 0.879 & 0.874 \\ Swin-Chart & **0.937** & **0.933** & **0.932** \\ \hline \end{tabular} \end{table} Table 6: Comparative performances of the CNN-based and Transformer-based models on the ICPR2022 CHARTINFO UB PMC dataset \begin{table} \begin{tabular}{|l|c|c|c|} \hline Team & Precision & Recall & F1-score \\ \hline \hline Our (Swin-Chart) & **0.937** & **0.933** & **0.932** \\ IIIT\_CVIT & 0.926 & 0.901 & 0.910 \\ UB-ChartAnalysis & 0.900 & 0.881 & 0.886 \\ six seven four & 0.865 & 0.808 & 0.827 \\ CLST-IITG & 0.704 & 0.657 & 0.654 \\ \hline \end{tabular} \end{table} Table 7: Comparison of Swin-Chart from Table 6 with the models stated in [13] on the ICPR2022 CHARTINFO UB PMC dataset ## 6 Future Directions Although there has been a significant increase in published articles on chart classification, several problems still need to be addressed. ### Lack of Standard Benchmark Data Sets The chart image classification problem has been extensively addressed in previous work. Efforts have been made to increase the size of chart image datasets that also cover a wide variety of charts[10, 35]. With the growing literature in various domains, authors are finding creative ways to use different charts. This adds to the variety of chart types.
Integrating such diverse chart types while creating chart datasets remains an open challenge. In addition, the popularity of chart types such as bar, line, and scatter over others such as Venn, surface, and area leads to a disparity in the number of samples available per chart type. ### Lack of Robust Models Recent work makes some problematic assumptions in addressing this problem[11]. The lack of a diverse benchmark dataset adds to this problem, as model performance is not consistent across publicly available datasets. The inherent intra-class dissimilarity and inter-class similarity of several chart types also affect model performance. ### Inclusion of Noise Most of the work in the existing literature ignores the effect of noise. Different types of noise, such as background grids, low image quality, composite charts, and multiple components accompanying the figure, lead to poor performance for models that perform exceptionally well on noiseless data[34]. In addition to a noiseless chart image dataset, providing a small set of noisy chart images would help fine-tune the models so that they become invariant to such noise. ## 7 Conclusion We have provided a brief survey of existing chart classification techniques and datasets. We used a Transformer model to obtain state-of-the-art results. Although there has been significant development both in the variety of models and in the size of datasets, we observe that the chart classification problem is still not solved, especially for noisy and low-quality charts. Our comparative study showed that Swin-Chart outperforms the other vision transformer and CNN-based models on the latest UB-PMC dataset. In the future, we plan to generalize the results of Swin-Chart over other publicly available datasets and try to bridge the gap to a robust deep-learning model for chart image classification.
2305.05420
Estimating related words computationally using language model from the Mahabharata -- an Indian epic
'Mahabharata' is the most popular among many Indian pieces of literature referred to in many domains for completely different purposes. This text itself is having various dimension and aspects which is useful for the human being in their personal life and professional life. This Indian Epic is originally written in the Sanskrit Language. Now in the era of Natural Language Processing, Artificial Intelligence, Machine Learning, and Human-Computer interaction this text can be processed according to the domain requirement. It is interesting to process this text and get useful insights from Mahabharata. The limitation of the humans while analyzing Mahabharata is that they always have a sentiment aspect towards the story narrated by the author. Apart from that, the human cannot memorize statistical or computational details, like which two words are frequently coming in one sentence? What is the average length of the sentences across the whole literature? Which word is the most popular word across the text, what are the lemmas of the words used across the sentences? Thus, in this paper, we propose an NLP pipeline to get some statistical and computational insights along with the most relevant word searching method from the largest epic 'Mahabharata'. We stacked the different text-processing approaches to articulate the best results which can be further used in the various domain where Mahabharata needs to be referred.
Vrunda Gadesha, Keyur D Joshi, Shefali Naik
2023-05-09T13:13:26Z
http://arxiv.org/abs/2305.05420v1
# Estimating related words computationally using language model from the Mahabharata ###### Abstract 'Mahabharata' is the most popular among many Indian pieces of literature referred to in many domains for completely different purposes. This text itself is having various dimension and aspects which is useful for the human being in their personal life and professional life. This Indian Epic is originally written in the Sanskrit Language. Now in the era of Natural Language Processing, Artificial Intelligence, Machine Learning, and Human-Computer interaction this text can be processed according to the domain requirement. It is interesting to process this text and get useful insights from Mahabharata. The limitation of the humans while analyzing Mahabharata is that they always have a sentiment aspect towards the story narrated by the author. Apart from that, the human cannot memorize statistical or computational details, like which two words are frequently coming in one sentence? What is the average length of the sentences across the whole literature? Which word is the most popular word across the text, what are the lemmas of the words used across the sentences? Thus, in this paper, we propose an NLP pipeline to get some statistical and computational insights along with the most relevant word searching method from the largest epic 'Mahabharata'. We stacked the different text-processing approaches to articulate the best results which can be further used in the various domain where Mahabharata needs to be referred. NLP Pipeline, Text-Processing, Analysis of Mahabharata, Word2Vac ## I Introduction Natural Language processing is the cutting-edge technology equipped with efficient tools and techniques to deal with unstructured text data. Using NLP pipeline techniques, a large amount of text can be processed very quickly and accurately. The most important point of processing the fictional text using NLP is that the text will be analyzed without adding any sentiments to it. 'Mahabharata' is the story orally often narrated and recreated across the world in different forms. Thus, humans have sentiments attached to them by default. So, to get the computational details about Mahabharata we used the elements of the NLP pipeline to answer the following questions which do not have any sentiment aspect attached with it. 1) How rich is 'Mahabharata' in terms of words? 2) Does the sentence length of 'Mahabharata' distribute normally across the whole literature? Apart from this, we are addressing the problem that how can we find the _most related_ word from such a large text without reading it. In this paper, we are approaching the NLP Pipeline followed by the language model which is searching for the most related words from the large text. ## II Literature Review Mahabharata is a treasure of life lessons. To make this treasure of life lessons understandable for common people, it is important to translate it into the local languages used by people in daily life. The first translation of Mahabharata was written in the Persian Language entitled 'Razmnameh' on the order of Mughal Emperor Akbar in the 18th Century Later on, this is followed by English, Hindi, and other regional Languages [1]. The literature is narrating the phenomena and story which is lived by more than 200 people [2] which has been redacted between 400BCE and 400CE [3]. we can see the glimpse of various events that occurred in past across India and even across the globe[4]. 
Among these chunks, the city Bishnupur in West Bengal, India is famous for its terra-cotta temples. These temple's walls are carved with terracotta panels describing various events from 'Mahabharata'. These images are captured and used as a 3D image dataset known as BHID (Bishnupur Heritage Image Dataset) for various computer vision applications. BHID is a dataset containing a total of 4233 images which is in the public domain and is considered a central resource for research related to digital heritage [5]. The story of Mahabharata is retold in various art forms like plays, short stories, paintings, poems such as 'Kiratarjunyam' to make people understand the right ethics that not to make difference between 'High-man' and 'Lowman' where 'Lord Shiva' himself described as 'Kirat' [6] and translated books in various Indian languages. Though the Orality affects the translation, according to paper [7] the translation of literature may be treated as an independent text because "A study of translation is a study of language" and Mahabharata is retold in various Indian languages which can be a free translation or a literal translation. Here the difference between free and literal translation comes up in the picture because of the orality. Between these all forms of art, a unique art called 'Wayang (leather puppets)' is famous for recreating the Mahabharata story in Bali - Indonesia. Sudiamika (et al., 2021) and fellow researchers have classified the 'Mahabharata Events' presented in this art form. They used the R-CNN algorithm to achieve the recognization of events and the characters such as 'Wayang Arjuna' and 'Wayang Yudhistira' (2018). People are always interested in hearing or watching fictional or fantasy stories. Thus, stories inside Mahabharata are always being attractive for creative people. This Epic is even inspiring for the technologist to create various taxonomies for the fictional domain (TiFI) (Beng et al., 2019) and launch 'ENTYFI' - the first technique for typing entities in fictional text. This 5-steps technique is useful to generate supervise fiction typing, supervised real-world typing and unsupervised typing (Beng et al., 2019). A large number of events and characters in the epic is also useful for Ontology (a knowledge representation structure). In the current scenario, the web resources are more explored for ontology enrichment rather than the question-answer-pair (QA-pair). Authors in paper (Beng et al., 2019) applied such QA-pair on the 'Mahabharata Domain' to convert them into potential triples (subject, predicate, and object) and identify the triples which are new, more precise, and related to the domain for ontology enrichment in literature. During an ACM conference on 'Data and Application Security and Privacy (CODASPY) in a panel session prof. Rakesh Varma compared the data security issues with the Mahabharata War. He mentioned that in this world of data we are facing an untold war where attackers are motivated and working more sophisticatedly rather than we are fragmented with our data (Beng et al., 2019). Apart from the angles like literature, Security, Technology, Digital Heritage Research, text generation, or literature of translations, Mahabharata is referred for the analysis of the 'Ludo Game' played on an android device. The analysis of different nine games concluded that the face of the dice is not equally distributed. Thus, the dice is biased and the dice algorithm is designed in such a way to make the game closer for an exciting experience for the users. 
Authors in (Beng et al., 2019) took the context from the history as well that the ludo game is inspired from that game called 'Pachishi' which is similar to the game played in Mahabharata called 'Chaupar'. While looking at the various aspect of Mahabharata, excluding the psychological aspects is not possible. Authors in paper (Aguilar et al., 2019) has explored the evidence for the most fundamental metaphor used for the mind - "The Mind is a Container" in Indian Epic 'Mahabharata' and 'Ramayana' plus the Greek Epic poem 'Homer and Hesiod' to traverse the cognitive phenomena in the epic literature. This study provides many uncommon aspects of our mental life. The description of the concept of the mind container is elaborated on the base of the epics by (a) Ascription, Location, the content of the mind container, (b) Scope of mind container concerning consciousness and memory, (c) control over the content and (d) functions of the mind container. Mahabharata Wiki Article is featured in the 100 most viewed Wikipedia article list. It is easy to give the context of the literature to people who belong to different domains. Thus, it is important to have a computational, analytical, and sentimental analysis of the text to get meaningful insights (Aguilar et al., 2019) In (Aguilar et al., 2019), the authors have derived interesting insights from the English translation of Mahabharata (Aguilar et al., 2019) by applying Pre-processing, POS tagging, Co-occurrence analysis, sentiment analysis of text and characters, and emotional analysis. The Insights which are given about the character and phenomena are versatile enough to use in different domains. According to (Aguilar et al., 2019) paper, The important characters of the epic Arjuna and Bheema had a common struggle and they trust each other abilities more intensely, this is also derived by (Aguilar et al., 2019) in the sentimental analysis across the text that "Arjuna and Bheema faced more negativity around them". In paper (Aguilar et al., 2019), the author brings the concept of considering the human values while designing the AI Agents. The similarity between humans and AI agents is positively correlated to the trust factor. Thus, inspired from the story of 'Challenging the powerful kings like Jarasandha and Chitrasena by arjuna and Bheema' can help AI-Agent developers to involve Value similarity in the outline of AI-Agent development. Apart from the technology development, the treatise has relevance to the modern society and is helpful to derive management lessons such as Strategic Management, Creation and relation with powerful friends and Allies, Effective Leadership Style, Successful Team Building, Shared goal and Ownership of the Goal, Commitment to the Goal, Role Clarity, Understanding the ground realities and Empowering Women (Aguilar et al., 2019). The most important part of this Epic is 'Bhagavad Gita' said during Bhishma Parva also gives lessons of intrapersonal skills like Self-development, sublimation/management of the physical dimensions, sublimation/management of the psychological dimensions, Deontology, desire management, anger management, mind management, Emotional Stability, Fear Management, self-motivation, Empathy, and social welfare (Aguilar et al., 2019). This epic gives the zoom version of the art of concentration with the lifespan of Arjuna. 
Different events can lead us to derive the factors which can be considered for concentration, like enthusiasm, dedication, aptitude, emotional or physical state, and environment (Aguilar et al., 2019). The Epic context has been shaping the thinking of society over the centuries, and this is reflected in our modern literature for children and adults. The stories derived from the epic show disability as a curse or sin, but modern literature shows the positivity and power of disability and portrays the usefulness of disabled people to society. In the context of Mahabharata, the approach towards disability may fall under the bucket of "Don'ts" (Aguilar et al., 2019). ## III Methodology This paper aims to carve out non-semantic, statistical, and computational insights, along with finding the most relevant words, from the largest Indian Epic 'Mahabharata'. Figure 1 shows the NLP pipeline, which is defined to get robust results on the text. During this experiment, "The Mahabharata of Krishna Dwaipayana Vyasa - The English translation (1886-1889) by Kesri Mohan Ganguli" is used in '.EPUB' format as the dataset. ### _.EPUB file conversion into a data structure_ The '.EPUB' (electronic publication) format is a very popular e-book format in digital documentation. This format is not only useful for reading e-books on multiple devices such as Android/Mac mobiles, tablets, laptops, or desktops; these files are also useful for text processing. The EPUB format is released as an archive file built on XHTML. The tag format of XHTML can be flattened into any machine-readable data structure. Here the whole e-book is converted into a Python list data structure. As shown in Figure 2, the 'Mahabharata' e-book is divided into a sequential data structure. During conversion, the newline is converted into '\n' and the page break is converted into '\o'. Apart from this, we have some unwanted elements such as commas (,), semicolons (;), and apostrophe-s ('s). ### _Text Cleaning_ The Mahabharata story contains many punctuation marks which are important for humans to understand the sentiments, but they are not useful for the machine. Text cleaning addresses the problem of handling these unwanted elements. Using the Python libraries 're' (regular expressions) and 'string', the redundant elements such as commas (,), semicolons (;), and apostrophe-s ('s) are removed from the whole text, and the text is then stored as a unit string. In the general case, the full stop (.) is also removed during text cleaning of the dataset, but the next step of the pipeline, tokenization, requires the full stop. The reason behind keeping the full stop is to define the end of the sentences. After tokenization, we can find the number of words in each sentence, which gives the sentence-length distribution. ### _Tokenization_ The concept of dividing a text document into small snippets is known as tokenization. Tokenization can be applied in two different ways on a text document: (a) sentence tokenization and (b) word tokenization. These can generate a bunch of sentences, words, phrases, tokens, or symbols [21]. Usually, tokenization is applied as a primary and conventional text-preprocessing step in an NLP pipeline. In the text preprocessing of the Mahabharata, we used the Natural Language Toolkit (NLTK) sent_tokenize() method to divide the whole text into sentences. The whole Mahabharata is divided into 1,30,700 sentences of variable lengths.
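As an illustrative sketch of the cleaning and sentence-tokenization steps just described (the exact cleaning rules used here may differ slightly, and NLTK's punkt model is assumed to be installed):
```
import re
import string
from nltk.tokenize import sent_tokenize  # requires nltk.download('punkt')

def clean_text(raw_text):
    # Flatten line breaks and drop possessive 's, as described above.
    text = raw_text.replace("\n", " ")
    text = re.sub(r"'s\b", "", text)
    # Remove punctuation except the full stop, which is needed for sentence splitting.
    drop = "".join(ch for ch in string.punctuation if ch != ".")
    text = text.translate(str.maketrans("", "", drop))
    return re.sub(r"\s+", " ", text).strip()

raw_text = "This is a sample string. It stands in for the e-book text."  # placeholder
cleaned = clean_text(raw_text)
sentences = sent_tokenize(cleaned)  # the full epic yields 1,30,700 sentences
```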
The length distribution is described in Figure 4. As shown in Figure 4, most of the sentences have a length between 20 and 70 words. Very few sentences have a length of less than 20 words, and some outliers have much higher lengths; for example, the sentence at index 121306 has a length of 1850 words. The text is not only divided into chunks of sentences but also into unit words to add more granularity to the text preprocessing. This is achieved using the technique called word tokenization. Here we used the Natural Language Toolkit word_tokenize() method, which divides the whole Mahabharata text into 27,49,461 uni-grams (single words). ### _Text Normalization_ Human-written text includes function words and content words. Such text data, especially fictional text, combines all the grammatical ups and downs and therefore has high randomness. To reduce the randomness of the text while maintaining its significant meaning, text normalization can be performed on the whole text. On the Mahabharata text, we apply two popular techniques, stemming and lemmatisation. These tasks are used in the NLP pipeline to transform the fictional text into the standard form of the language. Both of these tasks are accompanied by removing stop words from the text. In the Mahabharata text, many words do not have critical significance but are used with high frequency throughout the whole epic to form correct grammar. These words are not useful for improving the performance of any language model, and they also take computational time in the further analysis process. These words do not carry any information in terms of sentiment analysis either. So, it is advisable to remove stop words (words like a, an, the, are, have, etc.) along with the text normalization tasks. #### III-C1 Stemming with Stop Words One word with the same semantic meaning can be written in many forms in human language. Stemming is a technique that removes the affixes and suffixes attached to a word and tries to bring out the stem word or root word. Among popular stemming techniques like the Lancaster stemmer, Porter stemmer, and Snowball stemmer, we used the Porter stemmer to get the root words of the whole Mahabharata text. #### III-C2 Lemmatisation with stop words The process of lemmatisation is designed for the same purpose as stemming: it also cuts words down to their root form. However, in lemmatisation, the inflection of the word is not simply broken off; instead, lexical knowledge is used to convert words into their base form. Thus, it preserves the sentiments of the text more strongly. Here we used the WordNetLemmatizer to achieve this task. The selection between stemming and lemmatisation can be made based on the dataset on which the language model is going to be built. The Mahabharata is a fictional text, and to extract features from this large epic, a strong sentimental hold on the text is required. Thus, based on the comparison of stemming and lemmatisation, we decided to build the language model on the lemmatised text. ### _The Language Model_ The second objective of this paper is to find similar words from the Mahabharata fictional text. So basically, we are targeting to implement a model which can process the text as illustrated in Figure 6. Here we have a large amount of fictional text which can be considered as unannotated data for training a model.
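A minimal sketch of this normalisation choice (stop-word removal followed by WordNet lemmatisation), producing the token lists on which the language model below is trained, is given next; it assumes the NLTK stopwords, wordnet, and punkt resources and the `sentences` list from the tokenization step.
```
from nltk.corpus import stopwords          # requires nltk.download('stopwords')
from nltk.stem import WordNetLemmatizer    # requires nltk.download('wordnet')
from nltk.tokenize import word_tokenize    # requires nltk.download('punkt')

STOP_WORDS = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def normalise(sentence):
    # Lowercase, tokenize, keep alphabetic tokens, drop stop words, lemmatise.
    tokens = word_tokenize(sentence.lower())
    return [lemmatizer.lemmatize(t) for t in tokens
            if t.isalpha() and t not in STOP_WORDS]

# `sentences` is the sentence list produced by sent_tokenize() above.
corpus = [normalise(s) for s in sentences]
```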
According to [21], word2vec is a well-liked model to apply to such unannotated data. Word2vec is a combination of two different algorithms applied together on a corpus. These two algorithms are known as CBOW (Continuous Bag of Words) and Skip-Gram. The model is built with three layers: (a) an input layer, (b) a single hidden layer, and (c) an output layer. The input layer consists of a set of neurons whose size equals the total number of words in the vocabulary. This vocabulary is specifically built according to the corpus. In this paper, our corpus is the book "Mahabharata", and the vocabulary created from this text (figure 7) contains 25794 words. Fig. 1: Text pre-processing pipeline on the Mahabharata. Fig. 2: The Mahabharata in a list structure. Fig. 3: The Mahabharata after text cleaning, stored as a unit string. The size of the single hidden layer is equal to the dimensionality of the resulting word vector. Here we trained a word2vec model to get 100-dimensional resultant vectors, so the size of the hidden layer is 100. The output layer has the same size as the input layer. Consider 'V' words in the vocabulary (where V = 25794) and let 'N' be the dimension of the resultant vector (where N = 100). The connections from the input layer to the hidden layer can then be represented by a WI matrix of shape V \(\times\) N, where each row represents a word of the vocabulary and each column a dimension of the resultant vector. Likewise, the connections from the hidden layer to the output layer can be represented by a WO matrix of shape N \(\times\) V, where each row represents a dimension of the resultant vector and each column a word of the vocabulary. Considering the sample corpus shown in figure 8, the vocabulary created from this corpus can be represented as follows: \[\text{Vocabulary}_{\text{s}}=\{\text{'one' : 0, 'day' : 1, 'wait' : 2, 'upon' : 3, 'wrathful' : 4, 'ascetic' : 5, 'rigid' : 6, 'vow' : 7, 'durvasa' : 8, 'name' : 9, 'acquainted' : 10, 'truth' : 11, 'fully' : 12, 'conversant' : 13, 'mystery' : 14, 'religion' : 15, 'pritho' : 16, 'possible' : 17, 'care' : 18, 'gratified' : 19, 'rish' : 20, 'soul' : 21, 'complete' : 22, 'control' : 23, 'holy' : 24, 'attention' : 25, 'bestowed' : 26, 'maiden' : 27, 'told' : 28, 'satisfied' : 29, 'fortunate' : 30, 'thee' : 31, '!' : 32}\] The sample corpus vocabulary has 33 words; it contains each unique word given in the sample corpus. So, there are 33 input neurons and 33 output neurons, and we have 100 neurons in the hidden layer. Thus, the connections between the input layer and the hidden layer can be represented as WI(33 \(\times\) 100), and the connections between the hidden layer and the output layer as WO(100 \(\times\) 33). Before we train the word2vec model, these matrices are initialized with small random numbers. Now, looking at the corpus, suppose we want the word2vec model to find the relationship between the words "durvasa" and "vow"; the word "durvasa" is known as the context, and "vow" is known as the target. These inputs are multiplied with the randomly initialized WI(33 \(\times\) 100) matrix on the way to the hidden layer, and the output at the hidden layer is then multiplied with the WO(100 \(\times\) 33) matrix on the way to the output layer. The target of this model is to compute probabilities for words at the output layer. This is achieved in word2vec through the softmax function. The idea behind using word2vec is that the model represents words by vectors of numbers. In our case, we provide the target word as input to the model; it computes the cosine similarities with all other words available in the vocabulary and returns the top \(n\) words as output.
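For illustration, a minimal sketch of this setup using the gensim implementation of word2vec is shown below; the window size, minimum count, and skip-gram choice are assumptions rather than settings reported here, and `corpus` is the lemmatised token list built earlier.
```
# Sketch: train 100-dimensional word2vec vectors with gensim (4.x API assumed)
# on the preprocessed corpus, then query related words by cosine similarity.
from gensim.models import Word2Vec

model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=2, sg=1, workers=4)
print(model.wv.most_similar("arjuna", topn=5))
```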
## IV Results After text preprocessing, we applied word2vec on the corpus with a vocabulary of 25794 words. We considered 'Hastinapur (location)', 'Arjuna (protagonist)', 'Gandiva (object)', 'Shakuni (protagonist)', 'dice (object)', 'Krishna (protagonist)', and 'Siva (as character)' as target words. These words were selected based on the popularity of the protagonists, locations, and objects covered in the Mahabharata. The vector representation of a word is illustrated in figure 9. The sample target words with their similar words, along with the cosine similarity between target word and context word, are shown in Table 1. _(Here we consider the top five similar words.)_ ## V Conclusion In this paper, an NLP-based experiment on the Mahabharata is carried out at a basic level. We trained the word2vec model on the corpus to get the most similar words from the text itself. The reason behind selecting word2vec is that it deals easily with high-dimensional word vectors. In this paper, we cover basic aspects like the uni-gram vocabulary, the sentence distribution, the 100-dimensional vector representation, and word similarities. ## VI Future Scope In the current scenario we also have similar models like 'GloVe' and 'fastText', which we are targeting to apply and compare the results. The comparison of these models will bring a robust argument about which model gives the best results on fictional text. Apart from the text similarity, we also aim to derive observations about each protagonist. (Fig. 4: Sentence Distribution of Mahabharata. A further figure shows the sentence at index 470 in its original, stemmed, and lemmatised forms.) These observations can be matched with human behavior with the help of a specific questionnaire based on organizational behavior. This can provide a profile of a person and their production capacity in the working environment. Thus, the results of this paper can be mapped with future research to identify the professional perspective of a human personality based on the Mahabharata.
2310.17292
Boolean Abstractions for Realizability Modulo Theories (Extended version)
In this paper, we address the problem of the (reactive) realizability of specifications of theories richer than Booleans, including arithmetic theories. Our approach transforms theory specifications into purely Boolean specifications by (1) substituting theory literals by Boolean variables, and (2) computing an additional Boolean requirement that captures the dependencies between the new variables imposed by the literals. The resulting specification can be passed to existing Boolean off-the-shelf realizability tools, and is realizable if and only if the original specification is realizable. The first contribution is a brute-force version of our method, which requires a number of SMT queries that is doubly exponential in the number of input literals. Then, we present a faster method that exploits a nested encoding of the search for the extra requirement and uses SAT solving for faster traversing the search space and uses SMT queries internally. Another contribution is a prototype in Z3-Python. Finally, we report an empirical evaluation using specifications inspired in real industrial cases. To the best of our knowledge, this is the first method that succeeds in non-Boolean LTL realizability.
Andoni Rodriguez, Cesar Sanchez
2023-10-26T10:17:22Z
http://arxiv.org/abs/2310.17292v1
# Boolean Abstractions # Boolean Abstractions for Realizability Modulo Theories (Extended version) + Footnote †: This work was funded in part by the Madrid Regional Gov. Project “S2018/TCS-4339 (BLOQUES-CM)”, by PRODIGY Project (TED2021-132464B-I00) funded by MCIN/AEI/10.13039/501100011033/ and the European Union Next Generation EU/PRTR, and by a research grant from Nomadic Labs and the Tezos Foundation. Andoni Rodriguez 1IMDEA Software Institute, Madrid. Spain 12Universidad Politecnica de Madrid. Spain2 Cesar Sanchez 1IMDEA Software Institute, Madrid. Spain 1 Footnote 1: This work was funded in part by the Madrid Regional Gov. Project “S2018/TCS-4339 (BLOQUES-CM)”, by PRODIGY Project (TED2021-132464B-I00) funded by MCIN/AEI/10.13039/501100011033/ and the European Union Next Generation EU/PRTR, and by a research grant from Nomadic Labs and the Tezos Foundation. ###### Abstract In this paper, we address the problem of the (reactive) realizability of specifications of theories richer than Booleans, including arithmetic theories. Our approach transforms theory specifications into purely Boolean specifications by (1) substituting theory literals by Boolean variables, and (2) computing an additional Boolean requirement that captures the dependencies between the new variables imposed by the literals. The resulting specification can be passed to existing Boolean off-the-shelf realizability tools, and is realizable if and only if the original specification is realizable. The first contribution is a brute-force version of our method, which requires a number of SMT queries that is doubly exponential in the number of input literals. Then, we present a faster method that exploits a nested encoding of the search for the extra requirement and uses SAT solving for faster traversing the search space and uses SMT queries internally. Another contribution is a prototype in Z3-Python. Finally, we report an empirical evaluation using specifications inspired in real industrial cases. To the best of our knowledge, this is the first method that succeeds in non-Boolean LTL realizability. ## 1 Introduction Reactive synthesis [35, 34] is the problem of automatically producing a system that is guaranteed to model a given temporal specification, where the Boolean variables (i.e., atomic propositions) are split into variables controlled by the environment and variables controlled by the system. Realizability is the related decision problem of deciding whether such a system exists. These problems have been widely studied [24, 19], specially in the domain of Linear Temporal Logic (LTL) [33]. Realizability corresponds to infinite games where players alternatively choose the valuations of the Boolean variables they control. The winning condition is extracted from the temporal specification and determines which player wins a given play. A system is realizable if and only if the system player has a winning strategy, i.e., if there is a way to play such that the specification is satisfied in all plays played according to the strategy. However, in practice, many real and industrial specifications use complex data beyond Boolean atomic propositions, which precludes the direct use of realizability tools. These specifications cannot be written in (propositional) LTL, but instead use literals from a richer domain. We use LTL\({}_{\mathcal{T}}\) for the extension of LTL where Boolean atomic propositions can be literals from a (multi-sorted) first-order theory \(\mathcal{T}\). 
The \(\mathcal{T}\) variables (i.e., non-Boolean) in the specification are again split into those controlled by the system and those controlled by the environment. The resulting realizability problem also corresponds to infinite games, but, in this case, players chose valuations from the domains of \(\mathcal{T}\), which may be infinite. Therefore, arenas may be infinite and positions may have infinitely many successors. In this paper, we present a method that transforms a specification that uses data from a theory \(\mathcal{T}\) into an equi-realizable Boolean specification. The resulting specification can then be processed by an off-the-shelf realizability tool. The main element of our method is a novel _Boolean abstraction_ method, which allows to transform LTL\({}_{\mathcal{T}}\) specifications into pure (Boolean) LTL specifications. The method first substitutes all \(\mathcal{T}\) literals by fresh Boolean variables controlled by the system, and then extends the specification with an additional subformula that constrains the combination values of these variables. This method is described in Section 3. The main idea is that, after the environment selects values for its (data) variables, the system responds with values for the variables it controls, which induces a Boolean value for all the literals. The additional formula we compute captures the set of possible valuations of literals and the precise power of each player to produce each valuation. Example 1: Consider the following specification \(\varphi=\Box(R_{0}\wedge R_{1})\), where: \[R_{0}:(x<2)\to\bigcirc(y>1)\qquad\qquad R_{1}:(x\geq 2)\to(y<x)\] where \(x\) is a numeric variable that belongs to the environment and \(y\) to the system. In the game corresponding to this specification, each player has an infinite number of choices at each time step. For example, in \(\mathcal{T}_{\mathbb{Z}}\) (the theory of integers), the environment player chooses an integer for \(x\) and the system responds with an integer for \(y\). This induces a valuation of all literals in the formula, which in turn induces (also considering the valuations of the literals at other time instants, according to the temporal operators) a valuation of the full specification. In this paper, we exploit that, from the point of view of the valuations of the literals, there are only _finitely many_ cases and provide a systematic manner to compute these cases. This allows us to reduce a specification into a purely Boolean specification that is equi-realizable. This specification encodes the (finite) set of decisions of the environment, and the (finite) set of reactions of the system. Ex. 1 suggests a naive algorithm to capture the powers of the environment and system to determine a combination of the valuations of the literals, by enumerating all these combinations and checking the validity of each potential reaction. Checking that a given combination is a possible reaction requires an \(\exists^{*}\forall^{*}\) query (which can be delegated to an SMT solver for appropriate theories). In this paper, we describe and prove correct a Boolean abstraction method based on this idea. Then, we propose a more efficient search method for the set of possible reactions using SAT solving to speed up the exploration of the set of reactions. The main idea of this faster method is to learn from an invalid reaction which other reactions are guaranteed to be invalid, and from a valid reaction which other reactions are not worth being explored. 
We encode these learnt sets as a incremental SAT formula that allows to prune the search space. The resulting method is much more efficient than brute-force enumeration because, in each iteration, the learning can prune an exponential number of cases. An important technical detail is that computing the set of cases to be pruned from the outcome of a given query can be described efficiently using a SAT solver. In summary, our contributions are: (1) a proof that realizability is decidable for all LTL\({}_{\mathcal{T}}\) specifications for those theories \(\mathcal{T}\) with a decidable \(\exists^{*}\forall^{*}\) fragment; (2) a simple implementation of the resulting Boolean abstraction method; (3) a much faster method based on a nested-SAT implementation of the Boolean abstraction method that efficiently explores the search space of potential reactions; and (4) an empirical evaluation of these algorithms, where our early findings suggest that Boolean abstractions can be used with specifications containing different arithmetic theories, and also with industrial specifications. We used Z3 [12] both as an SMT solver and a SAT solver, and Strix [31] as the realizability checker. To the best of our knowledge, this is the first method that succeeds (and efficiently) in non-Boolean LTL realizability. ## 2 Preliminaries We study realizability of LTL [33, 29] specifications. The syntax of LTL is: \[\varphi::=T\,\big{|}\,a\,\big{|}\,\varphi\vee\varphi\,\big{|}\,\neg\varphi\, \big{|}\,\bigcirc\varphi\,\big{|}\,\varphi\,\mathcal{U}\,\varphi\] where \(a\) ranges from an atomic set of proposition \(\mathsf{AP}\), \(\vee\), \(\wedge\) and \(\neg\) are the usual Boolean disjunction, conjunction and negation, and \(\bigcirc\) and \(\mathcal{U}\) are the next and until temporal operators. The semantics of LTL associate traces \(\sigma\in\Sigma^{\omega}\) with formulae as follows: \[\begin{array}{lcl}\sigma&\models&T&\text{always}\\ \sigma&\models&a&\text{iff}&a\in\sigma(0)\\ \sigma&\models&\varphi_{1}\vee\varphi_{2}&\text{iff}&\sigma\models\varphi_{1} \text{ or }\sigma\models\varphi_{2}\\ \sigma&\models&\neg\varphi&\text{iff}&\sigma\not\models\varphi\\ \sigma&\models&\bigcirc\varphi&\text{iff}&\sigma^{1}\models\varphi\\ \sigma&\models&\varphi_{1}\,\mathcal{U}\,\varphi_{2}&\text{iff}&\text{ for some }i\geq 0\;\;\sigma^{i}\models\varphi_{2},\text{ and for all }0\leq j<i,\sigma^{j}\models\varphi_{1}\end{array}\] We use common derived operators like \(\vee\), \(\mathcal{R}\), \(\Diamond\) and \(\Box\). Reactive synthesis [37, 32, 6, 16, 5] is the problem of producing a system from an LTL specification, where the atomic propositions are split into propositions that are controlled by the environment and those that are controlled by the system. Synthesis corresponds to a turn-based game where, in each turn, the environment produces values of its variables (inputs) and the system responds with values of its variables (outputs). A play is an infinite sequence of turns. The system player wins a play according to an LTL formula \(\varphi\) if the trace of the play satisfies \(\varphi\). A (memory-less) strategy of a player is a map from positions into a move for the player. A play is played according to a strategy if all the moves of the corresponding player are played according to the strategy. A strategy is winning for a player if all the possible plays played according to the strategy are winning. Depending on the fragment of LTL used, the synthesis problem has different complexities. 
The method that we present in this paper generates a formula in the same temporal fragment as the original formula (e.g., starting from a safety formula another safety formula is generated). The generated formula is discharged into a solver capable to solve formulas in the right fragment. For simplicity in the presentation, we illustrate our method with safety formulae. We use \(\mathrm{LTL}_{\mathcal{T}}\) as the extension of LTL where propositions are replaced by literals from a first-order theory \(\mathcal{T}\). In realizability for \(\mathrm{LTL}_{\mathcal{T}}\), the variables that occur in the literals of a specification \(\varphi\) are split into those variables controlled by the environment (denoted by \(\overline{v}_{e}\)) and those controlled by the system \((\overline{v}_{s})\), where \(\overline{v}_{e}\cap\overline{v}_{s}=\emptyset\). We use \(\varphi(\overline{v}_{e},\overline{v}_{s})\) to remark that \(\overline{v}_{e}\cup\overline{v}_{s}\) are the variables occurring in \(\varphi\). The alphabet \(\Sigma_{\mathcal{T}}\) is now a valuation of the variables in \(\overline{v}_{e}\cup\overline{v}_{s}\). A trace is an infinite sequence of valuations, which induces an infinite sequence of Boolean values of the literals occurring in \(\varphi\) and, in turn, a valuation of the temporal formula. Realizability for \(\mathrm{LTL}_{\mathcal{T}}\) corresponds to an infinite game with an infinite arena where positions may have infinitely many successors if the ranges of the variables controlled by the system and the environment are infinite. For instance, in Ex. 1 with \(\mathcal{T}=\mathcal{T}_{\mathbb{Z}}\), valuation ranges over infinite values, and literal (\(x\geq 2\)) can be satisfied with \(x=2\), \(x=3\), etc. Arithmetic theories are a particular class of first-order theories. Even though our Boolean abstraction technique is applicable to any theory with a decidable \(\exists^{*}\forall^{*}\) fragment, we illustrate our technique with arithmetic specifications. Concretely, we will consider \(\mathcal{T}_{\mathbb{Z}}\) (i.e., linear integer arithmetic) and \(\mathcal{T}_{\mathbb{R}}\) (i.e., non-linear real arithmetic). Both theories have a decidable \(\exists^{*}\forall^{*}\) fragment. Note that the choice of the theory influences the realizability of a given formula. Example 2: Consider Ex. 1. The formula \(\varphi:=R_{0}\)\(\wedge\)\(R_{1}\) is not realizable for \(\mathcal{T}_{\mathbb{Z}}\), since, if at a given instant \(t\), the environment plays \(x=0\) (and hence \(x<2\) is true), then \(y\) must be greater than \(1\) at time \(t+1\). Then, if at \(t+1\) the environment plays \(x=2\) then (\(x\geq 2\)) is true but there is no \(y\) such that both (\(y>1\)) and (\(y<2\)). However, for \(\mathcal{T}_{\mathbb{R}}\), \(\varphi\) is realizable (consider the system strategy to always play \(y=1.5\)). The following slight modifications of Ex. 1 alters its realizability (\(R_{1}^{\prime}\) substitutes \(R_{1}\) by having the \(\mathcal{T}\)-predicate \(y\leq x\) instead of \(y<x\)): \[R_{0}:(x<2)\to\bigcirc(y>1)\qquad\qquad R_{1}^{\prime}:(x\geq 2)\to(y\leq x)\] Now, \(\varphi^{\prime}=\square(R_{0}\wedge R_{1}^{\prime})\) is realizable for both \(\mathcal{T}_{\mathbb{Z}}\) and \(\mathcal{T}_{\mathbb{R}}\), as the strategy of the system to always pick \(y=2\) is winning in both theories. ## 3 Boolean Abstraction We solve the realizability problem modulo theories by transforming the specification into an equi-realizable Boolean specification. 
Given a specification \(\varphi\) with literals \(l_{i}\), we get a new specification \(\varphi[l_{i}\gets s_{i}]\wedge\square\varphi^{\mathit{extra}}\), where \(s_{i}\) are fresh Boolean variables and \(\varphi^{\mathit{extra}}\in\mathrm{LTL}_{\mathbb{B}}\) is a Boolean formula (without temporal operators). The additional sub-formula \(\varphi^{\mathit{extra}}\) uses the freshly introduced variables \(s_{i}\) controlled by the system, as well as additional Boolean variables controlled by the environment \(\overline{e}\), and captures the precise combined power of the players to decide the valuations of the literals in the original formula. We call our approach _Booleanization_ or _Boolean abstraction_. The approach is summarized in Fig. 1 (the tool chain with the correctness argument): given an LTL specification \(\varphi_{\mathcal{T}}\), it is translated into a Boolean \(\varphi_{\mathbb{B}}\) which can be analyzed with off-the-shelf realizability checkers. Note that \(\mathcal{G}^{\mathsf{B}}\) and \(\mathcal{G}^{\mathcal{T}}\) are the games constructed from specifications \(\varphi_{\mathbb{B}}\) and \(\varphi_{\mathcal{T}}\), respectively. Also, note that [23] shows that we can construct a game \(\mathcal{G}\) from a specification \(\varphi\) and that \(\varphi\) is realizable if and only if \(\mathcal{G}\) is winning for the system. The Booleanization procedure constructs an extra requirement \(\varphi^{\mathit{extra}}\) and conjoins \(\square\varphi^{\mathit{extra}}\) with the formula \(\varphi[l_{i}\gets s_{i}]\). In a nutshell, after the environment chooses a valuation of the variables it controls (including \(\overline{e}\)), the system responds with valuations of its variables (including \(s_{i}\)), which induces a Boolean value for all literals. Therefore, for each possible choice of the environment, the system has the power to choose a Boolean response among a specific collection of responses (a subset of all the possible combinations of Boolean valuations of the literals). Since the set of all possible responses is finite, so are the different cases. The extra requirement captures precisely the finite collection of choices of the environment and the resulting finite collection of responses of the system for each case. ### Notation In order to explain the construction of the extra requirement, we introduce some preliminary definitions. We will use Ex. 1 as the running example. A literal is an atom or its negation, regardless of whether the atom is a Boolean variable or a predicate of a theory. Let \(\mathit{Lit}(\varphi)\) be the collection of literals that appear in \(\varphi\) (or \(\mathit{Lit}\), if the formula is clear from the context). For simplicity, we assume that all literals belong to the same theory, but each theory can be Booleanized in turn, as each literal belongs to exactly one theory and we assume in this paper that literals from different theories do not share variables. We will use \(\overline{x}\) for the environment-controlled variables occurring in \(\mathit{Lit}(\varphi)\) and \(\overline{y}\) for the variables controlled by the system. In Ex. 1, we first translate the literals in \(\varphi\). Since \((x<2)\) is equivalent to \(\neg(x\geq 2)\), we use a single Boolean variable for both.
The substitutions is: \[\begin{array}{ll}(x<2)\gets s_{0}&(y>1)\gets s_{1}&(y<x)\gets s _{2}\\ (x\geq 2)\leftarrow\neg s_{0}&(y\leq 1)\leftarrow\neg s_{1}&(y\geq x) \leftarrow\neg s_{2}\end{array}\] After the substitution we obtain \(\varphi^{\prime\prime}=\Box(R_{0}^{\mathbb{B}}\wedge R_{1}^{\mathbb{B}})\) where \[R_{0}^{\mathbb{B}}:s_{0}\rightarrow\bigcirc s_{1}\qquad\qquad R_{1}^{\mathbb{ B}}:\neg s_{0}\to s_{2}\] Note that \(\varphi^{\prime\prime}\) may not be equi-realizable to \(\varphi\), as we may be giving too much power to the system if \(s_{0}\), \(s_{1}\) and \(s_{2}\) are chosen independently without restriction. Note that \(\varphi^{\prime\prime}\) is realizable, for example by always choosing \(s_{1}\) and \(s_{2}\) to be true, but \(\varphi\) is not realizable in \(\mathit{LTL}_{\mathcal{T}_{\mathbb{Z}}}\). This justifies the need of an extra sub-formula. Definition 1 (Choice): A choice \(c\subseteq\mathit{Lit}(\varphi)\) is a subset of the literals of \(\varphi\). The intended meaning of a choice is to capture what literals are true in the choice, while the rest (i.e., \(\mathit{Lit}\setminus c\)) are false. Once the environment picks values for \(\overline{x}\), the system can realize some choice \(c\) by selecting \(\overline{y}\) and making the literals in \(c\) true (and the rest false). However, for some values of \(\overline{x}\), some choices may not be possible for the system for any \(\overline{y}\). Given a choice \(c\), we use \(f(c(\overline{x},\overline{y}))\) to denote the formula: \[\bigwedge_{l\in c}l\land\bigwedge_{l\notin c}\neg l\] which is a formula with variables \(\overline{x}\) and \(\overline{y}\) that captures logically the set of values of \(\overline{x}\) and \(\overline{y}\) that realize precisely choice \(c\). We use \(\mathcal{C}\) for the set of choices. Note that there are \(|\mathcal{C}|=2^{|\mathit{Lit}|}\) different choices. We call the elements of \(\mathcal{C}\) choices because they may be at the disposal of the system to choose by picking the right values of its variables. A given choice \(c\) can act as _potential_ (meaning that the response is possible) or as _antipotential_ (meaning that the response is not possible). A potential is a formula (that depends only on \(\overline{x}\)) that captures those values of \(\overline{x}\) for which the system can respond and make precisely the literals in \(c\) true (and the rest of the literals false). The negation of the potential (i.e., an antipotential) captures precisely those values of \(\overline{x}\) for which there are no values of \(\overline{y}\) that lead to \(c\). Definition 2 (Potential and Antipotential): Given a choice \(c\), a potential is the following formula \(c^{p}\) and an antipotential is the following formula \(c^{a}\): \[c^{p}(\overline{x})=\exists\overline{y}.f(c(\overline{x},\overline{y})) c^{a}(\overline{x})=\forall\overline{y}.\neg f(c(\overline{x},\overline{y}))\] Example 3: We illustrate two choices for Ex. 1. Consider choices \(c_{0}=\{(x<2),(y>1),(y<x)\}\) and \(c_{1}=\{(x<2),(y>1)\}\). Choice \(c_{0}\) corresponds to \(f(c_{0})=(x<2)\wedge(y>1)\wedge(y<x)\), that is, literals \((x<2)\), \((y>1)\) and \((y<x)\) are true. Choice \(c_{1}\) corresponds to \(f(c_{1})=(x<2)\wedge(y>1)\wedge(y\geq x)\), that is, literals \((x<2)\) and \((y>1)\) being true and \((y<x)\) being false (i.e., \((y\geq x)\) being true). It is easy to see the meaning of \(c_{2}\), \(c_{3}\) etc. Then, the potential and antipotential formulae of e.g., choices \(c_{0}\) and \(c_{1}\) from Ex. 
1 are as follows: \[\begin{array}{l}c_{0}^{p}=\exists y.(x<2)\wedge(y>1)\wedge(y<x)\\ c_{1}^{p}=\exists y.(x<2)\wedge(y>1)\wedge(y\geq x)\end{array}\qquad c_{0}^{a} =\forall y.\neg\big{(}(x<2)\wedge(y>1)\wedge(y<x)\big{)}\\ c_{1}^{a}=\forall y.\neg\big{(}(x<2)\wedge(y>1)\wedge(y\geq x)\big{)}\end{array}\] Note that potentials and antipotentials have \(\overline{x}\) as the only free variables. Depending on the theory, the validity of potentials and antipotentials may be different. For instance, consider \(c_{0}^{p}\) and theories \(\mathcal{T}_{\mathbb{Z}}\) and \(\mathcal{T}_{\mathbb{R}}\): * In \(\mathcal{T}_{\mathbb{Z}}\): \(\exists y.(x<2)\wedge(y>1)\wedge(y<x)\) is equivalent to _false_. * In \(\mathcal{T}_{\mathbb{R}}\): \(\exists y.(x<2)\wedge(y>1)\wedge(y<x)\) is equivalent to \((x<2)\). These equivalences can be obtained using classic quantifier elimination procedures, e.g., with Cooper's algorithm [11] for \(\mathcal{T}_{\mathbb{Z}}\) and Tarski's method [36] for \(\mathcal{T}_{\mathbb{R}}\). A reaction is a description of the specific choices that the system has the power to choose. Definition 3 (Reaction): Let \(P\) and \(A\) be a partition of \(\mathcal{C}\) that is: \(P\subseteq\mathcal{C}\), \(A\subseteq\mathcal{C}\), \(P\cap A=\emptyset\) and \(P\cup A=\mathcal{C}\). The reaction \(\text{react}_{(P,A)}\) is as follows: \[\text{react}_{(P,A)}(\overline{x})\stackrel{{ def}}{{=}}\bigwedge_{c \in P}c^{p}\wedge\bigwedge_{c\in A}c^{a}\] The reaction \(\text{react}_{(P,A)}\) is equivalent to: \[\text{react}_{(P,A)}(\overline{x})=\bigwedge_{c\in P}\big{(}\exists\overline{ y}.f(c(\overline{x},\overline{y}))\big{)}\wedge\bigwedge_{c\in A}\big{(}\forall \overline{y}.\neg f(c(\overline{x},\overline{y}))\big{)}.\] There are \(2^{2^{\left\lfloor Lit\right\rfloor}}\) different reactions. A reaction \(r\) is called valid whenever there is a move of the environment for which \(r\) captures precisely the power of the system, that is exactly which choices the system can choose. Formally, a reaction is valid whenever \(\exists\overline{x}.r(\overline{x})\) is a valid formula. We use \(\mathcal{R}\) for the set of reactions and \(\mathit{VR}\) for the set of valid reactions. It is easy to see that, for all possible valuations of \(\overline{x}\) the environment can pick, the system has a specific power to respond (among the finitely many cases). Therefore, the following formula is valid: \[\varphi_{\mathit{VR}}=\forall\overline{x}.\bigvee_{r\in\mathit{VR}}r( \overline{x}).\] Example 4: In Ex. 1, for theory \(\mathcal{T}_{\mathbb{Z}}\), we find there are two valid reactions (using choices from Ex. 3): \[\begin{array}{l}r_{1}:\exists x.c_{0}^{a}\wedge c_{1}^{p}\wedge c_{2}^{p}\wedge c _{3}^{p}\wedge c_{4}^{a}\wedge c_{5}^{a}\wedge c_{6}^{a}\wedge c_{7}^{a}\\ r_{2}:\exists x.c_{0}^{a}\wedge c_{1}^{a}\wedge c_{2}^{a}\wedge c_{3}^{a}\wedge c _{4}^{a}\wedge c_{5}^{p}\wedge c_{6}^{p}\wedge c_{7}^{a},\end{array}\] where reaction \(r_{1}\) models the possible responses of the system after the environment picks a value for \(x\) with \((x<2)\), whereas \(r_{2}\) models the responses to \((x\geq 2)\). 
On the other hand, for \(\mathcal{T}_{\mathbb{R}}\), there are three valid reactions: \[\begin{array}{l}r_{1}:\exists x.c_{0}^{a}\wedge c_{1}^{p}\wedge c_{2}^{p} \wedge c_{3}^{p}\wedge c_{4}^{a}\wedge c_{5}^{a}\wedge c_{6}^{a}\wedge c_{7}^{ a}\\ r_{2}:\exists x.c_{0}^{p}\wedge c_{1}^{p}\wedge c_{2}^{p}\wedge c_{3}^{a} \wedge c_{4}^{a}\wedge c_{5}^{a}\wedge c_{6}^{a}\wedge c_{7}^{a}\\ r_{3}:\exists x.c_{0}^{a}\wedge c_{1}^{a}\wedge c_{2}^{a}\wedge c_{3}^{a} \wedge c_{4}^{a}\wedge c_{5}^{p}\wedge c_{6}^{p}\wedge c_{7}^{a}\end{array}\] Note that there is one valid reaction more, since in \(\mathcal{T}_{\mathbb{R}}\) there is one more case: \(x\in(1,2]\). Also, note that \(c_{4}\) cannot be a potential in \(\mathcal{T}_{\mathbb{Z}}\) (not even with a collaboration between environment and system), whereas it can in \(\mathcal{T}_{\mathbb{R}}\). ### The Boolean Abstraction Algorithm Boolean abstraction is a method to compute \(\varphi_{\mathbb{B}}\) from \(\varphi_{\mathcal{T}}\). In this section we describe and prove correct a basic brute-force version of this method, and later in Section 4, we present faster algorithms. All Boolean abstraction algorithms that we present on this paper first compute the extra requirement, by visiting the set of reactions and computing a subset of the valid reactions that is sufficient to preserve realizability. The three main building blocks of our algorithms are (1) the stop criteria of the search for reactions; (2) how to obtain the next reaction to consider; and (3) how to modify the current set of valid reactions (by adding new valid reactions to it) and the set of remaining reactions (by pruning the search space). Finally, after the loop, the algorithm produces as \(\varphi^{\mathit{extra}}\) a conjunction of cases, one per valid reaction \((P,A)\) in _VR_. ``` 1 Input: \(\varphi_{\mathcal{T}}\) 2\(\varphi^{\prime}\leftarrow\varphi_{\mathcal{T}}[l_{i}\gets s_{i}]\)\(\mathit{VR}\leftarrow\{\}\)\(\mathcal{C}\leftarrow\mathit{choices}(\mathit{literals}(\varphi_{\mathcal{T}}))\)\(\mathcal{R}\gets 2^{\mathcal{C}}\)for\((P,A)\in\mathcal{R}\)do 3if\(\exists\overline{x}.\mathit{react}_{(P,A)}(\overline{x})\)then 4\(\mathit{VR}\leftarrow\mathit{VR}\cup\{(P,A)\}\) 5\(\varphi^{\mathit{extra}}\leftarrow\mathit{getExtra}(\mathit{VR})\)return\(\varphi^{\prime}\wedge\square(A\rightarrow\varphi^{\mathit{extra}})\) ``` **Alg. 1:**Brute-force We introduce a fresh variable \(e_{(P,A)}\), controlled by the environment for each valid reaction \((P,A)\), to capture that the environment plays values for \(\overline{x}\) that correspond to the case where the system is left with the power to choose captured precisely by \((P,A)\). Therefore, there is one additional environment Boolean variable per valid reaction (in practice we can enumerate the number of valid reactions and introduce only a logarithmic number of environment variables). Finally, the extra requirement uses \(P\) for each valid reaction \((P,A)\) to encode the potential moves of the systems as a disjunction of the literals described by each choice in \(P\). Each of these disjunction contains precisely the combinations of literals that are possible for the concrete case that \((P,A)\) captures. A brute-force algorithm that implements Boolean abstraction method by exhaustively searching all reactions is shown in Alg 1. The building blocks of this algorithm are: 1. It stops when the remaining set of reactions is empty. 2. It traverses the set \(\mathcal{R}\) according to some predetermined order. 3. 
To modify the set of valid reactions, if \((P,A)\) is valid it adds \((P,A)\) to the set _VR_ (line 7). To modify the set of remaining reactions, it removes \((P,A)\) from the search. Finally, the extra sub-formula \(\varphi^{\mathit{extra}}\) is generated by _getExtra_ (line 8) defined as follows: \[\mathit{getExtra}(\mathit{VR})=\bigwedge_{(P,A)\in\mathit{VR}}(e_{(P,A)}\to \bigvee_{c\in P}(\bigwedge_{l_{i}\in c}s_{i}\land\bigwedge_{l_{i}\notin c}\neg s _{i}))\] Note that there is an \(\exists^{*}\forall^{*}\) validity query in the body of the loop (line 6) to check whether the candidate reaction is valid. This is why decidability of the \(\exists^{*}\forall^{*}\) fragment is crucial because it captures the finite partitioning of the environment moves (which is existentially quantified) for which the system can react in certain ways (i.e., potentials, which are existentially quantified) by picking appropriate valuations but not in others (i.e., antipotentials, which are universally quantified). In essence, the brute-force algorithm iterates over all the reactions, one at a time, checking whether each reaction is valid or not. In case the reaction (characterized by the set of potential choices3) is valid, it is added to _VR_. Footnote 3: The potentials in a choice characterize the precise power of the system player, because the potentials correspond with what the system can respond. Example 5: Consider again the specification in Ex. 1, with \(\mathcal{T}_{\mathbb{Z}}\) as theory. Note that the valid reactions are \(r_{1}\) and \(r_{2}\), as shown in Ex. 4, where the potentials of \(r_{1}\) are \(\{c_{1},c_{2},c_{3}\}\) and the potentials of \(r_{2}\) are \(\{c_{5},c_{6}\}\). Now, the creation of \(\varphi^{\mathit{extra}}\) requires two fresh variables \(d_{0}\) and \(d_{1}\) for the environment (they correspond to environment decisions (\(x<2\)) and (\(x\geq 2\)), respectively), resulting into: \[\varphi^{\mathit{extra}}_{\mathcal{T}_{\mathbb{Z}}}:\left(\begin{array}{c}d_{ 0}\to\left((s_{0}\land s_{1}\land\neg s_{2})\lor(s_{0}\land\neg s_{1}\land s_ {2})\lor(s_{0}\land\neg s_{1}\land\neg s_{2})\right)\\ \land\\ d_{1}\to\left((\neg s_{0}\land s_{1}\land\neg s_{2})\lor(\neg s_{0}\land \neg s_{1}\land s_{2})\right)\end{array}\right)\] For example \(c_{2}=\{s_{0}\}\) is a choice that appears as potential in valid reaction \(r_{1}\), so it appears as a disjunct of \(d_{0}\) as \((s_{0}\land\neg s_{1}\land\neg s_{2})\). The resulting _Booleanized_ specification \(\varphi_{\mathbb{B}}\) is as follows: \[\varphi^{\mathbb{B}}_{\mathcal{T}_{\mathbb{Z}}}=(\varphi^{\prime\prime}\land \square(A_{\mathbb{B}}\to\varphi^{\mathit{extra}}_{\mathcal{T}_{\mathbb{Z}}}))\] Note that the Boolean encoding is extended with an assumption formula \(A_{\mathbb{B}}=(d_{0}\leftrightarrow\neg d_{1})\land(d_{0}\lor d_{1})\) that restricts environment moves to guarantee that exactly one environment decision variable is picked. Also, note that a Boolean abstraction algorithm will output three (instead of two) decisions for the environment, but we ackowledge that one of them will never be played by it, since it gives strictly more power to the system. The complexity of this brute-force Booleanization algorithm is doubly exponential in the number of literals. 
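The following sketch puts the pieces of Alg. 1 together for the running example: it enumerates all \(2^{2^{|Lit|}}\) reactions, issues one Z3 query per reaction for the \(\exists^{*}\forall^{*}\) validity check of line 6, and then assembles the extra requirement in the shape of Example 5. It is a brute-force illustration in Python, not the authors' implementation, and the string format of the output formula is ours.

```
from z3 import Int, And, Not, Exists, ForAll, Solver, sat

x, y = Int('x'), Int('y')
literals = [x < 2, y > 1, y < x]
n = len(literals)
choices = [frozenset(i for i in range(n) if (m >> i) & 1) for m in range(2 ** n)]

def f(c):
    return And(*[literals[i] if i in c else Not(literals[i]) for i in range(n)])

def is_valid_reaction(P, A):                  # decides: exists x . react_(P,A)(x)
    s = Solver()
    s.add(*[Exists([y], f(c)) for c in P], *[ForAll([y], Not(f(c))) for c in A])
    return s.check() == sat

VR = []                                       # valid reactions, kept as their sets of potentials
for mask in range(2 ** len(choices)):         # 2^(2^|Lit|) candidate reactions
    P = [c for i, c in enumerate(choices) if (mask >> i) & 1]
    A = [c for c in choices if c not in P]
    if is_valid_reaction(P, A):
        VR.append(P)

def get_extra(valid_reactions):               # one fresh environment variable e_k per valid reaction
    conjuncts = []
    for k, P in enumerate(valid_reactions):
        disj = ' | '.join('(' + ' & '.join((f's{i}' if i in c else f'!s{i}') for i in range(n)) + ')'
                          for c in P)
        conjuncts.append(f'(e{k} -> ({disj}))')
    return ' & '.join(conjuncts)

print(len(VR))   # 3 here: the two reactions of Ex. 4 plus the dominated one noted after Ex. 5
print(get_extra(VR))
```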
### From Local Simulation to Equi-Realizability The intuition about the correctness of the algorithm is that the extra requirement encodes precisely all reactions (i.e., collections of choices) for which there is a move of the environment that leaves the system with precisely that power to respond. As an observation, in the extra requirement, the set of potentials in valid reactions cannot be empty. This is stated in the following lemma. Lemma 1: _Let \(C\subseteq\mathcal{C}\) be such that \(react_{C}\in\) VR. Then \(C\neq\emptyset\)._ Proof: Bear in mind that \(\textit{react}_{C}\in\) VR is valid. Let \(\overline{v}\) be such that \(\textit{react}_{C}[\overline{x}\leftarrow\overline{v}]\) is valid. Let \(\overline{w}\) be an arbitrary valuation of \(\overline{y}\), and let \(c\) be the choice that contains exactly those literals \(l\) for which \(l[\overline{x}\leftarrow\overline{v},\overline{y}\leftarrow\overline{w}]\) is true. Then the following formula holds under \([\overline{x}\leftarrow\overline{v},\overline{y}\leftarrow\overline{w}]\): \[\bigwedge_{l[\overline{x}\leftarrow\overline{v},\overline{y}\leftarrow\overline{w}]\text{ is true}}l\;\wedge\bigwedge_{l[\overline{x}\leftarrow\overline{v},\overline{y}\leftarrow\overline{w}]\text{ is false}}\neg l\] It follows that \(\exists\overline{y}.f(c(\overline{x},\overline{y}))\) holds under \([\overline{x}\leftarrow\overline{v}]\), so \(c\in C\) and hence \(C\neq\emptyset\). This lemma is crucial, because it ensures that, once a Boolean abstraction algorithm is executed, for each fresh \(\overline{e}\) variable in the extra requirement there is at least one potential choice with which the system can respond. Therefore, in each position in the realizability game, the system can respond to moves of the environment, leading to precisely corresponding positions in the Boolean game. In turn, this leads to equi-realizability because each move can be simulated in the corresponding game. Concretely, it is easy to see that we can define a simulation between the positions of the games for \(\varphi_{\mathcal{T}}\) and \(\varphi_{\mathbb{B}}\) such that (1) each literal \(l_{i}\) and the corresponding variable \(s_{i}\) have the same truth value in related positions, (2) the extra requirement is always satisfied, and (3) moves of the system in each game from related positions can be mimicked in the other game. This is captured by the following theorem: Theorem 3.1: _System wins \(\mathcal{G}^{\mathcal{T}}\) if and only if System wins the game \(\mathcal{G}^{\mathbb{B}}\). Therefore, \(\varphi_{\mathcal{T}}\) is realizable if and only if \(\varphi_{\mathbb{B}}\) is realizable._ Proof: (Sketch). Since realizability games are memory-less determined, it is sufficient to consider only local strategies. Given a strategy \(\rho_{\mathbb{B}}\) that is winning in \(\mathcal{G}^{\mathbb{B}}\) we define a strategy \(\rho_{\mathcal{T}}\) in \(\mathcal{G}^{\mathcal{T}}\) as follows. Assuming related positions, \(\rho_{\mathcal{T}}\) moves in \(\mathcal{G}^{\mathcal{T}}\) to the successor that is related to the position where \(\rho_{\mathbb{B}}\) moves in \(\mathcal{G}^{\mathbb{B}}\). By (3) above, it follows that for every play played in \(\mathcal{G}^{\mathbb{B}}\) according to \(\rho_{\mathbb{B}}\) there is a play in \(\mathcal{G}^{\mathcal{T}}\) played according to \(\rho_{\mathcal{T}}\) that results in the same trace, and vice-versa: for every play played in \(\mathcal{G}^{\mathcal{T}}\) according to \(\rho_{\mathcal{T}}\) there is a play in \(\mathcal{G}^{\mathbb{B}}\) played according to \(\rho_{\mathbb{B}}\) that results in the same trace. Since \(\rho_{\mathbb{B}}\) is winning, so is \(\rho_{\mathcal{T}}\).
The other direction follows similarly, because again \(\rho_{\mathbb{B}}\) can be constructed from \(\rho_{\mathcal{T}}\) not only guaranteeing the same valuation of literals and corresponding variables, but also that the extra requirement holds in the resulting position. The following corollary of Thm. 3.1 follows immediately. Theorem 3.2: _Let \(\mathcal{T}\) be a theory with a decidable \(\exists^{*}\forall^{*}\)-fragment. Then, \(\textsc{LTL}_{\mathcal{T}}\) realizability is decidable._ ## 4 Efficient algorithms for Boolean Abstraction ### Quasi-reactions The basic algorithm presented in Section 3 exhaustively traverses the set of reactions, one at a time, checking whether each reaction is valid. Therefore, the body of the loop is visited \(2^{|\mathcal{C}|}\) times. In practice, the running time of this basic algorithm quickly becomes unfeasible. We now improve Alg. 1 by exploiting the observation that every SMT query for the validity of a reaction reveals information about the validity of other reactions. We will exploit this idea by learning uninteresting subsequent sets of reactions and pruning the search space. The faster algorithms that we present below encode the remaining search space using a SAT formula, whose models are further reactions to explore. To implement the learning-and-pruning idea we first introduce the notion of quasi-reaction. Definition 4 (Quasi-reaction): A quasi-reaction is a pair \((P,A)\) where \(P\subseteq\mathcal{C}\), \(A\subseteq\mathcal{C}\) and \(P\cap A=\emptyset\). Quasi-reactions remove from reactions the constraint that \(P\cup A=\mathcal{C}\). A quasi-reaction represents the set of reactions that would be obtained from choosing the remaining choices that are neither in \(P\) nor in \(A\) as either potential or antipotential. The set of quasi-reactions is: \[\mathcal{Q}=\{(P,A)|P,A\subseteq\mathcal{C}\text{ and }P\cap A=\emptyset\}\] Note that \(\mathcal{R}=\{(P,A)\in\mathcal{Q}|P\cup A=\mathcal{C}\}\). Example 6: Consider a case with four choices \(c_{0}\), \(c_{1}\), \(c_{2}\) and \(c_{3}\). The quasi-reaction \((\{c_{0},c_{2}\},\{c_{1}\})\) corresponds to the following formula: \[\exists\overline{x}.\ \big{(}\exists\overline{y}.\ f(c_{0}(\overline{x}, \overline{y}))\wedge\forall\overline{y}.\ \neg f(c_{1}(\overline{x},\overline{y}))\wedge\exists \overline{y}.\ f(c_{2}(\overline{x},\overline{y}))\big{)}\] Note that nothing is stated in this quasi-reaction about \(c_{3}\) (it neither acts as a potential nor as an antipotential). Consider the following order between quasi-reactions: \((P,A)\preceq(P^{\prime},A^{\prime})\) holds if and only if \(P\subseteq P^{\prime}\) and \(A\subseteq A^{\prime}\). It is easy to see that \(\preceq\) is a partial order, that \((\emptyset,\emptyset)\) is the lowest element and that for every two elements \((P,A)\) and \((P^{\prime},A^{\prime})\) there is a greatest lower bound (namely \((P\cap P^{\prime},A\cap A^{\prime})\)). Therefore \((P,A)\sqcap(P^{\prime},A^{\prime})\ \stackrel{{\mathrm{def}}}{{=}}\ (P\cap P^{\prime},A\cap A^{\prime})\) is a meet operation (it is associative, commutative and idempotent). Note that \(q\preceq q^{\prime}\) if and only if \(q\sqcap q^{\prime}=q\). Formally: Proposition 1: \((\mathcal{Q},\sqcap)\) _is a lower semi-lattice._ The quasi-reaction semi-lattice represents how _informative_ a quasi-reaction is. Given a quasi-reaction \((P,A)\), removing an element from either \(P\) or \(A\) results in a strictly less informative quasi-reaction. 
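A quasi-reaction and the order of Prop. 1 have a very direct computational representation. The sketch below (illustrative only) models quasi-reactions as pairs of disjoint sets of choices, with the meet operation and, below a common reaction, the join.

```
from dataclasses import dataclass

@dataclass(frozen=True)
class QuasiReaction:
    P: frozenset          # choices asserted to be potentials
    A: frozenset          # choices asserted to be antipotentials

    def __post_init__(self):
        assert not (self.P & self.A), "P and A must be disjoint"

    def leq(self, other):              # (P,A) <= (P',A'): less or equally informative
        return self.P <= other.P and self.A <= other.A

    def meet(self, other):             # greatest lower bound, always defined
        return QuasiReaction(self.P & other.P, self.A & other.A)

    def join(self, other):             # least upper bound, defined below a common reaction
        return QuasiReaction(self.P | other.P, self.A | other.A)
```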
The lowest element \((\emptyset,\emptyset)\) contains the least information. Given a quasi-reaction \(q\), the set \(\mathcal{Q}_{q}=\{q^{\prime}\in\mathcal{Q}|q^{\prime}\preceq q\}\) of the quasi-reactions below \(q\) form a full lattice with join \((P,Q)\sqcup(P^{\prime},Q^{\prime})\stackrel{{\mathrm{def}}}{{=}}( P\cup P^{\prime},Q\cup Q^{\prime})\). This is well defined because \(P^{\prime}\) and \(Q\), and \(P\) and \(Q^{\prime}\) are guaranteed to be disjoint. Proposition 2: _For every \(q\), \((\mathcal{Q}_{q},\sqcap,\sqcup)\) is a lattice._ As for reactions, quasi-reactions correspond to a formula in the theory as follows: \[\text{{qreact}}_{(P,A)}(\overline{x})=\bigwedge_{c\in P}\left(\exists\overline {y}.c(\overline{x},\overline{y})\right)\wedge\bigwedge_{c\in A}\left(\forall \overline{y}.\neg c(\overline{x},\overline{y})\right)\] Again, given a quasi-reaction \(q\), if \(\exists\overline{x}.\text{{qreact}}_{q}(\overline{x})\) is valid we say that \(q\) is valid, otherwise we say that \(q\) is invalid. The following holds directly from the definition (and the fact that adding conjuncts makes a first-order formula "less satisfiable"). Proposition 3: _Let \(q,q^{\prime}\) be two quasi-reactions with \(q\preceq q^{\prime}\). If \(q\) is invalid then \(q^{\prime}\) is invalid. If \(q^{\prime}\) is valid then \(q\) is valid._ These results enable the following optimizations. ### Quasi-reaction-based Optimizations #### 4.2.1 A Logic-based Optimization. Consider that, during the search for valid reactions in the main loop, a reaction \((P,A)\) is found to be invalid, that is \(\text{{react}}_{(P,A)}\) is unsatisfiable. If the algorithms explores the quasi-reactions below \((P,A)\), finding \((P^{\prime},A^{\prime})\preceq(P,A)\) such that \(\text{{qreact}}_{(P^{\prime},A^{\prime})}\), then by Prop. 3, every reaction \((P^{\prime\prime},A^{\prime\prime})\) above \((P^{\prime},A^{\prime})\) is guaranteed to be invalid. This allows to prune the search in the main loop by computing a more informative quasi-reaction \(q\) after an invalid reaction \(r\) is found, and skipping all reactions above \(q\) (and not only \(r\)). For example, if the reaction corresponding to \((\{c_{0},c_{2},c_{3}\},\{c_{1}\})\) is found to be invalid, and by exploring quasi-reactions below it, we find that \((\{c_{0}\},\{c_{1}\})\) is also invalid, then we can skip all reactions above \((\{c_{0}\},\{c_{1}\})\). This includes for example \((\{c_{0},c_{2}\},\{c_{1},c_{3}\})\) and \((\{c_{0},c_{3}\},\{c_{1},c_{2}\})\). In general, the lower the invalid quasi-reaction in \(\preceq\), the more reactions will be pruned. This optimization resembles a standard choosing of max/min elements in an anti-chain. #### 4.2.2 A Game-based Optimization. Consider now two reactions \(r=(P,A)\) and \(r^{\prime}=(P^{\prime},A^{\prime})\) such that \(P\subseteq P^{\prime}\) and assume that both are valid reactions. Since \(r^{\prime}\) allows more choices to the system (because the potentials \(P\) determine these choices), the environment player will always prefer to play \(r\) than \(r^{\prime}\). Formally, if there is a winning strategy for the environment that chooses values for \(\overline{x}\) (corresponding to a model of \(\text{{react}}_{r}\)), then choosing values for \(\overline{x}^{\prime}\) instead (corresponding to a model of \(\text{{react}}_{r^{\prime}}\)) will also be winning. 
Therefore, if a reaction \(r\) is found to be valid, we can prune the search for reactions \(r^{\prime}\) that contain strictly more potentials, because even if \(r^{\prime}\) is also valid, it will be less interesting for the environment player. For instance, if \((\{c_{0},c_{3}\},\{c_{1},c_{2}\})\) is valid, then \((\{c_{0},c_{1},c_{3}\},\{c_{2}\})\) and \((\{c_{0},c_{1},c_{3},c_{2}\},\{\})\) become uninteresting to be explored and can be pruned from the search. ### A Single Model-loop Algorithm (Alg. 2) We present now a faster algorithm that replaces the main loop of Alg. 1 that performs exhaustive exploration with a SAT-based search procedure that prunes uninteresting reactions. In order to do so, we use a SAT formula \(\psi\) with one variable \(z_{i}\) per choice \(c_{i}\), in a DPLL(T) fashion. An assignment \(v:\mathit{Vars}(\psi)\rightarrow\mathbb{B}\) to these variables represents a reaction \((P,A)\) where \[P=\{c_{i}|v(z_{i})=\mathit{true}\}\hskip 28.452756ptA=\{c_{j}|v(z_{j})= \mathit{false}\}\] Similarly, a partial assignment \(v:\mathit{Vars}(\psi)\rightharpoonup\mathbb{B}\) represents a quasi-reaction. The intended meaning of \(\psi\) is that its models encode the set of interesting reactions that remain to be explored. This formula is initialized with \(\psi=\mathit{true}\) (note that \(\neg(\bigwedge_{z_{i}}\neg z_{i})\) is also a correct starting point because the reaction where all choices are antipotentials is invalid). Then, a SAT query is used to find a satisfying assignment for \(\psi\), which corresponds to a (quasi-)reaction \(r\) whose validity is interesting to be explored. Alg. 2 shows the Model-loop algorithm. The three main building blocks of the model-loop algorithm are: 1. Alg. 2 stops when \(\psi\) is invalid (line 14). 2. To explore a new reaction, Alg. 2 obtains a satisfying assignment for \(\psi\) (line 15). 3. Alg. 2 checks the validity of the reaction (line 16) and enriches \(\psi\) o prune according to what can be learned, as follows: * If the reaction is invalid (as a result of the SMT query in line 16), then it checks the validity of quasi-reaction \(q=(\emptyset,A)\) in line 23. If \(q\) is invalid, add the negation of \(q\) as a new conjunction of \(\psi\) (line 26). If \(q\) is valid, add the negation of the reaction (line 24). This prevents all SAT models that agree with one of these \(q\), which correspond to reactions \(q\preceq r^{\prime}\), including \(r\). * If the reaction is valid, then it is added to the set of valid reactions _VR_ and the corresponding quasi-reaction that results from removing the antipotentials is added (negated) to \(\psi\) (line 18), preventing the exploration of uninteresting cases, according to the game-based optimization. As for the notation in Alg. 2 (also in Alg. 3 and Alg. 4), _model(\(\psi\))_ in line 15 is a function that returns a satisfying assignment of the SAT formula \(\psi\), _posVars(m)_ returns the positive variables of \(m\) (e.g., \(c_{i},c_{j}\) etc.) and _negVars(m)_ returns the negative variables. Finally, \(\textit{toTheory}(m,\mathcal{C})=\bigwedge_{m_{i}}c_{i}^{p}\wedge\bigwedge_{ \neg m_{i}}c_{i}^{a}\) (in lines 16 and 23) translates a Boolean formula into its corresponding formula in the given \(\mathcal{T}\) theory. Note that unsatisfiable \(m\) can be minimized finding cores. If \(r\) is invalid and \((\emptyset,A)\) is found also to be invalid, then exponentially many cases can be pruned. Similarly, if \(r\) is valid, also exponentially many cases can be pruned. 
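A compact sketch of the model-loop search is shown below, using Z3 both as the SAT solver for \(\psi\) (one Boolean \(z_{i}\) per choice) and as the SMT solver for the validity queries. It follows the pruning rules just described, but it is a simplified illustration of Alg. 2 written by us for the running example, not the authors' code.

```
from z3 import Bool, Or, And, Not, Exists, ForAll, Solver, Int, sat, is_true

x, y = Int('x'), Int('y')
literals = [x < 2, y > 1, y < x]
n = len(literals)
choices = [frozenset(i for i in range(n) if (m >> i) & 1) for m in range(2 ** n)]
z = {c: Bool(f'z{i}') for i, c in enumerate(choices)}      # one SAT variable per choice

def f(c):
    return And(*[literals[i] if i in c else Not(literals[i]) for i in range(n)])

def smt_valid(P, A):                                        # exists x . react_(P,A)(x) ?
    s = Solver()
    s.add(*[Exists([y], f(c)) for c in P], *[ForAll([y], Not(f(c))) for c in A])
    return s.check() == sat

psi, VR = Solver(), []                                      # psi encodes the reactions still to explore
while psi.check() == sat:
    m = psi.model()
    P = [c for c in choices if is_true(m.eval(z[c], model_completion=True))]
    A = [c for c in choices if c not in P]
    if smt_valid(P, A):
        VR.append(frozenset(P))
        psi.add(Not(And(*[z[c] for c in P])))               # game-based pruning: block >= these potentials
    elif not smt_valid([], A):
        psi.add(Or(*[z[c] for c in A]))                     # logic-based pruning: (empty, A) is invalid
    else:
        psi.add(Not(And(*[z[c] if c in P else Not(z[c]) for c in choices])))   # block only this reaction
print(len(VR))
```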
The following result shows the correctness of Alg. 2: Theorem 4.1: _Alg. 2 terminates and outputs a correct Boolean abstraction._ Proof: (Sketch). Alg. 2 terminates because, at each step in the loop, \(\psi\) removes at least one satisfying assignment and the total number is bounded by \(2^{|\mathcal{C}|}\). Also, the correctness of the generated formula is guaranteed because, for every valid reaction in Alg. 1, either there is a valid reaction found in Alg. 2 or a more promising reaction found in Alg. 2. ### A Nested-SAT algorithm (Alg. 3) We now present an improvement of Alg. 2 that performs a more detailed search for a promising collection of invalid quasi-reactions under an invalid reaction \(r\). Note that it is not necessary to find the precise collection of all the smallest quasi-reactions that are under an invalid reaction \(r\), as long as at least one quasi-reaction under \(r\) is calculated (perhaps, \(r\) itself). Finding lower quasi-reactions allow to prune more, but its calculation is more costly, because more SMT queries need to be performed. The Nested-SAT algorithm (Alg. 3) explores (using an inner SAT encoding) this trade-off between computing more exhaustively better invalid quasi-reactions and the cost of the search. The three main building blocks of the nested-SAT algorithm (see Alg. 3) are: 1. It stops when \(\psi\) is invalid (as in Alg. 2), in line 33. 2. To get the reaction, obtain a satisfying assignment \(m\) for \(\psi\) (as in Alg. 2), in line 34. 3. Check the validity of the corresponding reaction and prune \(\psi\) according to what can be learned as follows. If the reaction is valid, then we proceed as in Alg. 2. If \(r=(P,A)\) is invalid (as a result of the SMT query), then an inner SAT formula encodes whether a choice is masked (eliminated from \(P\) or \(A\)). Models of the inner SAT formula, therefore, correspond to quasi-reactions below \(r\). If a quasi-reaction \(q\) found in the inner loop is invalid, the inner formula is additionally constrained and the set of invalid quasi-reactions is expanded. If a quasi-reaction \(q\) found is valid, then the inner SAT formula is pruned eliminating all quasi-reactions that are guaranteed to be valid. At the end of the inner loop, a (non-empty) collection of invalid quasi-reactions are added to \(\psi\). The inner loop, shown in Alg. 4 (where _VQ_ stands for _valid quasi-reactions_), explores a full lattice. Also, note that \(\neg(\bigwedge_{z_{i}}\neg z_{i})\) is, again, a correct starting point. Consider, for example, that the outer loop finds \((\{c_{1},c_{3}\},\{c_{0},c_{2}\})\) to be invalid and that the inner loop produces assignment \(w_{0}\,\wedge\,w_{1}\,\wedge\,w_{2}\,\wedge\,\neg w_{3}\). This corresponds to \(c_{3}\) being masked producing quasi-reaction \((\{c_{1}\},\{c_{0},c_{2}\})\). The pruning system is the following: * If quasi-reaction \(q\) is valid then the inner SAT formula is pruned eliminating all inner models that agree with the model in the masked choices. In our example, we would prune all models that satisfy \(\neg w_{3}\) if \(q\) is valid (because the resulting quasi-reactions will be inevitably valid). * If quasi-reaction \(q\) is invalid, then we prune in the inner search all quasi-reactions that mask less than \(q\), because these will be inevitably invalid. In our example, we would prune all models satisfying \(\neg(w_{0}\,\wedge\,w_{1}\,\wedge\,w_{2})\). 
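The inner search can be sketched as follows (an illustrative simplification of Alg. 4, without the decay and modulo heuristics, reusing the `smt_valid` helper from the previous sketch): one inner Boolean \(w_{i}\) per choice, where a false \(w_{i}\) masks choice \(c_{i}\); invalid quasi-reactions are collected as cores, and the two pruning rules above are realized as blocking clauses.

```
from z3 import Bool, Or, Not, Solver, sat, is_true

def inner_cores(P, A, choices, smt_valid, max_queries=8):
    w = {c: Bool(f'w{i}') for i, c in enumerate(choices)}   # w[c] false means choice c is masked
    inner, cores = Solver(), []
    inner.add(Or(*[Not(w[c]) for c in choices]))            # mask at least one choice (r itself is known invalid)
    while len(cores) < max_queries and inner.check() == sat:
        m = inner.model()
        keep = {c for c in choices if is_true(m.eval(w[c], model_completion=True))}
        Pq, Aq = [c for c in P if c in keep], [c for c in A if c in keep]
        if smt_valid(Pq, Aq):
            # valid quasi-reaction: masking at least as much stays valid, so unmask something
            inner.add(Or(*[w[c] for c in choices if c not in keep]))
        else:
            cores.append((frozenset(Pq), frozenset(Aq)))
            # invalid quasi-reaction: masking less stays invalid, so mask something more
            inner.add(Or(*[Not(w[c]) for c in keep]))
    return cores or [(frozenset(P), frozenset(A))]          # at worst, report r itself
```

Returning the negations of these cores to the outer formula then prunes every reaction lying above any of them.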
Note that _toTheory_inn\((u,m,\mathcal{C})=\bigwedge_{m_{i}\wedge u_{j}}c_{i}^{p}\wedge\bigwedge_{\neg m _{i}\wedge u_{j}}c_{i}^{a}\)_ is not the same function as the _toTheory()_ used in Alg. 2 and Alg. 3, since the inner loops needs both model \(m\) and mask \(u\) (which makes no sense to be negated) to translate a Boolean formula into a \(\mathcal{T}\)-formula. Also, note that there is again a trade-off in the inner loop because an exhaustive search is not necessary. Thus, in practice, we also used some basic heuristics: (1) entering the inner loop only when \((\emptyset,A)\) is invalid; (2) fixing a maximum number of inner model queries per outer model with the possibility to decrement this amount dynamically with a decay; and (3) reducing the number of times the inner loop is exercised (e.g., _enter the inner loop only if the number of invalid outer models so far is even_). Example 7: We explore the results of Alg. 3. A possible execution for 2 literals can be as follows: 1. Reaction \((\{c_{0},c_{3}\},\{c_{1},c_{2}\})\) is obtained in line 34, which is declared invalid by the SMT solver in line 35. The inner loop called in line 42 produces \((\{c_{0}\},\{c_{1}\})\), \((\{c_{3}\},\{c_{2}\})\) and \((\{\},\{c_{1},c_{2}\})\) as three invalid quasi-reactions, and their negations are added to the SAT formula of the outer loop in line 43. 2. A second reaction \((\{c_{0},c_{1}\},\{c_{3},c_{4}\})\) is obtained from the SAT solver in line 34, and now the SMT solver query is valid in line 35. Then, \(\neg(c_{0}\ \wedge\ c_{1})\) is added to the outer SAT formula in line 37. 3. A third reaction \((\{c_{2},c_{3}\},\{c_{0},c_{1}\})\) is obtained in line 33, which is again valid in line 35. Similarly, \(\neg(c_{2}\ \wedge\ c_{3})\) is added the outer SAT formula in line 37. 4. A fourth reaction \((\{c_{1},c_{2}\},\{c_{0},c_{3}\})\) is obtained in line 33, which is now invalid (line 35). The inner loop called in line 42 generates the following cores: \((\{c_{1}\},\{c_{0}\})\) and \((\{c_{2}\},\{c_{3}\})\). The addition of the negation of these cores leads to an unsatisfiable outer SAT formula, and the algorithm terminates. The execution in this example has performed 4 SAT+SMT queries in the outer loop, and 3+2 SAT+SMT queries in the inner loops. The brute-force Alg. 1 would have performed 16 queries. Note that the difference between the exhaustive version and the optimisations soon increases exponentially when we consider specifications with more literals. ## 5 Empirical evaluation We perform an empirical evaluation on six specifications inspired by real industrial cases: _Lift_ (_Li_.), _Train_ (_Tr_.), _Connect_ (_Con_.), _Cooker_ (_Coo_.), _Usb_ (_Usb_) and _Stage_ (_St_.), and a synthetic example (_Syn_.) with versions from 2 to 7 literals. For the implementation, we used used Python 3.8.8 with Z3 4.11. It is easy to see that "clusters" of literals that do not share variables can be Booleanized independently, so we split into clusters each of the examples. We report our results in Fig. 2. Each row contains the result for a cluster of an experiment (each one for the fastest heuristic). Each benchmark is split into clusters, where we show the number of variables (_vr_.) and literals (_lt_.) per cluster. We also show running times of each algorithm against each cluster; concretely, we test Alg. 1 (_BF_), Alg. 2 (_SAT_) and Alg. 3 (_Doub_.). For Alg. 2 and Alg. 3, we show the number of queries performed; in the case of Alg. 3, we also show both outer and inner queries. Alg. 1 and Alg. 
2 require no heuristics. For Alg. 3, we report, left to right: maximum number of inner loops (_MxI_.), the modulo division criteria (_Md_.)4, the number of queries after which we perform a decay of 1 in the maximum number of inner loops (_Dc_.), and if we apply the invalidity of \((\emptyset,A)\) as a criteria to enter the inner loop (\(A\).), where \(\checkmark\) means that we do and \(\times\) means the contrary. Also, \(\bot\) means timeout (or _no data_). The brute-force (BF) Alg. 1 performs well with 3 or fewer literals, but the performance dramatically decreases with 4 literals. Alg. 2 (single SAT) performs well up to 4 literals, and it can hardly handle cases with 6 or more literals. An exception is _Lift (1,7)_ which is simpler since it has only one variable (and this implies that there is only one player). The performance improvement of SAT with respect to BF is due to the decreasing of queries. For example, _Train (3,6)_ performs 13706 queries, whereas BF would need \(2^{2^{6}}=1.844\cdot 10^{18}\) queries. All examples are Booleanizable when using Alg. 3 (two SAT loops), particularly when using a combination of concrete heuristics. For instance, in small cases (2 to 5 literals) it seems that heuristic-setups like \(3/3/3/0/\diameter^{5}\) are fast, whereas in bigger cases other setups like \(40/2/0/\diameter\) or \(100/40/20/\times\) are faster. Figure 2: Empirical evaluation results of the different Boolean abstraction algorithms, where the best results are in **bold** and \(\varphi_{\mathbb{B}}\) only refers to best times. We conjecture that a non-zero decay is required to handle large inputs, since inner loop exploration becomes less useful after some time. However, adding a decay is not always faster than fixing a number of inner loops (see _Syn (2,7)_), but it always yields better results in balancing the number of queries between the two nested SAT layers. Thus, since balancing the number of queries typically leads to faster execution times, we recommend to use decays. Note that we performed all the experiments reported in this section running all cases several times and computing averages, because Z3 exhibited a big volatility in the models it produces, which in turn influenced the running time of our algorithms. This significantly affects the precise reproducibility of the running times. For instance, for _Syn(2,5)_ the worst case execution was almost three times worst than the average execution reported in Fig. 2. Studying this phenomena more closely is work in progress. Note that there are cases in which the number of queries of _SAT_ and _Doub_. are the same (e.g., _Usb(3,5)_), which happened when the \(A\). heuristic had the effect of making the search not to enter the inner loop. In Fig. 2 we also analyzed the constructed \(\varphi_{\mathbb{B}}\), measuring the number of valid reactions from which it is made (_Val_.) and the time (_Tme_.) that a realizability checker takes to verify whether \(\varphi_{\mathbb{B}}\) (hence, \(\varphi_{\mathcal{T}}\)) is realizable or not (expressed with dark and light gray colours, respectively). We used Strix [31] as the realizability checker. As we can see, there is a correspondence between the expected realizability in \(\varphi_{\mathcal{T}}\) and the realizability result that Strix returns in \(\varphi_{\mathbb{B}}\). Indeed, we can see all instances can be solved in less than 7 seconds, and the length of the Boolean formula (characterized by the number of valid reactions) hardly affects performance. 
This suggests that future work should be focused on reducing time necessary to produce Boolean abstraction to scale even further. Also, note that Fig. 2 shows remarkable results as for ratios of queries required with respect to the (doubly exponential) brute-force algorithm: e.g., \(4792+9941\) (outer + inner loops) out of the \(1.844\cdot 10^{19}\) queries that the brute-force algorithm would need, which is less than its \(1\cdot 10^{-13}\%\) (see Fig. 3 for more details). We also compared the performance and number of queries for two different theories \(\mathcal{T}_{\mathbb{Z}}\) and \(\mathcal{T}_{\mathbb{R}}\) for _Syn (2,3)_ to _Syn (2,6)_. Note, again, that the realizability result may vary if a specification is interpreted in different theories, but this is not relevant for the experiment in Fig. 4, which suggests that time results are not dominated by the SMT solver; but, again, from the enclosing abstraction algorithms. Figure 3: Best numbers of queries for Alg. 2 and 3 relative to brute-force (Alg.1). ## 6 Related Work and Conclusions **Related work.** Constraint LTL [13] extends LTL with the possibility of expressing constraints between variables at bounded distance (of time). The theories considered are a restricted form of \(\mathcal{T}_{\mathbb{Z}}\) with only comparisons with additional restrictions to overcome undecidability. In comparison, we do not allow predicates to compare variables at different timesteps, but we prove decidability for all theories with an \(\exists^{*}\forall^{*}\) decidable fragment. LTL modulo theories is studied in [22, 14] for finite traces and they allow temporal operators within predicates, leading the logic to undecidability. As for works closest to ours, [9] proposes numerical LTL synthesis using an interplay between an LTL synthesizer and a non-linear real arithmetic checker. However, [9] overapproximates the power of the system and hence it is not precise for realizability. Linear arithmetic games are studied in [15] introducing algorithms for synthesizing winning strategies for non-reactive specifications. Also, [25] considers infinite theories (like us), but it does not guarantee success or termination, whereas our Boolean abstraction is complete. They only consider safety, while our approach considers all LTL. The follow-up [26] has still similar limitations: only liveness properties that can be reduced to safety are accepted, and guarantees termination only for the unrealizability case. Similarly, [21] is incomplete, and requires a powerful solver for many quantifier alternations, which can be reduced to 1-alternation, but at the expense of the algorithm being no longer sound for the unrealizable case (e.g., depends on Z3 not answering "unknown"). As for [38], it (1) only considers safety/liveness GR(1) specifications, (2) is limited to the theory of fixed-size vectors and requires (3) quantifier elimination (4) and guidance. We only require \(\exists^{*}\forall^{*}\)-satisfiability (for Boolean abstraction) and we consider multiple infinite theories. The usual main difference is that Boolean abstraction generates a (Boolean) LTL specification so that existing tools can be used with any of their internal techniques and algorithms (bounded synthesis, for example) and will automatically benefit from further optimizations. Moreover, it preserves fragments like safety and GR(1) so specialized solvers can be used. On the contrary, all approaches above adapt one specific technique and implement it in a monolithic way. 
Temporal Stream Logic (TSL) [18] extends LTL with complex data that can be related accross time, making use of a new _update_ operator \(\llbracket y\leftrightarrow fx\rrbracket\), to indicate that \(y\) receives the result of applying function \(f\) to variable \(x\). TSL is later extended to theories in [17; 28]. In all these works, realizability is undecidable. Also, in [10] reactive synthesis and syntax guided synthesis (SyGuS) [1] collaborate in the synthesis process, and generate executable code that guarantees reactive and data-level properties. It also suffers from undecidability: both due to the undecidability of TSL [18] and of SyGuS [8]. In comparison, we cannot relate values accross time but we provide a decidable realizability procedure. Comparing TSL with \(\mathrm{LTL}_{\mathcal{T}}\), TSL is undecidable already for safety, the theory of equality and Presburger arithmetic. More precisely, TSL is only known to be decidable for three fragments (see Thm. 7 in [17]). TSL is (1) semi-decidable for the reachability fragment of TSL (i.e., the fragment of TSL that only permits the next operator and the eventually operator as temporal operators); (2) decidable for formulae consisting of only logical operators, predicates, updates, next operators, and at most one top-level eventually operator; and (3) semi-decidable for formulae with one cell (i.e., controllable outputs). All the specifications considered for empirical evaluation in Section 5 are not within the considered decidable or semi-decidable fragments. Also, TSL allows (finite) uninterpreted predicates, whereas we need to have predicates well defined within the semantics of theories of specifications for which we perform Boolean abstraction. #### 4.2.2 Conclusion. The main contribution of this paper is to show that \(\mathrm{LTL}_{\mathcal{T}}\) is decidable via a Boolean abstraction technique for all theories of data with a decidable \(\exists^{*}\forall^{*}\) fragment. Our algorithms create, from a given \(\mathrm{LTL}_{\mathcal{T}}\) specification where atomic propositions are literals in such a theory, an equi-realizable specification with Boolean atomic propositions. We also have introduced efficient algorithms using SAT solvers for efficiently traversing the search space. A SAT formula encodes the space of reactions to be explore and our algorithms reduce this space by learning uninteresting areas from each reaction explores. The fastest algorithm uses a two layer SAT nested encoding, in a DPLL(T) fashion. This search yields dramatically more efficient running times and makes Boolean abstraction applicable to larger cases. We have performed an empirical evaluation of implementations of our algorithms. We found empirically that the best performances are obtained when there is a balance in the number of queries made by each layer of the SAT-search. To the best of our knowledge, this is the first method to propose a solution (and efficient) to realizability for general \(\exists^{*}\forall^{*}\) decidable theories, which include, for instance, the theories of integers and reals. Future work includes first how to improve scalability further. We plan to leverage quantifier elimination procedures [11] to produce candidates for the sets of valid reactions and then check (and correct) with faster algorithms. Also, optimizations based in quasi-reactions can be enhanced if state-of-the-art tools for satisfiability core search (e.g., [27; 3; 2]) are used. 
Another direction is to extend our realizability method into a synthesis procedure by synthesizing functions in \(\mathcal{T}\) that produce witness values of the variables controlled by the system, given (1) environment and system moves in the Boolean game, and (2) environment values (consistent with the environment move). Finally, we plan to study how to extend \(\mathrm{LTL}_{\mathcal{T}}\) with controlled transfer of data across time while preserving decidability.
2303.11903
Counting Finite Topologies
In this paper we study the number of finite topologies on an $n$-element set subject to various restrictions.
Eldar Fischer, Johann A. Makowsky
2023-03-21T14:48:52Z
http://arxiv.org/abs/2303.11903v2
# Counting Finite Topologies ###### Abstract In this paper we study the number of finite topologies on an \(n\)-element set subject to various restrictions. ## 1 Introduction In the last decade finite topologies have received renewed attention due to their use in image analysis and data science. There is a vast literature testifying to this. We just mention two references as typical examples, [13, 6]. For mathematical applications of finite topologies, see [1]. Finite metric spaces are studied in [4, 14]. The model theory of topological spaces was studied in the late 1970s, see [15, 10, 11, 19]. In [5] A. Broder introduced the restricted r-Stirling numbers and r-Bell numbers. They have found various applications in enumerative combinatorics, e.g. see [2]. Inspired by this we study in this paper the number of finite topologies on an \(n\)-element set subject to various restrictions. Assume you want to count the number of topologies on a set \([r+n]\) where the elements of \([r]\) satisfy some prescribed topological configuration, such as: all the singletons in \([r]\) are closed sets, or they are pairwise separable by an open set. Counting finite topologies is a difficult problem. Even for the case without restrictions no explicit formula is known. There are some asymptotic results, but the best results known are congruences modulo a fixed integer \(m\). A sequence \(s(n)\) of integers is _MC-finite_ (modularly C-finite) if for every integer \(m\) the sequence \(s^{m}(n)=s(n)\pmod{m}\) is an ultimately periodic sequence. E. Specker showed in [17, 18] that for every \(m\) the number of topologies is MC-finite. His proof uses both logic and advanced combinatorics. The purpose of this paper is to show the same for the number of topologies with restrictions. In the presence of the restrictions we have in mind, Specker's method cannot be applied directly. We will use logic to make this framework precise. Let \(\mathcal{T}=(X,\mathcal{U})\) be a finite topological space on the finite set \(X\) and \(\mathcal{U}\) the family of open sets in \(X\). We associate with \(\mathcal{T}\) a two-sorted first-order structure \(\mathcal{T}^{\prime}=(X,\mathcal{U},\in)\) where \(\in\subseteq X\times\mathcal{U}\) and \(x\in U\) says that \(x\) is an element of \(U\). If \(X=[r+n]\) we use constant symbols \(a_{1},\ldots,a_{r}\) which have a fixed interpretation: \(a_{i}\) is interpreted by \(i\in[r]\). We say that the constant symbols \(a_{i}\) are _hard-wired_. The topological restrictions are now described by a first-order formula \(\phi(a_{1},\ldots,a_{r})\) over the structure \(([r+n],\mathcal{U},\in,a_{1},\ldots,a_{r})\). We denote by \(T_{\phi,r}(n)\) the number of topologies on the set \([r+n]\) which satisfy \(\phi(a_{1},\ldots,a_{r})\). For a positive integer \(m\) we denote by \(T_{\phi,r}^{m}(n)\) the sequence \(T_{\phi,r}(n)\) modulo \(m\). ### Main result Our main result is stated here for topological first order logic **TFOL**: **Theorem 1**: 1. _For every formula_ \(\phi\) _of_ **TFOL** _and every positive integer_ \(m\)_, the sequence_ \(T_{\phi,r}^{m}(n)\) _is ultimately periodic. In other words,_ \(T_{\phi,r}(n)\) _is MC-finite._ 2. _Given_ \(\phi\) _and_ \(m\)_, the sequence_ \(T_{\phi,r}^{m}(n)\) _is computable in fixed parameter tractable (_**FPT**_) time, where the parameters depend on_ \(\phi\) _and_ \(m\)_._ The proof uses recent results on extensions of Specker's method due to E. Fischer and this author [8].
It also uses model theoretic methods as described in [19]. The same method was applied to prove congruences for restricted Bell and Stirling numbers in [9]. One of our main achievements is to define the logic TCMSOL, a topological version of Monadic Second Order Logic with modular counting. This allows us to prove Theorem 8 in Section 4, which is like Theorems 1 but stated for TCMSOL instead of **TFOL**. ## 2 Background ### Counting finite topologies Here we follow the presentation from [16]. Let \(T(n)\) and \(T_{0}(n)\) be the number of topologies and \(T_{0}\)-topologies respectively on a the set \([n]=\{1,\ldots,n\}\). Recall that a topology on \([n]\) is \(T_{0}\) if for all \(a,b\in[n]\), there is some open set containing one but not both of them. No explicit formulas for \(T(n)\) and \(T_{0}(n)\) are known. The following is known: **Theorem 2**: 1. \(T(n)=Q(n)\)_, where_ \(Q(n)\) _is the number of pre-orders on_ \([n]\)_,_ _[_16_]__._ 2. \(T_{0}(n)=P(n)\)_, where_ \(P(n)\) _is the number of partial orders on_ \([n]\)_,_ _[_16_]__._ 3. \(B(n)\leq P(n)\leq Q(n)\)_, where_ \(B(n)\) _are the Bell numbers, which count the number of equivalence relations on_ \([n]\)_. Furthermore, see_ _[_7, 3_]__,_ \[\left(\frac{n}{e\ln n}\right)^{n}\leq B(n)\leq\left(\frac{n}{e^{1-\epsilon}\ln n }\right)^{n},\] _._ * _The logarithm with base_ \(2\) _of both_ \(T(n)\) _and_ \(T_{0}(n)\) _goes asymptotically to_ \(\frac{n^{2}}{4}\) _as_ \(n\) _goes to infinity,_ _[_12_]__._ ### C-finite and MC-finite sequences **Definition 1**: * \(s(n)\) _is_ C-finite _if there are_ \(p,q,a_{1},\ldots,a_{p}\in\mathbb{N}\) _such that_ \(s(n+p+1)=\sum_{i=0}^{p}a_{i}s(n+i),n\geq q\)_. If_ \(s(n)\) _is C-finite, it has at most simple exponential growth._ * \(s(n)\) _is_ C-finite modulo__\(m\) _if the recurrence relations holds modulo_ \(m\)_. In other words,_ \(s(m)\) _is_ ultimately periodic modulo__\(m\)_._ \(s(n)\) _is_ MC-finite _if_ \(s(n)\) _is C-finite modulo_ \(m\) _for every_ \(m\)_._ **Examples 3**: * _The Fibonacci sequence is C-finite._ * _The Bell numbers_ \(B(n)\) _are_ not C-finite_, but_ MC-finite_._ * _Let_ \(f(n)\) _be any integer sequence. The sequence_ \(s_{1}(n)=2\cdot f(n)\) _is C-finite modulo_ \(2\)_, but not necessarily MC-finite._ * _Let_ \(g(n)\) _grow arbitrarily fast. The sequence_ \(s_{2}(n)=n!\cdot g(n)\) _is MC-finite._ * _The sequence_ \(s_{3}(n)=\frac{1}{2}{2n\choose n}\) _is not MC-finite._ \(s_{3}(n)\) _is odd iff_ \(n\) _a power of_ \(2\)_, else it is even (Lucas, 1878)._ E. Specker has shown in [17] a general theorem from which it follows that: **Theorem 4**: _Both \(Q(n)\) and \(P(n)\) are MC-finite but not C-finite. Furthermore, for each \(m\) the required parameters \(p_{m},q_{m},a_{1}^{m},\ldots,a_{p}^{m}\in\mathbb{N}\) are computable. In particular, this also applies to both \(T(n)\) and \(T_{0}(n)\)._ The best reference is [18]. ## 3 Topologies as relational structures ### The logics Tcmsol and Cmsol Let \(\mathcal{T}=(X,\mathcal{U})\) be a finite topological space on the finite set \(X\) and \(\mathcal{U}\) the family of open sets in \(X\). We associate with a finite topological space \((X,\mathcal{U})\) a two sorted relational structure \(\mathcal{T}=(X,\mathcal{U},E)\) where \(E\subseteq X\times\mathcal{U}\) is a binary relation and \(E(x,U)\) says that \(x\) is an element of \(U\). We also require that it satisfies the extensionality axiom \(U=V\leftrightarrow\forall x(E(x,U)\leftrightarrow E(x,V))\). 
We denote by \(TCMSOL\) the weak second order logic for structures of this form, possibly augmented by constant symbols. We allow quantification over _subsets of_ \(A\), but only quantification over _elements of_ \(\mathcal{U}\). Furthermore we have a modular counting quantifier \(C_{m,a}x\phi(x)\) which says that modulo \(m\) there are \(a\) elements satisfying \(\phi(x)\). **TFOL** is the logic without second order quantification and without modular counting. Similarly, CMSOL is defined as TCMSOL for one-sorted structures with one binary relation. For _finite structures_ of the form \(\mathcal{T}\) the following are TCMSOL-definable. 1. \(\mathcal{U}\) is a topology for the finite set \(A\): (i) \(\emptyset\in\mathcal{U}\), \(A\in\mathcal{U}\). (ii) \(\mathcal{U}\) is closed under unions. (iii) \(\mathcal{U}\) is closed under intersections. 2. \(\mathcal{U}\) is \(T_{0}\): \(\forall a,b\in A\,\exists U\in\mathcal{U}\,\big((E(a,U)\wedge\neg E(b,U))\vee(\neg E(a,U)\wedge E(b,U))\big)\). 3. \(\mathcal{U}\) is \(T_{1}\): \(\forall a\in A(A-a\in\mathcal{U})\). 4. \(X\) is connected: There are no two non-empty disjoint open sets \(U_{1},U_{2}\) with \(U_{1}\cup U_{2}=X\). 5. The **TFOL**-formula \(\phi_{U_{x}}(x,U)\) says that \(U\) is the smallest open set containing \(x\): \(U\in\mathcal{U}\wedge E(x,U)\wedge\forall V\in\mathcal{U}\,(E(x,V)\to U\subseteq V)\). 6. A typical formula which is in TCMSOL would be: There is a set of points of even cardinality which is not an open set. ### Hard-wired constant symbols Let \(\overline{a}=(a_{1},\ldots,a_{k})\) be \(k\) constant symbols. For each of them there are \(n\) possible interpretations in the set \([n]\). However, we say that \((a_{1},\ldots,a_{r})\), for \(r\leq k\), are _hard-wired_ on \([n]\) if \(a_{i}\) is interpreted by \(i\in[n]\). In the presence of constant symbols (hard-wired or not) \(a_{1},\ldots,a_{k}\) we can say: 1. \(\forall U\bigwedge_{i}^{r}E(a_{i},U)\), i.e., they form a minimal non-empty open set. 2. There are pairwise disjoint open sets \(U_{1},\ldots,U_{r}\) such that \(a_{i}\) is in \(U_{i}\). 3. The elements denoted by \(a_{i}\) are all in different connected components. In analogy to Broder's \(r\)-Stirling numbers we also count finite topologies restricted by TCMSOL-formulas with hard-wired constant symbols. ## 4 Proof of the main theorems The proofs use several older and newer results. **Theorem 5** (Alexandroff, 1931): _There is a bijection \(\alpha\) between finite topologies and finite quasi-orders._ Let \(\mathcal{T}=(X,\mathcal{U},E)\) be a finite topology. We want to define inside \(\mathcal{T}\) a quasi-order \(\mathcal{Q}=(X,\leq)\). For this we exhibit a formula \(\phi_{\leq}(x,y)\) in TCMSOL which defines \(\leq\). Let \(U_{x}\) be the intersection of all open sets \(U\) which contain \(x\). This can be expressed in **TFOL** by the formula \(\phi_{U_{x}}(x,U)\) from the previous section. Now \(x\leq y\) can be defined by \(U_{x}\subseteq U_{y}\), which can be expressed as \[\phi_{\leq}(x,y):\ \forall U\,\forall V\,\big[(\phi_{U_{x}}(x,U)\wedge\phi_{U_{x}}(y,V))\to\forall z\,(E(z,U)\to E(z,V))\big]\] The translation scheme \(\Phi=(x=x,\phi_{\leq}(x,y))\) consists of two formulas. The first defines the new universe, which in this case is also \(X\), and the second defines the quasi-order.
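The direction of Theorem 5 just described (from a topology to its quasi-order via \(U_{x}\subseteq U_{y}\)) is easy to make concrete; the following small Python sketch, with an example topology chosen by us for illustration, computes \(U_{x}\) and the induced quasi-order.

```
def smallest_open(opens, x):
    """U_x: intersection of all open sets containing x (open in a finite topology)."""
    result = None
    for u in opens:
        if x in u:
            result = u if result is None else result & u
    return frozenset(result)

def specialization_preorder(points, opens):
    u = {x: smallest_open(opens, x) for x in points}
    return {(x, y) for x in points for y in points if u[x] <= u[y]}   # x <= y iff U_x subset of U_y

points = {1, 2, 3}
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
print(sorted(specialization_preorder(points, opens)))
# (1,1), (1,2), (1,3), (2,2), (2,3), (3,3): here the quasi-order is the chain 1 <= 2 <= 3
```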
\(\Phi\) induces two maps: \(\Phi^{\star}\), which maps finite topologies onto quasi-orders over the same universe, and \(\Phi^{\sharp}\), which maps CMSOL-formulas into TCMSOL-formulas, by replacing each occurrence of \(x_{1}\leq x_{2}\) by \(\phi_{\leq}(x_{1},x_{2})\). In the other direction, let \(\mathcal{Q}=(X,\leq)\) be a finite quasi-order. We want to define inside \(\mathcal{Q}\) a topology \(\mathcal{T}=(X,\mathcal{U},E)\). We actually define a structure \(\mathcal{T}^{\prime}=(X,P(A),\mathcal{U},\mathcal{B},E,E_{top},E_{basis})\) where \(P(A)\) is the powerset of \(A\), and \(\mathcal{B}\) is a minimal basis for the topology \(\mathcal{U}\). Again \(X\) can be defined by \(x=x\), and \(P(A)\) can be defined by \(\phi_{set}(X):\forall x(X(x)\leftrightarrow X(x))\). The non-empty basic sets are defined by \[\phi_{basis}(B):\exists y\,\forall x\,(B(x)\leftrightarrow x\leq y).\] The non-empty open sets are defined by \[\phi_{top}(U):\forall x\,\big(U(x)\leftrightarrow\exists B\,(\phi_{basis}(B)\wedge B(x)\wedge\forall z\,(B(z)\to U(z)))\big).\] The translation scheme is \(\Psi=(x=x,\phi_{set}(X),\phi_{top}(U),\phi_{basis}(B),X(x))\). \(\Psi\) induces two maps: \(\Psi^{\star}\), which maps finite quasi-orders onto topologies over the same underlying set, and \(\Psi^{\sharp}\), which maps TCMSOL-formulas into CMSOL-formulas, by replacing each occurrence of \(E(x,X),E(x,U),E(x,B)\) by its definition. **Theorem 6**: _The translation schemes \(\Phi\) and \(\Psi\) satisfy the following:_ * \(\Phi^{\star}(\mathcal{T})=\alpha(\mathcal{T})\) _and_ \(\Psi^{\star}(\mathcal{Q})=\alpha^{-1}(\mathcal{Q})\)_;_ * _for every_ \(\theta\in\mathrm{CMSOL}^{\,h}\) _and every finite quasi-order_ \(\mathcal{Q}\) _with_ \(\alpha(\mathcal{T})=\mathcal{Q}\)_, we have_ \(\mathcal{Q}\models\theta\) _iff_ \(\mathcal{T}\models\Phi^{\sharp}(\theta)\)_;_ * _for every_ \(\sigma\in\mathrm{TCMSOL}^{\,h}\) _and every finite topology_ \(\mathcal{T}\) _with_ \(\alpha^{-1}(\mathcal{Q})=\mathcal{T}\)_, we have_ \(\mathcal{T}\models\sigma\) _iff_ \(\mathcal{Q}\models\Psi^{\sharp}(\sigma)\)_._ **Theorem 7** ([8]): _Let \(\theta(a_{1},\ldots,a_{r})\) be a sentence in \(\mathrm{CMSOL}^{\,h}\) with \(r\) constant symbols, and let \(S(n)=S_{\theta(a_{1},\ldots,a_{r})}(n)\) be the number of relations \(R\subseteq[n]^{2}\) such that_ \[([n],R,(a_{1},\ldots,a_{r}))\models\theta(a_{1},\ldots,a_{r}).\] _In both cases, whether the constant symbols are hard-wired or not, \(S(n)\) is MC-finite._ **Theorem 8**: _Let \(\sigma(a_{1},\ldots,a_{r})\) be a sentence in \(\mathrm{TCMSOL}^{\,h}\) with \(r\) constant symbols, and let \(S^{t}(n)=S^{t}_{\sigma(a_{1},\ldots,a_{r})}(n)\) be the number of topologies on \([n]\) such that_ \[([n],\mathcal{U},\,(a_{1},\ldots,a_{r}))\models\sigma(a_{1},\ldots,a_{r}).\] _In both cases, whether the constant symbols are hard-wired or not, \(S^{t}(n)\) is MC-finite._
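For very small ground sets, Theorem 2(1) (and hence the Alexandroff correspondence underlying Theorems 5 and 6) can be checked by brute force. The sketch below is ours and hopelessly inefficient beyond \(n=3\): it enumerates set families closed under union and intersection, and separately reflexive transitive relations, reproducing the common counts 1, 1, 4, 29 for \(n=0,\ldots,3\).

```
from itertools import chain, combinations, product

def count_topologies(n):
    pts = frozenset(range(1, n + 1))
    subsets = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(pts), k) for k in range(n + 1))]
    count = 0
    for bits in range(2 ** len(subsets)):
        fam = {subsets[i] for i in range(len(subsets)) if (bits >> i) & 1}
        if (frozenset() in fam and pts in fam
                and all(a | b in fam and a & b in fam for a in fam for b in fam)):
            count += 1
    return count

def count_preorders(n):
    count = 0
    for bits in product([False, True], repeat=n * n):
        rel = {(i, j) for i in range(n) for j in range(n) if bits[i * n + j]}
        reflexive = all((i, i) in rel for i in range(n))
        transitive = all((i, k) in rel
                         for (i, j) in rel for (j2, k) in rel if j == j2)
        if reflexive and transitive:
            count += 1
    return count

for n in range(4):
    print(n, count_topologies(n), count_preorders(n))   # 1 1, 1 1, 4 4, 29 29
```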
2301.07918
Subject-Independent Classification of Brain Signals using Skip Connections
Untapped potential for new forms of human-to-human communication can be found in the active research field of studies on the decoding of brain signals of human speech. A brain-computer interface system can be implemented using electroencephalogram signals because it poses more less clinical risk and can be acquired using portable instruments. One of the most interesting tasks for the brain-computer interface system is decoding words from the raw electroencephalogram signals. Before a brain-computer interface may be used by a new user, current electroencephalogram-based brain-computer interface research typically necessitates a subject-specific adaption stage. In contrast, the subject-independent situation is one that is highly desired since it allows a well-trained model to be applied to new users with little or no precalibration. The emphasis is on creating an efficient decoder that may be employed adaptively in subject-independent circumstances in light of this crucial characteristic. Our proposal is to explicitly apply skip connections between convolutional layers to enable the flow of mutual information between layers. To do this, we add skip connections between layers, allowing the mutual information to flow throughout the layers. The output of the encoder is then passed through the fully-connected layer to finally represent the probabilities of the 13 classes. In this study, overt speech was used to record the electroencephalogram data of 16 participants. The results show that when the skip connection is present, the classification performance improves notably.
Soowon Kim, Ji-Won Lee, Young-Eun Lee, Seo-Hyun Lee
2023-01-19T07:04:11Z
http://arxiv.org/abs/2301.07918v1
# Subject-Independent Classification of Brain Signals using Skip Connections ###### Abstract Untapped potential for new forms of human-to-human communication can be found in the active research field of studies on the decoding of brain signals of human speech. A brain-computer interface system can be implemented using electroencephalogram signals because it poses less clinical risk and can be acquired using portable instruments. One of the most interesting tasks for the brain-computer interface system is decoding words from the raw electroencephalogram signals. Before a brain-computer interface may be used by a new user, current electroencephalogram-based brain-computer interface research typically necessitates a subject-specific adaptation stage. In contrast, the subject-independent situation is one that is highly desired since it allows a well-trained model to be applied to new users with little or no precalibration. In light of this crucial characteristic, the emphasis is on creating an efficient decoder that may be employed adaptively in subject-independent circumstances. Our proposal is to explicitly apply skip connections between convolutional layers to enable the flow of mutual information between layers. To do this, we add skip connections between layers, allowing the mutual information to flow throughout the layers. The output of the encoder is then passed through the fully-connected layer to finally represent the probabilities of the 13 classes. In this study, overt speech was used to record the electroencephalogram data of 16 participants. The results show that when the skip connection is present, the classification performance improves notably. brain-computer interface, deep learning, electroencephalography, speech processing + Footnote †: This work was partly supported by Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User’s Intentions using Deep Learning; No.2021-0-02068, Artificial Intelligence Innovation Hub). ## I Introduction Brain signals carry information about human behavior, imagery or mental and physical states [1], which is useful for deciphering intents. By analyzing a user's brain activity, brain-computer interface (BCI) technology can generate external commands that can be utilized to control the surroundings [2, 3, 4]. Through the use of their intents, which are translated from brain signals, users of BCI can control prostheses, offering an additional means of communication between people and external machines [5]. A recent area of BCI research, brain-to-speech (BTS), aims to produce spoken words from brain signals. The goal of BTS systems is to decipher the speech-related intents from brain activity and then provide orders for communication, in contrast to other communication techniques like event-related potential spellers. Brain-to-speech systems may be a means of intuitive communication as speech is the most common mode of communication [6]. We hypothesize that there must be a meaningful brain activation that might encode a substantial aspect of the speech because it has been demonstrated that it is possible to reconstruct speech from brain signals of spoken speech [7]. EEG signal analysis methods have seen a great deal of development, with promising results [8, 9, 10, 11, 12, 13].
A brief calibration session is required before a BCI system can be used by a new user because the majority of recent studies concentrate on the subject-dependent scenario, in which training and test data come from the same subject [7, 14, 15]. This labor- and time-intensive calibration procedure must be carried out for every new subject and usage. The subject-independent scenario, in contrast, is little studied but greatly sought after, as it enhances the user experience. In this scenario, a BCI system is trained on data from seen individuals and applied directly to new users without pre-adaptation. Exploring the subject-independent scenario, however, is quite challenging. EEG signals change significantly even between recording sessions of the same user under the same experimental paradigm and show strong subject-to-subject variability [16]. Most of the few existing subject-independent investigations rely on conventional machine learning techniques and manually crafted features. [17] extracts and categorizes information from each subject's EEG data using a pair of Linear Discriminant Analysis (LDA) and Common Spatial Patterns (CSP) algorithms, and an ensemble classifier is then created by combining many classifiers using l1-regularized regression. Due to the limited capacity of handcrafted features and conventional learning algorithms, these methods cannot deliver sufficient performance. Deep learning techniques have recently made impressive strides, showing promise for addressing complex cross-subject scenarios [18, 19]. However, only a small number of successful deep learning studies have shown strong generalization from known subjects to new ones [20, 21, 22, 23, 24]. Therefore, deep-learning-based EEG signal decoding still remains a challenging task [25]. The skip connection is a common method for enhancing the performance and convergence of deep neural networks. It works by propagating a linear component through the layers of the neural network, which is thought to ease the optimization difficulties caused by non-linearity [26, 27]. As the name implies, skip connections (or shortcut connections) omit some neural network layers and provide the output of one layer as the input to subsequent layers. Skip connections were developed to address different issues in different architectures: in ResNets they address the degradation problem, while in DenseNets they ensure feature reusability [28]. We describe how they are used in this work in the sections below. In this article, we introduce a subject-independent EEG data analysis method based on a convolutional neural network that takes advantage of skip connections. Through multiple skip connections, the flow of mutual information is preserved across layers. The encoding of the EEG signals is accomplished using convolutional layers. ## II Materials and methods ### _Subjects_ The experimental protocol was designed to record EEG signals during overt speech following a perception stage, with auditory stimuli in between. Sixteen subjects participated in the study. The study was approved by the Korea University Institutional Review Board [KUIRB-2019-0143-01] and was conducted in accordance with the Declaration of Helsinki. Informed consent was obtained from all subjects.
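To make the skip-connection idea concrete, the following is a minimal PyTorch sketch of a 1-dimensional convolutional encoding block whose input is added back to its output. It is an illustrative sketch only; the kernel width, block layout, and names are assumptions rather than the authors' exact implementation (which also uses max pooling to match dimensions between blocks).

```python
import torch
import torch.nn as nn

class SkipConvBlock1d(nn.Module):
    """1D conv block with a residual (skip) connection: out = f(x) + x."""
    def __init__(self, channels: int, kernel_size: int = 7, dropout: float = 0.5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm1d(channels),
            nn.ELU(),
            nn.Dropout(dropout),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection lets a linear component of the input bypass the
        # non-linear transformation, which is thought to ease optimization.
        return self.body(x) + x

# Example: a batch of 8 EEG trials, 64 channels, 1500 time samples (1.5 s at 1 kHz).
x = torch.randn(8, 64, 1500)
block = SkipConvBlock1d(channels=64)
print(block(x).shape)  # torch.Size([8, 64, 1500])
```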
All subjects were asked to fill out a questionnaire before and after the experiment to check their physiological and mental condition and to evaluate the experimental paradigm. ### _Experimental Protocol and Paradigm_ We recorded scalp EEG signals from 64 channels during overt speech. First, an auditory stimulus (a beeping sound) is given to the subject. Then, the perception stage follows, in which the subject hears the given word and also sees it on the monitor. With another auditory stimulus, the corresponding overt speech is performed. EEG signals from both stages are collected. In total, more than 1,400 trials were performed for each subject. The experimental paradigm is described in Fig. 1. Thirteen classes are introduced, with labels ranging from 0 to 12. These classes include "ambulance," "clock," "hello," "help me," "light," "pain," "stop," "thank you," "toilet," "TV," "water," and "yes," and a silent phase.
Fig. 1: Experimental paradigm for recording EEG signals from overt speech.
Fig. 2: Overall architecture of the model used in this work. The waveform of the overt speech EEG signals, with 64 channels, is used as the input to the model. Each block performs 1-dimensional convolution, 1-dimensional batch normalization, and activation (ELU). To enable skip connections throughout the intermediate layers, max pooling is applied to match the dimensions. After extracting the latent vector from the CNN, a fully-connected layer outputs the final classification probabilities.
### _Preprocessing_ From the beginning of each trial, the EEG signal was segmented into 1.5-second epochs sampled at 1,000 Hz. A fifth-order Butterworth filter was used to band-pass the EEG data to 0.5-120 Hz, which extends into the high-gamma range, and the baseline was corrected by subtracting the average of the 500 ms preceding the start of each trial. We chose the channels (AF3, F3, F5, FC3, FC5, T7, C5, TP7, CP5, and P5) that lie over Broca's and Wernicke's areas. Using independent component analysis with EOG and EMG references, we applied artifact reduction techniques for the muscle activity around the mouth. Using the OpenBMI Toolbox [25], the BBCI Toolbox [29], and EEGLAB [30], all data processing operations were carried out in Python and Matlab. ### _Architecture_ The proposed classification framework consists of convolutional layers and skip connections to extract temporal, spectral, and spatial information, as shown in Fig. 2. The raw EEG signal waveform is used as the input to the model. To capture diverse EEG features such as spectral, spatial, and temporal information, we used a deep convolutional network. In each encoding block, a 1-dimensional convolutional layer is followed by 1-dimensional batch normalization, activation, and dropout. The dropout probability was 0.5. In total, 5 encoding blocks are used to produce the final latent representation of the raw EEG signals. We used the exponential linear unit (ELU) as the activation function. Sixty-four channels are maintained throughout each encoding block. The classifier (fully-connected layers) is then applied as 3 combinations of a linear layer, ELU activation, and dropout, with 128 hidden units. The output for classification is set to 13 classes, with the raw signals (C \(\times\) T) as input. For training, we applied the mean squared error loss. For each condition, training for 2,000 epochs and 5-fold cross-validation were used to conduct the evaluation. The chance level for this experiment was \(5\%\), given the 13 classes present.
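The preprocessing described above (fifth-order Butterworth band-pass at 0.5-120 Hz, 1.5 s epochs sampled at 1,000 Hz, and baseline correction using the 500 ms before trial onset) can be sketched with SciPy as follows. The function name, array shapes, and onset handling are illustrative assumptions, not the exact pipeline used in the study, which relied on the OpenBMI and BBCI toolboxes and EEGLAB.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling rate (Hz)

def preprocess_trial(eeg: np.ndarray, onset: int) -> np.ndarray:
    """eeg: (n_channels, n_samples) raw recording; onset: trial-start sample index."""
    # Fifth-order Butterworth band-pass, 0.5-120 Hz, applied zero-phase.
    b, a = butter(5, [0.5, 120], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, eeg, axis=-1)

    # Segment 1.5 s from trial onset.
    epoch = filtered[:, onset:onset + int(1.5 * FS)]

    # Baseline correction: subtract the mean of the 500 ms preceding onset.
    baseline = filtered[:, onset - int(0.5 * FS):onset].mean(axis=-1, keepdims=True)
    return epoch - baseline

raw = np.random.randn(64, 10 * FS)           # 10 s of 64-channel EEG (synthetic)
trial = preprocess_trial(raw, onset=2 * FS)  # shape: (64, 1500)
```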
### _Model training_ The training session consists of more than 1,000 epochs. Every 10 epochs, we validated the model on a validation set, a randomly selected \(10\%\) portion of the entire dataset. When the model's performance did not improve for more than 10 validations, we stopped the training. Adam was used as the optimizer. The learning rate was \(0.0001\), fixed throughout training, with no weight decay and betas \((\beta_{1},\beta_{2})=(0.9,0.999)\). A batch size of 128 was used for all experiments. ## III Results and Discussion For the 13 classes of overt speech, we created frameworks for decoding speech-related EEG data, and the performance of the model with and without skip connections was examined. As shown in Table I, for the 13 classes, the average accuracy for overt speech was 98.69% with skip connections, compared to 80.24% for the model without skip connections. F1-score, precision, and recall likewise showed a clear advantage (Table I), indicating better performance for the model using skip connections than for the model without them. The confusion matrices for both models are depicted in Fig. 3.
Fig. 3: Confusion matrices comparing the proposed method with and without skip connections. The true labels are on the vertical axis, while the predicted labels are on the horizontal axis. The upper matrix represents the results for the model without skip connections and the lower matrix represents the model with skip connections.
## IV Conclusion In this study, we proposed a deep convolutional neural network for EEG decoding that applies skip connections between convolutional layers to enable the flow of mutual information between layers. To do this, we add skip connections between the first, second, and third convolutional layers, allowing mutual information to flow from the input layer to the output layer. The output of the encoder is then passed through the fully-connected layer to finally represent the probabilities of the 13 classes. According to the findings, performance was significantly enhanced when subject-independent classification was carried out on seven subjects using the skip connection. As a result, the proposed method for decoding brain activity with skip connections has the potential to be used with reliable BCI devices for any subject.
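The training procedure reported above (Adam with learning rate 0.0001 and betas 0.9 and 0.999, batch size 128, validation every 10 epochs, early stopping after 10 non-improving validations, mean squared error loss) can be sketched as follows in PyTorch. The model and data loaders are assumed, and this is an illustration rather than the authors' code.

```python
import torch

def train(model, train_loader, val_loader, max_epochs=1000, patience=10):
    # Adam with the reported settings: lr 1e-4, betas (0.9, 0.999), no weight decay.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
    loss_fn = torch.nn.MSELoss()   # the paper reports a mean-squared-error loss
    best, stale = float("inf"), 0

    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:           # batches of 128 trials
            opt.zero_grad()
            loss = loss_fn(model(x), y)     # y: one-hot targets over the 13 classes
            loss.backward()
            opt.step()

        if epoch % 10 == 0:                 # validate every 10 epochs
            model.eval()
            with torch.no_grad():
                val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
            if val < best:
                best, stale = val, 0
            else:
                stale += 1
            if stale >= patience:           # stop after 10 non-improving validations
                break
```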
2302.01839
Investigating Stylistic Profiles for the Task of Empathy Classification in Medical Narrative Essays
One important aspect of language is how speakers generate utterances and texts to convey their intended meanings. In this paper, we bring various aspects of the Construction Grammar (CxG) and the Systemic Functional Grammar (SFG) theories in a deep learning computational framework to model empathic language. Our corpus consists of 440 essays written by premed students as narrated simulated patient-doctor interactions. We start with baseline classifiers (state-of-the-art recurrent neural networks and transformer models). Then, we enrich these models with a set of linguistic constructions proving the importance of this novel approach to the task of empathy classification for this dataset. Our results indicate the potential of such constructions to contribute to the overall empathy profile of first-person narrative essays.
Priyanka Dey, Roxana Girju
2023-02-03T16:30:09Z
http://arxiv.org/abs/2302.01839v1
# Investigating Stylistic Profiles for the Task of Empathy Classification in Medical Narrative Essays ###### Abstract One important aspect of language is how speakers generate utterances and texts to convey their intended meanings. In this paper, we bring various aspects of the Construction Grammar (CxG) and the Systemic Functional Grammar (SFG) theories in a deep learning computational framework to model empathic language. Our corpus consists of 440 essays written by premed students as narrated simulated patient-doctor interactions. We start with baseline classifiers (state-of-the-art recurrent neural networks and transformer models). Then, we enrich these models with a set of linguistic constructions proving the importance of this novel approach to the task of empathy classification for this dataset. Our results indicate the potential of such constructions to contribute to the overall empathy profile of first-person narrative essays. ## 1 Introduction Much of our everyday experience is shaped and defined by actions and events, thoughts and perceptions which can be accounted for in different ways in the system of language. The grammatical choices we make when writing an essay (i.e., pronoun use, active or passive verb phrases, sentence construction) differ from those we use to email someone, or those we utter in a keynote speech. "Word choice and sentence structure are an expression of the way we attend to the words of others, the way we position ourselves in relation to others" [14]. Such choices allow us to compare not only the various options available in the grammar, but also what is expressed in discourse with what is suppressed [10]. Given the great variability in the modes of expression of languages, the search for an adequate design of grammar has long motivated research in linguistic theory. One such approach is CxG [19, 18, 17] which prioritizes the role of constructions, conventional form-meaning pairs, in the continuum between lexis and syntax [20]. As such, these constructions form a structured inventory of speakers' knowledge of the conventions of their language [17]. Another particular grammatical facility for capturing experience in language is Halliday's system of transitivity as part of the Systemic Functional Grammar (SFG) [13, 14], a theory of language centred around the notion of language function. SFG pays great attention to how speakers generate utterances and texts to convey their intended meanings. This can make our writing effective, but also give the audience a sense of our own personality. However, unlike CxG, Halliday's system of transitivity describes the way in which the world of our experience is divided by grammar into a'manageable set of process types' [14] each offering not only a form-meaning mapping, but also a range of stylistic options for the construal of any given experience through language. In stylistics, researchers have used this model to uncover and study the grammatical patterns through which texts can enact a particular ideology, or an individual's distinctive'mind style' of language [21]. The idea of'style as choice' in Halliday's transitivity system can be best understood as experiential strategies (like avoiding material processes or repeating passive voice constructions) such as those identified as contributing to a reduced sense of awareness, intentionality or control in the human agent responsible [21, 15]. Such an individual is often said to appear 'helpless' and 'detached' [14, 15], or 'disembodied' [1]. 
Take for instance, construction choices like 'I reassured her' vs. 'She was reassured', or "I greeted her upon entrance" vs. "The nurse greeted her upon entrance" vs. "She was greeted upon entrance" - which show the degree of agency and intended involvement on the part of the agent in the action. Such linguistic choices often occur together in stylistic profiling exercises to showcase the techniques contributing to 'passivity', or the degree of suppression of agency and power in characterisation (Kies, 1992). In this paper, we try to bring CxG and SFG closer together in the study of discourse level construction of arguments for the analysis of empathic content of narrative essays. Specifically, inspired by research in critical discourse analysis, we are taking a step further to show ways in which such construction choices can manipulate (and even reduce) the attention we give to the agency and moral responsibility of individuals (Jeffries, 2017; Van Dijk, 2017). Specifically, such form-meaning-style mappings can be used to capture the point of view as an aspect of narrative organization and the perspective through which a story is told, the way the characters are portrayed in terms of their understanding of the processes they are involved in, as well as their own participation in the story. In this respect, "narratives seem necessary for empathy [..] they give us access to contexts that are broader than our own contexts and that allow us to understand a broad variety of situations" (Gallagher, 2012). They provide a form/structure that allows us to frame an understanding of others, together with a learned set of skills and practical knowledge that shapes our understanding of what we and others are experiencing. Drawing on Halliday's transitivity framework rooted in Systemic Functional Linguistics, this paper attempts to reveal the (dis)engaged style of empathic student essays from a semantic-grammatical point of view. Specifically, we want to investigate how certain types of processes (i.e., verbs) and constructions (i.e., passive voice) function to cast the essay writers (as main protagonists and agents) as perhaps rather ineffectual, passive, and detached observers of the events around them and of the patient's emotional states. We take a narrative approach to empathy and explore the experiences of premed students at a large university by analysing their self-reflective writing portfolios consisting of a corpus of first-person essays written by them as narrated simulated patient-doctor interactions. The corpus has been previously annotated and organized (Shi et al., 2021; Michalski and Girju, 2022) following established practices and theoretical conceptualizations in psychology (Cuff et al., 2016; Eisenberg et al., 2006; Rameson et al., 2012). Computationally, we introduce a set of informative baseline experiments using state-of-the-art recurrent neural networks and transformer models for classifying the various forms of empathy. As initial experiments show relatively low scores, we measure the presence of several grammatical structures, leveraging Halliday's theory of transitivity, and its correlation with the essays' overall empathy scores. We apply this framework to state-of- the-art and representative neural network models and show significant improvement in the empathy classification task for this dataset. 
Although previous research suggests that narrative-based interventions tend to be effective education-based methods, it is less clear what are some of the linguistic mechanisms through which narratives achieve such an effect, especially applied to empathy, which is another contribution of this research. ## 2 Related Work In spite of its increasing theoretical and practical interest, empathy research in computational linguistics has been relatively sparse and limited to empathy recognition, empathetic response generation, or empathic language analysis in counselling sessions. Investigations of empathy as it relates to clinical practice have received even less attention given the inherent data and privacy concerns. Most of the research on empathy detection has focused on spoken conversations or interactions, some in online platforms (e.g. (Perez-Rosas et al., 2017; Khanpour et al., 2017; Otterbacher et al., 2017; Sharma et al., 2021; Hosseini and Caragea, 2021), very little on narrative genre (Buechel et al., 2018; Wambsganss et al., 2021), and even less in clinical settings. Buechel et al. (2018) used crowdsourced workers to self-report their empathy and distress levels and to write empathic reactions to news stories. Wambsganss et al. (2021) built a text corpus of student peer reviews collected from a German business innovation class annotated for cognitive and affective empathy levels. Using Batson's Empathic Concern-Personal Distress Scale (Batson et al., 1987), Buechel et al. (2018) have focused only on negative empathy instances (i.e., pain and sadness "by witnessing another person's suffering"). However, empathy is not always negative (Fan et al., 2011). A dataset reflecting empathic language should ideally allow for expressions of empathy that encompass a variety of emotions, and even distinguish between sympathy and empathy.1 Footnote 1: Some studies don’t seem to differentiate between sympathy and empathy (Rashkin et al., 2018; Lin et al., 2019). Following a multimodal approach to empathy prediction, R. M. Frankel (2000) and Cordella and Musgrave (2009) identify sequential patterns of empathy in video-recorded exchanges between medical graduates and cancer patients. Sharma et al. (2020) analyzed the discourse of conversations in online peer-to-peer support platforms. Novice writers were trained to improve low-empathy responses and provided writers with adequate feedback on how to recognize and interpret others' feelings or experiences. In follow-up research, they performed a set of experiments Sharma et al. (2021) whose results seemed to indicate that empathic written discourse should be coherent, specific to the conversation at hand, and lexically diverse. To our knowledge, no previous research has investigated the contribution of grammatical constructions like Halliday's transitivity system to the task of empathy detection in any genre, let alone in clinical education.2 Footnote 2: Besides our own research (Shi et al., 2021; Michalski and Girju, 2022; Dey and Girju, 2022; Girju and Girju, 2022). ## 3 Self-reflective Narrative Essays in Medical Training Simulation-based education (SBE) is an important and accepted practice of teaching, educating, training, and coaching health-care professionals in simulated environments (Bearman et al., 2019). Four decades-worth of SBE research has shown that "simulation technology, used under the right conditions... 
can have large and sustained effects on knowledge and skill acquisition and maintenance among medical learners" (McGaghie et al., 2014). In fact, simulation-based education, an umbrella term that covers a very broad spectrum of learning activities from communication skill role-playing to teamwork simulations, is known to contribute to shaping experiences in undergraduate and postgraduate medical, nursing and other health education. In all these activities, learners contextually enact a task which evokes a real-world situation allowing them to undertake it as if it were real, even though they know it is not (Dieckmann et al., 2007; Bearman, 2003). Personal narratives and storytelling can be viewed as central to social existence (Bruner, 1991), as stories of lived experience (Van Manen, 2016), or as a way in which one constructs notions of self (Ezzy, 1998). In this research, we focus on self-reflective narratives written by premed students given a simulated scenario. Simulation is strongly based on our first-person experiences since it relies on resources that are available to the simulator. In a simulation process, the writer puts themselves in the other's situation and asks "what would I do if I were in that situation?" Perspective taking is crucial for fostering affective abilities, enabling writers to imagine and learn about the emotions of others and to share them, too. As empathy is other-directed (De Vignemont and Jacob, 2012; Gallagher, 2012), this means that we, as narrators, are open to the experience and the life of the other, in their context, as we can understand it. Some evidence shows that we can take such reliance on narrative resources to open up the process toward a more enriched and non-simulationist narrative practice (i.e., real doctor-patients interactions in clinical context) (Gallagher, 2012). This study's intervention was designed as a written assignment in which premed students were asked to consider a hypothetical scenario where they took the role of a physician breaking the news of an unfavorable diagnosis of high blood cholesterol to a middle-aged patient3. They were instructed to recount (using first person voice) the hypothetical doctor-patient interaction where they explained the diagnosis and prescribed medical treatment to the patient using layman terms and language they believed would comfort as well as persuade the hypothetical patient to adhere to their prescription. Prior to writing, students completed a standard empathic training reading assignment (Baile et al., 2000). They received the following prompt instructions and scenario information.4 Footnote 3: The patient was referred to as Betty, initially. Later in the data collection, students could also identify the patient as John. Footnote 4: All data collected for this study adheres to the approved Institutional Review Board protocol. Prompt Instructions: Imagine yourself as a physician breaking bad news to a patient. Describe the dialogue between the patient and you, as their primary care physician. In your own words, write an essay reporting your recollection of the interaction as it happened (write in past tense). Think of how you would break this news if you were in this scenario in real life. In your essay, you should be reflecting on (1) how the patient felt during this scenario and (2) how you responded to your patient's questions in the scenario below. Scenario: Betty is 32 years old, has a spouse, and two young children (age 3 and 5). You became Betty's general practitioner last year. 
Betty has no family history of heart disease. In the past 6 months, she has begun experiencing left-side chest pain. Betty's bloodwork has revealed that her cholesterol is dangerously high. Betty will require statin therapy and may benefit from a healthier diet and exercise. With the students' consent, we collected a corpus of 774 essays over a period of one academic year [22]. Following a thorough annotation process, annotators (undergraduate and graduate students in psychology and social work)5 labeled a subset of 440 randomly selected essays at sentences level following established practices in psychology [19, 10, 11]. The labels are: _cognitive empathy_ (the drive and ability to identify and understand another's emotional or mental states; e.g., "She looked tired"); _affective empathy_ (the capacity to experience an appropriate emotion in response to another's emotional or mental state; e.g.: "I felt the pain"); and _prosocial behavior_ (a response to having identified the perspective of another with the intention of acting upon the other's mental and/or emotional state; e.g.: "I reassured her this was the best way"). Everything else was "no empathy". The six paid undergraduate students were trained on the task and instructed to annotate the data. Two meta-annotators, paid graduate students with prior experience with the task, reviewed the work of the annotators and updated the annotation guidelines at regular intervals, in an iterative loop process after each batch of essays6. The meta-annotators reached a Cohen's kappa of 0.82, a good level of agreement. Disagreed cases were discussed and mitigated. At the end, all the essays were re-annotated per the most up-to-date guidelines. Footnote 5: The students were hired based on previous experience with similar projects in social work and psychology. Footnote 6: 10 essays per week In this paper, we collapsed all the affective, cognitive, and prosocial empathy labels into one _Empathy Language_ label - since we are interested here only in emphatic vs. non-empathic sentences. After integrating the annotations and storing the data for efficient search [15], our corpus consisted of 10,120 data points (i.e., sentences) highlighted or not with empathy. Each essay was also rated by our annotators with a score on a scale from 1-5 (one being the lowest) to reflect overall empathy content at essay level. ## 4 Constructions and Stylistic Profiles in Empathic Narrative Essays In CxG, constructions can vary in size and complexity - i.e., morphemes, words, idioms, phrases, sentences. In this paper, we focus mainly on simple sentence-level constructions7, which, since we work with English, are typically of the form S V [O], where S is the subject, V is the verb, and O is the object (e.g., a thing, a location, an attribute). For instance, "Betty took my hand" matches the construction S V O with the semantics <Agent Predicate Goal>. SFG and CxG give the same semantic analysis, modulo some terminological differences [13]. Specifically, they agree that the sentence above describes a process (or a predicate), which involves two participant roles providing the same linking relationship between the semantic and the syntactic structures: an Actor (or Agent) / Subject, and a Goal (Patient) / Object. Footnote 7: We also consider constructions at word level - i.e., verbs. We start by checking whether the subject of a sentence consists of a human or a non-human agent. 
After identifying the grammatical subjects in the dataset's sentences with the Python Spacy package, we manually checked the list of human agents (the five most frequent being \(I\) (24.56%), _She_ (5.76%), _Betty_ (18.43%), _John_ (6.24%), _Patient_ (4.86%)).8 Footnote 8: Other subjects: _Nurse_, _Doctor_, _Family_, _Children_, _Wife_, _Husband_, and _Spouse_ Halliday's transitivity model describes the way in which the world of our experience can be divided by grammar into a manageable set of process types, the most basic of which are: _material processes_ (external actions or events in the world around us; e.g., verbs like "write", "walk", "kick") and _mental processes_ (internal events; e.g., verbs of thinking, feeling, perceiving). We first identify sentences containing material and mental processes by extracting the verbs in each sentence (Table 1). About 75% of the dataset contains such processes, with material processes appearing more frequently than mental ones (by a small margin: 0.9%). Inspired by the success of Halliday's transitivity system on cognitive effects of linguistic constructions in literary texts [19], we also examine a set of construction choices which seem to co-occur in texts as material and mental actions or events. In our quest of understanding empathy expression in student narrative essays, we want to test if such contributions lead to a reduced sense of intentionality, awareness or control for the agentive individual represented (i.e., the essay writer in the role of the doctor), and thus, identifying the stylistic profile of the narrative. Specifically, these constructions are: _Human Actor + Process (HA+P); Body Part + Process (BP+P); Other Inanimate Actor + Process (IA+P); Goal + Process (G+P)_ (see Table 1). We identify HA+P to be the most common construction within our dataset, appearing in just less than half of the sentences (49.82%). The remaining constructions are much rarer with G+P being the least frequent (12.54%). Drawing from (Langacker, 1987), Nuttall (2019) also notes that these experiences can vary in force-dynamic (energetic) quality and thus sentences exhibiting an energetic tone are linked with 'high' transitivity and those with lower or static energy can be linked to 'low' transitivity. In order to identify energetic sentences, we leverage the IBM Watson Tone Analyzer API (Yin et al., 2017) which assesses the emotions, social propensities, and language styles of a sentence. We denote sentences containing high extroversion and high confidence (values > 0.8) as energetic. Sentences with low scores are marked as static. 61.77% of the sentences exhibit a static tone, energetic tone being less frequent. In SFG, active and passive voice plays an important role as well. Nuttall (2019) shows that, in some genres, text indicating a lower degree of agentive control tends to use more passive voice constructions. As this is also relevant to our task, we test whether voice contributes indeed to a reduced sense of intentionality, awareness or control for the Agent (in particular the essay writer playing the doctor's role) and how these features correlate with the overall empathy score at essay level. Using an in-house grammatical-role extraction tool developed on top of Spacy's dependency parser, we find that 66% of sentences use active voice and 34% passive voice.9 77.92% of active-voice sentences exhibit human actor subjects and only 22.08% include non-human actors. Similarly for passive voice, the majority (83.09%) of sentences had human actors. 
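The paper describes an in-house grammatical-role extraction tool built on top of spaCy's dependency parser; a plausible minimal re-implementation of subject extraction and active/passive detection, assuming the small English model (en_core_web_sm) is installed, could look like this. It is a sketch only, not the authors' tool.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def subject_and_voice(sentence: str):
    """Return (first subject, 'passive' or 'active') using spaCy dependency labels."""
    doc = nlp(sentence)
    subject, passive = None, False
    for tok in doc:
        if subject is None and tok.dep_ in ("nsubj", "nsubjpass"):
            subject = tok.text
        if tok.dep_ in ("nsubjpass", "auxpass"):
            passive = True
    return subject, "passive" if passive else "active"

print(subject_and_voice("I reassured her that this was the best way."))  # expected: ('I', 'active')
print(subject_and_voice("She was reassured upon entrance."))             # expected: ('She', 'passive')
```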
Comparing frequencies of active and passive voice across various essay empathy score ranges (Figure 1), we notice that higher empathy essays (scores >3) seem to rely more on active voice (65-70% of the sentences in active voice) as opposed to lower empathy essays (scores < 3), which have less than 65% of sentences in active voice. Footnote 9: The active/passive voice ratio varies per genre (Strunk Jr and White, 2007). Note that in a sentence using passive voice, the subject is acted upon, which shows the main character's degree of detachment, which is of interest here. Stylistic research has also shown (Nuttall, 2019) the importance of movement of body parts as non-human agents. We, too, parsed sentences for the use of body parts, i.e., _eyes_, _arms_, _head_, and curated a list based on anatomical terminology as defined by wiktionary.org (2022), resulting in about 18.61% of the dataset sentences (statistics for the top 5 most common body parts are in Table 2). Table 1 summarizes all the identified constructions and stylistic features discussed in this section. ## 5 Empathy Classification Task Our ultimate goal is to build an informed and performant classifier able to determine the degree of empathetic content of a medical essay overall and at the sentence level. Taking advantage of form-meaning-style mappings in the language system, in this paper we build and test a number of state-of-the-art classifiers enriched with varied constructions and stylistic features (Table 1), which are described next. ### Identification of Sentence Themes In medical training, students learn not only how to diagnose and treat patients' medical conditions, but also how to witness the patient's illness experience. In fact, in practical interactions with patients, they often switch between these positions: empathizing with the patient's situation (i.e., witnessing what it is like for the patient), and providing medical care (i.e., understanding what they need medically). As such, we wanted to capture the distribution of such empathetic content and medical information in our narrative essays of hypothetical doctor-patient interactions.
Figure 1: Frequency distribution (%) of voice in essays for various overall empathy score ranges.
Specifically, we looked at recurring topics within sentences and identified the following themes in our dataset at the sentence level: _Medical Procedural Information; Empathetic Language; Both_ (Medical and Empathetic Language); and _Neither_. Sentences referring to _Medical Procedural Information_ were identified based on keyword matching following established medical term vocabulary generated from Dr. Kavita Ganesan's work on clinical concepts [1]. Sentences containing _Empathetic Language_ were already annotated manually by our annotators for each essay at the sentence level (see Section 3). Sentences containing both medical procedural information and empathetic content were marked as _Both_, while the remaining sentences were marked as _Neither_. Table 3 shows these categories, their definitions, examples, and counts per category (10,120 sentences overall). We also give examples of two essays highlighted with these themes in the Appendix (Section 7). In the next sections we present the classification results of various multi-class machine learning models (for each of the 4 themes: _Medical Procedural Information_, _Empathetic Language_, _Both_, and _Neither_).
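The theme assignment described above can be sketched as keyword matching combined with the manual empathy annotation. The medical term list below is a tiny illustrative stand-in for the clinical-concept vocabulary the authors actually used, and the matching rule is an assumption.

```python
MEDICAL_TERMS = {"cholesterol", "statin", "bloodwork", "diagnosis", "vitals"}  # illustrative subset

def theme(sentence: str, has_empathy_label: bool) -> str:
    """Assign one of the four sentence themes from a medical-term lexicon
    and the manual empathy annotation."""
    words = {w.strip('.,;:!?"\'').lower() for w in sentence.split()}
    has_medical = bool(words & MEDICAL_TERMS)
    if has_medical and has_empathy_label:
        return "Both"
    if has_medical:
        return "Medical Procedural Information"
    if has_empathy_label:
        return "Empathetic Language"
    return "Neither"

print(theme("Betty's bloodwork revealed dangerously high cholesterol.", False))
# Medical Procedural Information
```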
### Baseline Models and Analysis In evaluating several state-of-the-art machine learning algorithms, we started with two representative baseline models: support vector machines (SVM) and logistic regression (logR). As we are interested in observing the performance of deep learning methods, we also experiment with long-short term memory (LSTM) [1], bidirectional long-short term memory (bi-LSTM) [13], and convolutional neural network (CNN) [16] models; additionally, we use the transformer models BERT [14] and roBERTa. \begin{table} \begin{tabular}{p{42.7pt} p{142.3pt} p{142.3pt}} \hline **Feature** & **Frequency** & **Definition** \\ \hline _Active_ & 62.12\% & the subject of the sentence is the one doing the action expressed by the verb & chair.” \\ _Passive_ & 37.88\% & the subject is the person or thing acted on or affected by the verb’s action & \\ _Material_ & 37.39\% & external actions or events in the world around us & \\ _Mental_ & 36.49\% & events/feelings expressed by a user & \\ _HA+P_ & 49.82\% & consists of a human actor and a material/mental process & \\ _BP+P_ & 15.85\% & consists of a non-human actor related to body parts in material/mental process & \\ _IE+P_ & 18.34\% & consists of an inanimate actor in material/mental process & \\ _G+P_ & 12.54\% & consists of the passivisation of material/mental process and deletion of actor e.g., high extroversion and confidence & \\ _Energetic_ & 38.23\% & e.g., high extroversion and confidence & \\ _Static_ & 61.77\% & e.g., low extroversion and confidence & \\ \hline \end{tabular} \end{table} Table 1: Our set of SFG’s transitivity constructions with their distribution and examples. Note that the total distribution should not add to 100%, as these are not mutually exclusive features. \begin{table} \begin{tabular}{p{42.7pt} p{142.3pt} p{142.3pt} p{142.3pt}} \hline **Body Part** & **POS Used** & **Frequency** & **Example** \\ \hline _Eye_ & subject, indirect object, prepositional object, & 42.96\% & ”I saw in her eyes tears forming as she realized the gravity of the issue at hand.” \\ _Hand_ & subject, prepositional object, direct object, direct object & 16.14\% & ”John began clasping his hands.” \\ _Head_ & direct object, indirect object & 8.60\% & ”John shook his head as he sat down across from me.” \\ _Shoulder_ & subject, prepositional object, direct object & 5.47\% & ”The patient shrugged his shoulders.” \\ _Body_ & subject, prepositional object, direct object & 4.99\% & ”The vitals showed that the patient’s body was not in its healthiest form.” \\ \hline \end{tabular} \end{table} Table 2: Most common body parts in the empathy essay dataset As we are performing sentence classification, our features are unigrams (single words). For the logistic regression models, we used a L2 regularization and for the SVM models, a linear kernel function. We initialized the embedding layers in our neural models (LSTM, bi-LSTM, CNN) with GloVe embeddings since the expression of empathy involves larger units than words, and embeddings are known to better capture contextual information. We further decided to apply an attention layer to these models to learn patterns that may improve the classification. For the transformer BERT and roBERTa models, we use the default embeddings and apply a dropout layer with probability 0.4 which helps to regularize the model; we use a linear output layer and apply a sigmoid on the outputs. For each type of theme, we reserve an 80/20 training/test ratio, with 5-fold cross validation. 
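A minimal sketch of the BERT-based sentence classifier described above (default embeddings, a dropout layer with probability 0.4, a linear output layer, and a sigmoid on the outputs), using the Hugging Face transformers library. The checkpoint name and head details are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ThemeClassifier(nn.Module):
    """BERT encoder with the dropout + linear + sigmoid head described above."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.drop = nn.Dropout(0.4)
        self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        return torch.sigmoid(self.out(self.drop(pooled)))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["I noticed Betty looked confused."], return_tensors="pt",
                  padding=True, truncation=True)
model = ThemeClassifier()
scores = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 4)
```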
As our dataset is imbalanced, we report the precision, recall, and F1-score (harmonic mean of the precision and recall) as shown in Table 4. We observe that the classification of _Empathetic Language_ is particularly difficult. The best model is the transformer BERT model which achieves an F-1 score of 0.58. On the other hand, sentences with _Medical Procedural Information_ are much easier to identify with most classifiers achieving an F-1 score above 0.65. Sentences labeled _Both_ are increasingly difficult (best classifier score of 0.6 F-1). Classification scores for sentences containing _Neither_ fall just short of scores from _Medical Procedural Information_ sentences. To better understand how these themes correlate with the overall empathy score at essay level, we compare frequencies and distribution of each theme for various essay empathy score ranges (Figure 2) across the entire dataset. High empathy essays \begin{table} \begin{tabular}{l c c|c c c c|c c c c c} \hline \hline **Classifier** & \multicolumn{3}{c}{**Medical Procedural Information**} & \multicolumn{3}{c}{**Empathetic Language**} & \multicolumn{3}{c}{**Both**} & \multicolumn{3}{c}{**Neither**} \\ \cline{2-13} & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 \\ \hline SVM & 0.70 & 0.68 & 0.69 & 0.52 & 0.61 & 0.56 & 0.49 & 0.47 & 0.48 & 0.78 & 0.39 & 0.51 \\ LogR & 0.62 & 0.67 & 0.64 & 0.49 & 0.54 & 0.51 & 0.51 & 0.53 & 0.52 & 0.68 & 0.61 & 0.64 \\ LSTM & 0.64 & 0.69 & 0.67 & 0.51 & 0.54 & 0.52 & 0.59 & 0.53 & 0.56 & 0.66 & 0.61 & 0.63 \\ biLSTM & 0.65 & 0.7 & 0.68 & 0.51 & 0.54 & 0.52 & 0.56 & 0.53 & 0.54 & 0.68 & 0.62 & 0.65 \\ CNN & 0.70 & 0.71 & 0.70 & 0.52 & 0.54 & 0.53 & 0.64 & 0.53 & 0.57 & 0.71 & 0.63 & 0.66 \\ BERT & 0.69 & 0.72 & 0.70 & 0.55 & 0.61 & 0.58 & 0.57 & 0.63 & 0.60 & 0.68 & 0.65 & 0.66 \\ \hline \hline constructionBERT & 0.71 & 0.73 & 0.72 & 0.64 & 0.67 & 0.65 & 0.76 & 0.58 & 0.66 & 0.78 & 0.72 & 0.75 \\ constructionBERT-_Voice:Active_ & 0.71 & 0.73 & 0.72 & 0.58 & 0.63 & 0.65 & 0.64 & 0.64 & 0.62 & 0.77 & 0.72 & 0.74 \\ constructionBERT-_Voice:Passive_ & 0.71 & 0.73 & 0.72 & 0.65 & 0.67 & 0.66 & 0.76 & 0.61 & 0.67 & 0.78 & 0.72 & 0.75 \\ constructionBERT-_Process:Material_ & 0.70 & 0.72 & 0.71 & 0.61 & 0.65 & 0.63 & 0.68 & 0.58 & 0.63 & 0.78 & 0.72 & 0.75 \\ constructionBERT-_Process:Mental_ & 0.70 & 0.72 & 0.71 & 0.59 & 0.63 & 0.61 & 0.66 & 0.58 & 0.62 & 0.78 & 0.71 & 0.74 \\ constructionBERT-_HA+P_ & 0.69 & 0.72 & 0.70 & 0.59 & 0.64 & 0.62 & 0.66 & 0.58 & 0.62 & 0.68 & 0.69 & 0.68 \\ constructionBERT-_PA+P_ & 0.71 & 0.73 & 0.72 & 0.55 & 0.64 & 0.59 & 0.61 & 0.63 & 0.62 & 0.71 & 0.72 & 0.71 \\ constructionBERT-_IE+P_ & 0.70 & 0.73 & 0.71 & 0.61 & 0.64 & 0.62 & 0.73 & 0.57 & 0.64 & 0.76 & 0.72 & 0.74 \\ constructionBERT-_G+P_ & 0.71 & 0.73 & 0.72 & 0.64 & 0.66 & 0.65 & 0.74 & 0.56 & 0.64 & 0.78 & 0.72 & 0.75 \\ constructionBERT-_Tone:Energetic_ & 0.71 & 0.73 & 0.72 & 0.58 & 0.62 & 0.60 & 0.66 & 0.57 & 0.61 & 0.78 & 0.72 & 0.75 \\ constructionBERT-_Tone:Static_ & 0.71 & 0.73 & 0.72 & 0.64 & 0.62 & 0.63 & 0.71 & 0.58 & 0.64 & 0.78 & 0.73 & 0.75 \\ \hline \hline \end{tabular} \end{table} Table 4: Precision, recall and F1 scores of all baseline classifiers on the imbalanced test dataset: 770 _Medical Procedural Information_, 722 _Empathetic Language_, 433 _Both_, 98 _Neither_ sentences \begin{table} \begin{tabular}{l c} \hline \hline **Theme** & **Freq.** & **Example** \\ \hline _Medical Procedural Information_ & 37.39\% & \begin{tabular}{l} “The patient’s vitals showed that 
his body was not healthy and it was necessary to make some diet and lifestyle changes." \\ _Empathetic Language_ \\ \end{tabular} & 36.49\% & \begin{tabular}{l} “I noticed Betty looked confused and so I tried to reassure her we would do everything possible to make the changes in her lifestyle." \\ _Both_ \\ \end{tabular} \\ _Neither_ & 4.84\% & \begin{tabular}{l} “I knew the statin treatment could be difficult, so I wanted to make sure Betty felt comfortable and understood the procedure." \\ _”The file was left on the counter, and I picked it up before going in to see Betty." \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 3: Examples and distribution of identified themes in sentences Figure 2: Frequency distribution (%) of themes in essays for various empathy score ranges (scores >3) tend to show a large amount of _Empathetic Language_ and _Both_, while low empathy essays (scores < 3) seem to favor _Medical Procedural Information_ language. **Heatmaps of Medical Narrative Essays**. It is also interesting to visually analyze the distribution of these themes in the layout of the narrative essays. Thus, for each essay, we highlight the sentences containing each theme and generate heat maps that might highlight high theme concentrations. We standardized the format of each essay to an A4 paper,10 generating a 42 x 14 matrix. 11 For each essay and position - i.e., (row, column) - we note the occurrence of each theme. We then build a heat map from these counts, thus generating 3 heatmaps, one for each theme along the following overall empathy score ranges: (1-2), (2-3), (3-4), and (4-5) (Figure 3). Footnote 10: Times New Roman, size 12: 42 lines of 14 words each Footnote 11: We generated a separate heatmap (size: 81 x 14) for 24 essays since these were much longer and didn’t fit on a standard A4 paper. These showed similar position patterns. The heatmaps for theme _Medical Procedural Information_ for low empathy score essays show darker colors (purple) indicating a higher frequency of use at the beginning and middle of the essay. Lighter colors (orange and yellow) showcasing lower concentrations of the theme seems to be more prevalent in higher empathy score essays. _Empathetic Language_ tends to increase in coverage (i.e., darker color portions) from low to high-score empathy essays, with a preference toward the end of the essay.12_Both_ themes seem to concentrate, specifically towards the top and middle of the essays for high empathy scores (darker colors). Low empathy essays also show some shades of purple (i.e. some concentration) towards the bottom and lower third of the essays. Footnote 12: A closer look indicates that students who wrote low-empathy essays showed a tendency to use some emotional language in the last paragraph - which appeared rather rushed and forced. ### Incorporating Halliday Features into the Theme Classifier In this section, we seek to improve our sentence theme classifier by incorporating the constructions and stylistic features identified in Section 4. For each sentence, we append a Boolean value indicating whether each feature is present in the given sentence - e.g., if a sentence is in active voice (feature _Active_ is 1; feature _Passive_ is 0); if the sentence contains a HA+P (feature value is 1), and so on. Since in our baseline experiments the BERT model gave the best results across all 4 themes, we extend it here with all the features (construction-BERT) and report new scores (see bottom part of Table 4). 
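The paper states that a Boolean value is appended for each construction feature; one plausible realization, sketched below, concatenates the Boolean indicator vector with BERT's pooled sentence representation before the output layer. This is an assumption about how the features are combined, not the authors' confirmed implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class ConstructionBERT(nn.Module):
    """BERT classifier that appends Boolean transitivity-construction indicators
    (active/passive voice, material/mental process, HA+P, BP+P, IE+P, G+P,
    energetic/static tone) to the pooled sentence representation."""
    def __init__(self, n_features: int = 10, n_classes: int = 4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.drop = nn.Dropout(0.4)
        self.out = nn.Linear(self.bert.config.hidden_size + n_features, n_classes)

    def forward(self, input_ids, attention_mask, construction_feats):
        pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        combined = torch.cat([self.drop(pooled), construction_feats.float()], dim=-1)
        return torch.sigmoid(self.out(combined))
```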
Indeed, the inclusion of these features yields better performance, with a large increase for most of our themes including, _Empathetic Language_, _Both_, and _Neither_, and smaller performance increases in _Medical Procedural Information_. Leave-one-out feature contribution experiments (see bottom of Table 4) show that removing _Voice: Active_ and _Voice: Passive_ slightly decreases performance in _Empathetic Language_ and _Both_ (with _Voice: Active_ providing the highest decrease). Removing _Processes_ also shows a fair decrease in all themes except _Neither_ which shows no change in performance. A deeper analysis indicates that _Processes: Material_ helps with _Medical Procedural Information_ but hurts performance on _Empathetic Language_. The constructions _HA+P_ and _BP+P_ are most important for classification; the removal of _BP+P_ yields the lowest F-1 score measure for detecting empathy. This shows the doctor (i.e., the student writer) paid particular attention to the patient's emotional state (thus showing empathy). Body parts in this type of discourse are particularly associated with non-verbal emotional language, which is highly indicative of empathy. _HA+P_ is also an important feature for the theme _Neither_. Removal of _IE+P_ gives a slight decrease in performance, while _G+P_ has almost no effect on the classification results. Finally, the _Tone: Energetic_ and _Tone: Static_ features (constructionBERT-_Tone_) show to be important for the themes _Medical Procedural Information_, _Empathetic Language_, and _Both_. For _Tone: Energetic_, there is a 0.02 decrease in F-1 for medical procedural information, and a 0.05 for _Empathetic Language_ and _Both_. For _Tone: Static_, we observe a decrease in performance for _Empathetic Language_ by 0.02 and _Both_ by 0.01. With our binary classification task, we see similar patterns as constructionBERT-Tone yields much lower performances. The energetic and static tones yield 0.004 and 0.01 increases in F-1 scores for _Medical Procedural Information_ and _Empathetic Language_. Our analysis also showed that G+P (Goal+Process), Processes (Mental and Material), and HA+P (Human Actor+Process) were also increasingly important for score improvements. Interested in directly comparing the _Medical Pro cedural Information_ and _Empathetic Language_ sentences, we further built a binary version of the simple BERT model, and another of constructionBERT, and found these tasks to be slightly easier. The binary BERT model achieved an F-1 score of 0.75 for _Medical Procedural Information_ and a 0.62 for _Empathetic Language_. After adding the generated features (i.e., the binary constructionBERT), we see a small increase in F-1 scores (+0.01 for _Medical Procedural Information_ and +0.03 for _Empathetic Language_). Overall, the results of the effects of transitivity features on meaning, perceived agency and involvement of the Agent are in line with those obtained for literary genre texts by Nuttall (2019) through manual inspection. More specifically, the stylistic choices given by such linguistic constructions seem to be good indicators of the degree of perceived agency an Agent has in relation to others and the environment, as tested here for the empathy task on our dataset. In research on stylistics, the set and usage of such stylistic constructions and features in a text is known as the stylistic profile of the text. 
Encouraged by the correlations between Halliday's features with our essay level empathy scores, we would like to extrapolate and maintain that a set of rich stylistic constructions (like those tested in this research) can ultimately lead to informative **Empathy Profiles** - essay level form-meaning-style structures that can give an indication of the degree of social and empathetic detachment of the doctor toward the patient. Of course, while more research is needed in this direction, we believe we showed here the potential of such an approach to the task of empathy detection classification overall, and to clinical context in particular. ## 6 Conclusions Medical education incorporates guided self-reflective practices that show how important it is for students to develop an awareness of the emotional and relational aspects of the clinical encounter with their patients Warmington (2019). The way people identify themselves and perform in particular roles and in relation to others brings together a specific set of values, attitudes, and competencies that can be supported through ongoing self-reflection. Such interactions can be captured in language via constructions as part of CxG and Halliday's transitivity system. In this paper, we bring various aspects of these theories in a deep learning computational framework to model empathetic language in a corpus of essays written by premed students as narrated simulated patient-doctor interactions. We start with baseline classifiers (state-of-the-art recurrent neural networks and transformer models). Then, we enrich these models with a set of linguistic constructions proving the importance of this novel approach to the task of empathy classification for this dataset. Our results indicate the potential of such constructions to contribute to the overall empathy profile of first-person narrative essays. Figure 3: Heatmaps for themes in sentences of narrative essays across all overall empathy score ranges: Row#1 shows heatmaps for _Medical Procedural Information_; Row#2 for _Empathetic Language_; Row#3 for _Both_. Dark colors (purple) indicate that many essays exhibit the theme in the respective position of the essay. Light colors (yellow) indicate a small number of essays have occurrences of the theme for the given position.
2310.07081
Crossing the Threshold: Idiomatic Machine Translation through Retrieval Augmentation and Loss Weighting
Idioms are common in everyday language, but often pose a challenge to translators because their meanings do not follow from the meanings of their parts. Despite significant advances, machine translation systems still struggle to translate idiomatic expressions. We provide a simple characterization of idiomatic translation and related issues. This allows us to conduct a synthetic experiment revealing a tipping point at which transformer-based machine translation models correctly default to idiomatic translations. To expand multilingual resources, we compile a dataset of ~4k natural sentences containing idiomatic expressions in French, Finnish, and Japanese. To improve translation of natural idioms, we introduce two straightforward yet effective techniques: the strategic upweighting of training loss on potentially idiomatic sentences, and using retrieval-augmented models. This not only improves the accuracy of a strong pretrained MT model on idiomatic sentences by up to 13% in absolute accuracy, but also holds potential benefits for non-idiomatic sentences.
Emmy Liu, Aditi Chaudhary, Graham Neubig
2023-10-10T23:47:25Z
http://arxiv.org/abs/2310.07081v2
Crossing the Threshold: Idiomatic Machine Translation through Retrieval Augmentation and Loss Weighting ###### Abstract Idioms are common in everyday language, but often pose a challenge to translators because their meanings do not follow from the meanings of their parts. Despite significant advances, machine translation systems still struggle to translate idiomatic expressions. We provide a simple characterization of idiomatic translation and related issues. This allows us to conduct a synthetic experiment revealing a tipping point at which transformer-based machine translation models correctly default to idiomatic translations. To expand multilingual resources, we compile a dataset of \(\sim\) 4k natural sentences containing idiomatic expressions in French, Finnish, and Japanese. To improve translation of natural idioms, we introduce two straightforward yet effective techniques: the strategic upweighting of training loss on potentially idiomatic sentences, and using retrieval-augmented models. This not only improves the accuracy of a strong pretrained MT model on idiomatic sentences by up to 13% in absolute accuracy, but also holds potential benefits for non-idiomatic sentences.1 Footnote 1: Code and data available at [https://github.com/nightingal3/idiom-translation/](https://github.com/nightingal3/idiom-translation/) ## 1 Introduction An idiom is a conventionalized expression in which the intended meaning differs from its literal translation. The translation of idioms has remained a problem for state-of-the-art research and commercial translation systems, as idioms tend to be translated literally (Dankers et al., 2022; Shao et al., 2017; Anastasiou, 2010). Failure to translate these expressions correctly may lead to incomprehensible translations, particularly in literary text (Toral and Way, 2018). To illustrate the difficulty of understanding mistranslated idioms, we present mistranslations from commercial systems in Table 1.3 Footnote 3: Translations from commercial systems were collected at the end of 2022. Although idiom translation has been recognized as a problem even before the advent of neural machine translation (Bar-Hillel, 1952; Wehrli, 1998), most work has focused on identifying and evaluating the problem cross-linguistically (Baziotis et al., 2022; Dankers et al., 2022), or on interpreting the behaviour of transformer-based models in translating or memorizing idioms (Haviv et al., 2022; Dankers et al., 2022). Others pose idiom identification and paraphrasing as a separate task from machine translation (Pershina et al., 2015). Comparatively fewer recent works have attempted to remedy this problem. Early work made use of idiom dictionaries and direct substitution, or example-based machine translation (Salton et al., 2014; Nagao, 1984). However, we would ideally want to make use of the contextual translation abilities of neural models. Data augmentation and the creation of new datasets have helped address this problem (Agrawal et al., 2018), but it may also be possible to use existing data resources more effectively, especially for higher-resource languages. We first frame the general problem of non-compositional translation, which encompasses the translation of idioms and other multi-word expressions that cannot be translated word-for-word (SS2). 
We then perform synthetic experiments in a very simple case, finding that transformer-based machine translation models generally translate word-for-word until a proportional threshold of sentences contain non-compositional expressions, at which point the translations flip to being correct (SS4.1). We evaluate translations by commercial models in three natural languages, and find a drop in performance on idiomatic sentences and stronger performance on more common idioms (SS4.2). We hypothesize that this may reflect similar trends as exist in processing other long-tail phenomena, and similar tactics to those used to deal with rare phenomena may work (Kandpal et al., 2022). With this intuition, we improve the idiomatic translations generated by a strong pretrained machine translation model, \(\Delta\)LM Ma et al. (2021), without harming the translation quality of literal expressions. To contribute resources toward documenting idioms and improving their translation cross-linguistically, we create a dataset of sentences containing idiomatic expressions in three languages (French (fr), Finnish (fi) and Japanese (ja) (SS3). We propose two simple but effective ways to improve translation of idioms, namely upweighting training loss on potentially idiomatic sentences and retrieval augmentation (SS5). We find that this can improve the idiomatic translation abilities of the model significantly, by an average of 10.4% in absolute accuracy (SS7.1). Moreover, this does not harm translation of sentences where the literal sense of the idiom is used, and it improves translation of out-of-distribution sentences in French and Finnish as well. We perform human evaluation and error analysis, and find that the rate of severe semantic errors is reduced by an average of 7.52% absolute accuracy (SS7.2). The ultimate aim for machine translation is to ensure accessibility for all texts. This requires addressing idiomatic phrases, culturally-informed language, and complex semantics. We demonstrate the potential for enhancing idiom translation using existing resources. ## 2 Non-Compositional Translation ### Background on Idioms Idioms are commonly understood to be fixed expressions that contradict the principle of compositionality in language, which is to say that their meaning cannot be predicted from the meanings of their parts Radford (2004); Portner (2005). Idioms occur relatively frequently in all languages, and are often challenging for non-native speakers Cooper (1999). For instance, a literal translation of one Portuguese idiom is _"it is from little that you twist the cucumber"_. This is difficult to understand. However, an equivalent English expression is "As the twig is bent, so is the tree inclined", which refers to actions during childhood influencing behaviours that people have as adults Unbabel (2019). This example illustrates the importance of translating idioms using equivalent idioms from the target culture, or a paraphrase if there is no equivalent. Idiomatic expressions are heavily shaped by the culture of language speakers, including religious beliefs, history, geography, and cuisine. For instance, food-related idioms in English tend to refer to foods such as beef and potatoes, while in Chinese, these idioms tend to refer more to rice and tofu Yang (2010). Cross-cultural knowledge is important in choosing a translation that conveys the proper intent to readers in the target language Liu (2012). 
Overly-literal translations and lack of broader context are two reasons why machine translation is still not at parity with human translators, particularly when translating literary text Matusov (2019); Omar and Gomaa (2020); Poibeau (2022). Table 1: Examples of idiomatic sentences mistranslated by commercial systems (DeepL and Google Translate) in French, Japanese, and Finnish; each row lists the source sentence, the reference target, the system's overly literal translation, the language, and the system. ### Formal Definition We use the idea of non-compositionality to frame idiomatic translation more precisely. Let \(\mathcal{X}=\{x_{1},...,x_{N}\}\) be the set of tokens in the source language, and \(\mathcal{Y}=\{y_{1},...,y_{M}\}\) be the set of tokens in the target language. Suppose that we have an oracle function \(\text{TRANSLATE}:\mathcal{X}^{*}\rightarrow\mathcal{Y}^{*}\) that always produces a correct translation. We can imagine this to be a helpful speaker who is perfectly familiar with both languages and never misreads text. Then we can say that a multi-token string requires non-compositional translation if it can be translated correctly by the oracle as a whole, but it cannot be translated correctly by individually translating parts of the sentence and joining them (according to the target language's word order). In other words, for a string of tokens \(x_{1},...,x_{n}\),4 Footnote 4: \(\bigoplus_{X}\) denotes string concatenation given the word order of language \(X\), i.e. if the word order is SVO, the tokens belonging to the subject should be placed in front of the tokens belonging to the verb, and so on. \[\bigoplus_{i=1_{Y}}^{n}\textsc{translate}(x_{i})\neq\textsc{translate}(\bigoplus_{i=1_{X}}^{n}x_{i}) \tag{1}\] We note that this definition is very general and also includes other phenomena such as multi-word expressions and named entities. However, we can now use this definition to create a relevant synthetic task, allowing us to observe translation compositionality under different settings (SS4.1). ## 3 Idioms and Data Collection We can use the formal definition from the previous section to generate synthetic data for experiments. However, we ultimately want to improve translation of real idioms. To do so, we collect a dataset of natural sentences to evaluate commercial systems and the model we seek to improve. Although a large corpus of potentially idiomatic expressions exists in English Haagsma et al. (2020), there are no readily accessible equivalents in other languages.
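To make the definition concrete before turning to data collection, the check in Eq. (1) can be illustrated with a toy oracle over a tiny dictionary. This is a minimal sketch: the word and phrase tables below, including the pairing of "kick the bucket" with the French idiom "casser sa pipe", are illustrative assumptions, and target-side reordering is ignored.

```python
# Toy illustration of Eq. (1): a string requires non-compositional translation
# when translating it token-by-token and joining the pieces differs from
# translating it as a whole. The tables below are illustrative assumptions.
WORD_TABLE = {"kick": "donner un coup de pied", "the": "le", "bucket": "seau"}
PHRASE_TABLE = {"kick the bucket": "casser sa pipe"}  # idiomatic equivalent

def translate(tokens):
    """Oracle TRANSLATE: prefer a whole-phrase entry, otherwise go word-for-word."""
    phrase = " ".join(tokens)
    if phrase in PHRASE_TABLE:
        return PHRASE_TABLE[phrase]
    return " ".join(WORD_TABLE[t] for t in tokens)

def requires_non_compositional_translation(tokens):
    """Eq. (1), ignoring target-side word-order differences for simplicity."""
    joined_piecewise = " ".join(translate([t]) for t in tokens)
    return joined_piecewise != translate(tokens)

print(requires_non_compositional_translation(["kick", "the", "bucket"]))  # True
print(requires_non_compositional_translation(["the", "bucket"]))          # False
```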
Therefore, we collected idioms in French, Finnish, and Japanese from language-learning sites, listed in Appendix B. These languages were chosen for phylogenetic diversity, and due to availability of commercial translation systems. In total, there were 148 French idioms collected, 92 in Finnish, and 1336 in Japanese. To collect sentences containing these idioms, we matched on lemmatized forms from the 2018 version of OpenSubtitles Lison et al. (2018), where lemmatization was performed with Stanza Qi et al. (2020). In total, there were 85632 French sentences containing potentially idiomatic expressions, 51811 Finnish sentences, and 23018 Japanese sentences. To filter out unaligned sentences, we scored each source and reference sentence using COMET-QE Rei et al. (2020) and removed the bottom 10% of each language's sentences by COMET-QE scores. Some idioms have a plausible literal meaning (such as "kick the bucket" to mean kicking a physical bucket). To make sure that all examples in the idiomatic test set were actually idiomatic, we sorted sentences into an idiomatic test set where the idiomatic meaning of a phrase was used (e.g. "to die") and a literal test set, where the literal meaning of the phrase was used (e.g. kicking a physical bucket). The first 100 examples containing each idiom's lemmatized form were collected, and up to the first 3 (for Japanese) or 5 (for Finnish and French) literal and figurative examples in this set were collected to create the test set. This was to avoid dominance of very common idioms in the test set. This created two test sets related to the idiom list for each language, the _idiomatic_ and _literal_ test sets. To validate these judgments, we hired native annotators in French and Finnish. They were presented with examples from the final literal and idiomatic test sets in a shuffled order, and asked to label them with idiomatic, literal, or N/A labels if they didn't think it was an instance of either. Agreement Krippendorff (1970); Castro (2017)) in both cases was moderately high (French \(\alpha=0.5754\), Finnish \(\alpha=0.6454\)). Details can be found in Appendix D. Finally, we collect two random test sets, one which is in-domain and another which is out-of-domain. For the in-domain test set, we simply select sentences from the development set of OpenSubtitles (see subsection 6.2 for details on our split of OpenSubtitles). For the out-of-domain test set, we use the Ted Talks corpus Reimers and Gurevych (2020). This is to ensure that translation quality of other, unrelated sentences is not impacted by any modifications meant to improve translation of idioms. Topics discussed and vocabulary used in Ted Talks may be slightly different from what is discussed in movies or TV shows, so training the model on OpenSubtitles and testing on Ted Talks allows us to evaluate model generalization. For both test sets, to control for translation length as a source of difficulty, sentences were length-matched on the target side with corresponding sentences in the idiomatic set. This created the _random_ set, which is the same size as the idiomatic test set. All three test sets are summarized in Table 2. ## 4 Evaluating Non-Compositional Translation ### Artificial Language Translation We first use the definition of non-compositional translation in (SS2) to create a synthetic task. This allows us to gain an understanding of how much data is required to memorize non-compositional patterns. 
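The lemma-matching step used to build these test sets can be sketched as follows. This is an illustrative reconstruction, not the pipeline used to build the released dataset: the helper functions, the contiguous-match requirement, and the example idiom are assumptions.

```python
import stanza

stanza.download("fr")  # French models for tokenization, POS tagging, and lemmatization
nlp = stanza.Pipeline(lang="fr", processors="tokenize,mwt,pos,lemma")

def lemmas(text):
    """Lemmatize a string into a flat list of lemmas."""
    return [w.lemma for s in nlp(text).sentences for w in s.words]

# Idioms are stored by their lemmatized forms; one illustrative entry.
idiom_lemmas = [tuple(lemmas("avoir la dalle"))]

def contains_idiom(sentence):
    """True if any idiom appears as a contiguous lemma sequence (an assumption;
    the paper only states that matching was done on lemmatized forms)."""
    sent = lemmas(sentence)
    for idiom in idiom_lemmas:
        n = len(idiom)
        if any(tuple(sent[i:i + n]) == idiom for i in range(len(sent) - n + 1)):
            return True
    return False

print(contains_idiom("Tu as la dalle ?"))
```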
Although this experiment is not realistic to natural language (notably, there is no token-level ambiguity in this experiment), we note that using synthetic experiments allows us to easily extend the data generation setup and examine model behaviour along many different conditions, such as informativity. The source language in these experiments was composed of tokens 0 through 9, \(X=\{0,1,2,...,9\}\). The target language was produced by adding 10 to each token, \(Y=\{10,...,19\}\). The translation rule was to add 10 to the value of each token in the source language, e.g. \(0\to 10\), \(1\to 11\). We add a single non-compositional rule that doesn't follow this trend, \(0\ \ 1\to 12\) (rather than \(0\ 1\to 10\ 11\)). We limited the maximum sequence length to 6 tokens. We generated synthetic training corpora of several sizes containing different numbers of occurrences of the non-compositional rule \(0\ \ 1\to 12\). The number of training sentences ranged from 100k to 10M, while the number of noncompositional occurrences ranged from 10 to 1M. We examined two informativity conditions, corresponding to the case where the context provides no information (tokens are randomized around the non-compositional expression), and the context being perfectly informative. The perfect informativity condition was achieved by adding the canary token "11" to the source vocabulary, and only inserting this token prior to the non-compositional pattern "0 1". We experimented with three different transformer sizes Vaswani et al. (2017), each of which had a hidden dimension and embedding size of 512, as well as 16 attention heads. Only the number of encoder and decoder layers varied, such that the small transformer had 3 encoder and decoder layers, the medium transformer 8, and the large transformer 16. We fix the number of epochs for the small, medium and large models to respectively be 10, 20, and 30 in the non-informative case and 15, 15 and 25 in the informative case.5 Further training details can be found in Appendix A. Footnote 5: The number of training epochs was determined by the number of epochs it took for the validation loss to plateau in the 100k size corpus with 1k non-compositional examples, rounded up to multiples of 5. This was done to mimic the typical training process for MT models, which are trained until loss or accuracy plateaus on a general dev set. Since idiomatic expressions tend to be uncommon compared to literal ones, there may not be many in the dev or train sets, and so the model’s performance on idiomatic expressions may not be tracked. Although this may seem like a simple task, we found it surprisingly difficult for models to learn this non-compositional pattern. Results in each setting, averaged across 5 random seeds, are presented in Figure 1. Especially for the small model, there is a sharp gradation from translating none of the non-compositional expressions correctly to translating them all correctly, which occurs when roughly 10% of training data contains a non-compositional pattern. A similar trend exists for larger models, but the threshold is less distinct. This corroborates the tendency for transformers to translate non-compositional phrases literally Dankers et al. (2022). Comparatively less data is required when the context is informative, but the trends remain similar to the non-informative case. As model size and corpus size increase, the rate of correct translations for non-compositional examples actually drops, contrary to expectation. 
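The synthetic corpora used in this experiment can be generated with a short script. The sketch below follows the description above (one non-compositional rule, a maximum length of 6, and an optional canary token for the informative condition); details such as how the canary token itself is translated and the 1% rule frequency in the final line are illustrative assumptions.

```python
import random

SRC_MAX = 9                        # source tokens are 0..9, targets are source + 10
RULE_SRC, RULE_TGT = [0, 1], [12]  # non-compositional rule: "0 1" -> "12"
CANARY = 11                        # only used in the informative condition

def translate(tokens):
    """Reference translation: +10 per token, except for the special rule."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i:i + 2] == RULE_SRC:
            out += RULE_TGT
            i += 2
        else:
            out.append(tokens[i] + 10)  # the canary maps to 21 here (an assumption)
            i += 1
    return out

def make_pair(non_compositional, informative=False, max_len=6):
    if non_compositional:
        # Context tokens are drawn from 2..9 so the rule never fires by accident.
        n_ctx = random.randint(0, max_len - 2 - int(informative))
        ctx = [random.randint(2, SRC_MAX) for _ in range(n_ctx)]
        pos = random.randint(0, n_ctx)
        src = ctx[:pos] + ([CANARY] if informative else []) + RULE_SRC + ctx[pos:]
    else:
        src = [random.randint(2, SRC_MAX) for _ in range(random.randint(1, max_len))]
    return src, translate(src)

corpus = [make_pair(random.random() < 0.01) for _ in range(100_000)]  # ~1% rule sentences
```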
It is unlikely that any individual idioms occur in 10% of sentences in natural language. Due to the highly regular translation rules in this synthetic language, there may be a stronger bias toward translating compositionally in this experiment. However, we gain the intuition that idioms can be translated effectively if they appear frequently, and that clear context clues reduce data required. ### Evaluation of Commercial Systems Although synthetic experiments provide intuition on the difficulty of translating idioms, one might ask whether similar results hold in natural language. To answer this, we examine the performance of commercial systems on the test sets in (SS3). Namely, we examine Google Translate and DeepL on Finnish, French, and Japanese idiomatic, literal, and random sentences. Results are in Table 3. We observe drops in translation quality on idiomatic sentences in all languages, with lower automatic metrics overall. \begin{table} \begin{tabular}{l r r r r r r} \hline Language & Idiom matches & Idiomatic & Literal & Random (in) & Random (out) & Total \\ \hline fr & 85632 & 777 & 79 & 777 & 777 & 2410 \\ fi & 51811 & 449 & 81 & 449 & 449 & 1428 \\ ja & 23018 & 3253 & 389 & 3253 & 3253 & 10148 \\ \hline \end{tabular} \end{table} Table 2: Size of test sets for each language. The idiomatic and literal sentences contain strings matching known idioms (after lemmatization), and the in-domain random set contains unrelated sentences from OpenSubtitles, but the out-of-domain random set contains unrelated sentences from the Ted Talks corpus. Although it's impossible for us to determine what data these commercial systems were trained on, we examine the frequency of each idiom within OpenSubtitles as a proxy for its overall frequency in the training data, and bucket idioms into quintiles based on their occurrence frequency in source text. As idioms become more frequent, the quality of translations increases. An example of DeepL on the French idiom set is shown in Figure 2. Trends for other languages and systems are in Appendix H. This indicates that like in the synthetic experiments, there may be strong frequency effects on translation quality of idioms. Figure 1: Accuracy of a transformer in translating a non-compositional phrase after training on datasets of different sizes, with different numbers of non-compositional patterns (only non-compositional translation accuracy is depicted). Results are averaged across 5 seeds, and standard deviation is shown. ## 5 Methods to Improve Non-Compositional Translation We explore two methods to improve translation, loss weighting and kNN-MT. These two methods are relatively simple to use, where loss weighting
only requires a list of potentially idiomatic phrases in the source language, and kNN-MT only requires enough space on disk to save the datastores. \begin{table} \begin{tabular}{c c c c c} \hline \hline Language & System & BLEU & METEOR & BERTScore \\ \hline \multirow{3}{*}{fi-idioms} & DeepL & 0.1001 & 0.2497 & 0.8866 \\ & Google & 0.0923 & 0.2250 & 0.8726 \\ & \(\Delta\)LM-base & 0.1608 & 0.3592 & 0.9126 \\ \hline \multirow{3}{*}{fi-literal} & DeepL & 0.1488 & 0.3908 & 0.9146 \\ & Google & 0.1398 & 0.3577 & 0.9017 \\ & \(\Delta\)LM-base & 0.2093 & 0.5050 & 0.9350 \\ \hline \multirow{3}{*}{fi-random} & DeepL & 0.2052 & 0.4082 & 0.9103 \\ & Google & 0.2288 & 0.4357 & 0.9062 \\ & \(\Delta\)LM-base & 0.2365 & 0.4971 & 0.9145 \\ \hline \hline \multirow{3}{*}{fr-idioms} & DeepL & 0.1575 & 0.3278 & 0.9006 \\ & Google & 0.1261 & 0.2794 & 0.8808 \\ & \(\Delta\)LM-base & 0.2001 & 0.4393 & 0.9211 \\ \hline \multirow{3}{*}{fr-literal} & DeepL & 0.2219 & 0.4022 & 0.9122 \\ & Google & 0.2034 & 0.3830 & 0.9012 \\ & \(\Delta\)LM-base & 0.2778 & 0.5504 & 0.9377 \\ \hline \multirow{3}{*}{fr-random} & DeepL & 0.2854 & 0.4650 & 0.9125 \\ & Google & 0.3103 & 0.4922 & 0.9149 \\ & \(\Delta\)LM-base & 0.2778 & 0.5504 & 0.9377 \\ \hline \hline \multirow{3}{*}{ja-idioms} & DeepL & 0.1172 & 0.2735 & 0.8932 \\ & Google & 0.10672 & 0.1839 & 0.8644 \\ & \(\Delta\)LM-base & 0.09048 & 0.2998 & 0.9234 \\ \hline \multirow{3}{*}{ja-literal} & DeepL & 0.1517 & 0.3440 & 0.9059 \\ & Google & 0.0937 & 0.2565 & 0.8829 \\ & \(\Delta\)LM-base & 0.1416 & 0.4222 & 0.9222 \\ \hline \multirow{3}{*}{ja-random} & DeepL & 0.1074 & 0.2934 & 0.8878 \\ & Google & 0.1079 & 0.2834 & 0.8829 \\ & \(\Delta\)LM-base & 0.0948 & 0.3436 & 0.8946 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of commercial systems on idiomatic, literal, and random test sets. There is a clear degradation in performance on idiomatic sentences. Figure 2: Automatic metrics – Quality of DeepL French translations on idiomatic test set bucketed by idiom frequency. The bottom 20% of least common idioms are excluded, as they may occur fewer than 3 times and not be in our test set. More formally, we consider the basic case of autoregressive machine translation, with a set of parallel sentences in the source (\(X=\{x^{(i)}\}_{i=1}^{N}\)) and target (\(Y=\{y^{(i)}\}_{i=1}^{N}\)) language: \(\mathcal{D}=\{(x^{(1)},y^{(1)}),...,(x^{(N)},y^{(N)})\}\). The model \(p_{\theta}\) with parameters \(\theta\) is trained by minimizing the loss: \[\mathcal{L}(\theta,\mathcal{D})=\sum_{i=1}^{N}\ell(y^{(i)},p_{\theta}(x^{(i)})) \tag{2}\] **Upweighting** here refers to sentence-level upweighting, where there is a set of sentences \(A\) that we'd like to upweight with a weight coefficient \(\alpha\). In this case, \(A\) would be potentially idiomatic sentences. We keep all other parameters for training the same as in the base model. \[\mathcal{L}(\theta,\mathcal{D})=\sum_{i=1}^{N}\alpha^{\mathbb{1}[x^{(i)}\in A]}\,\ell(y^{(i)},p_{\theta}(x^{(i)})) \tag{3}\] **kNN-MT** augments a translation model with a retrieval component (Khandelwal et al., 2021). Given each sentence \((x,y)\), we construct a datastore with keys based on hidden representations constructed from the translation model, and values being the next word in the target sentence.
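Before turning to how the datastore is used at decoding time, the sentence-level upweighting of Eq. (3) can be sketched as below. This is a minimal PyTorch illustration, not the training code used for \(\Delta\)LM; the weight value, the padding handling, and the normalization are assumptions.

```python
import torch
import torch.nn.functional as F

def upweighted_loss(logits, targets, is_idiomatic, pad_id=0, alpha=2.0):
    """Eq. (3): multiply each sentence's loss by alpha if it was flagged as
    potentially idiomatic, and by 1 otherwise.

    logits: (B, T, V) decoder outputs; targets: (B, T) token ids;
    is_idiomatic: (B,) boolean mask from the idiom-matching step."""
    token_nll = F.cross_entropy(logits.transpose(1, 2), targets,
                                ignore_index=pad_id, reduction="none")  # (B, T)
    sent_nll = token_nll.sum(dim=1)                                     # (B,)
    weights = torch.where(is_idiomatic,
                          torch.full_like(sent_nll, alpha),
                          torch.ones_like(sent_nll))
    return (weights * sent_nll).sum() / weights.sum()
```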
During generation, a probability distribution over next words can be computed based on the retrieved next words and the distance of their keys to the current context. A parameter \(\lambda\) controls interpolation between the distribution over next words predicted by the base model, and the distribution predicted by the retrieved \(k\) neighbours.6 Footnote 6: We run a hyperparameter search using the validation set to find the best kNN-MT settings for each language. Further details are in Appendix C. \[p(y_{i}^{(j)}|x^{(j)},y_{1:i-1}^{(j)})=\lambda\,p_{\text{kNN}}(y_{i}^{(j)}|x^{(j)},y_{1:i-1}^{(j)})+(1-\lambda)\,p_{\theta}(y_{i}^{(j)}|x^{(j)},y_{1:i-1}^{(j)}) \tag{4}\] We also combine loss weighting with kNN-MT, where a model is trained with sentence upweighting and interpolated with a datastore based on representations from the upweight-trained model. Intuitively, these methods make sense to use for idiom translation - we have previously seen that one problem with non-compositional phrases may simply be their rarity. Upweighting training examples that contain idioms may help with under-representation. Furthermore, retrieving similar examples may find occurrences of the same idiom which were translated correctly. ## 6 Experimental Settings ### Experimental Settings We run experiments on \(\Delta\)LM-base, a transformer encoder-decoder model with 360M parameters, a larger version of which ranked first in the WMT21 multilingual translation task (Ma et al., 2021). We train one \(\Delta\)LM model for each language pair. Each model was trained for 2 million steps, and the checkpoint with the best loss on the validation set was kept. Further details are in Appendix C. To decode, we used beam search with a beam size of 5. ### Data Models were trained on OpenSubtitles for each language pair. Data from test sets were removed, and 10% of the remaining data was used as a validation set. There were 33.8M sentences in the fr-en train set, 22.0M in fi-en, and 1.6M in ja-en. ### Evaluation We use multiple automatic metrics to evaluate translation quality. However, due to the importance of accurate semantic evaluation, the authors (native English speakers and fluent in French and Japanese) conduct a human evaluation inspired by MQM (Lommel et al., 2014). Only errors that would fall under the "terminology" and "accuracy" error types are considered, as we are focused on severe semantic errors. We give a score of 0 for severe errors and a score of 0.5 for major errors. A score of 1 is given otherwise. Exact evaluation standards are in Appendix E. ## 7 Results ### Automatic and Human Evaluation In most cases, as reported in Figure 3, using a combination of sentence upweighting and kNN-MT led to the greatest increase in automatic metrics on all three test sets, of up to 3.08 BLEU points on the idiomatic test set (fr), 2.69 BLEU points on the literal test set (fi), and 5.75 points on the random test set (fr). In all cases except ja-rand, using one or more of these methods improved over the baseline. Exact numerical results are in Appendix J. We evaluate the statistical significance of the results through a one-tailed permutation test (Graham et al., 2014). Further details are in Appendix F. Exact results are in Appendix G. For Finnish, significance is achieved for all three test sets, and for French, significance is achieved for the idiomatic and random test sets. For Japanese, values achieved are not significant, but are borderline.
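As a concrete illustration of the interpolation in Eq. (4), the sketch below combines the base model's next-token distribution with a distribution formed from retrieved neighbours. The softmax-over-negative-distances weighting and the temperature are assumptions; a real kNN-MT setup would retrieve neighbours with an approximate nearest-neighbour index over decoder states.

```python
import torch

def knn_interpolate(p_model, knn_dists, knn_tokens, lam=0.5, temperature=10.0):
    """Eq. (4): p = lam * p_kNN + (1 - lam) * p_model for a single decoding step.

    p_model: (V,) base distribution; knn_dists: (k,) distances of retrieved keys;
    knn_tokens: (k,) target-token ids stored as the values of those keys."""
    weights = torch.softmax(-knn_dists / temperature, dim=0)   # closer keys weigh more
    p_knn = torch.zeros_like(p_model).scatter_add_(0, knn_tokens, weights)
    return lam * p_knn + (1.0 - lam) * p_model
```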
As our focus is on mitigating semantic errors, we mostly concentrate on the results of human evaluation, which are summarized in Table 4. Here, we also find that using both sentence upweighting and kNN is the best condition in most cases, increasing accuracy by roughly 13% in French and Finnish, and 4.5% in Japanese for idiomatic sentences. Encouragingly, this does not overly harm translation of literal sentences, as accuracy on the literal set either increases slightly (by roughly 4% in French and Finnish), or decreases very slightly (by roughly 0.4% in Japanese). For the random set, the combination of sentence upweighting and kNN-MT improves accuracy by around 7%. However, in Japanese, performance on the random test set decreases by 4%. In all cases except ja-rand, one or more of these methods improves over the baseline. We note that the Japanese model was trained on roughly 1/10th of the data of the French and Finnish models, so its translations are not as high-quality. This also leads to the construction of a much smaller datastore, which may lead to weaker performance on the random set. \begin{table} \begin{tabular}{l c c c c} \hline \hline & base & knn & upweight & upweight + knn \\ \hline fr-idioms & 0.6177 & 0.6659 & 0.7010 & **0.7463** \\ fr-literal & 0.7039 & 0.7303 & 0.7105 & **0.7434** \\ fr-rand-out & 0.7526 & **0.8398** & 0.7477 & 0.8232 \\ \hline fi-idioms & 0.4803 & 0.5562 & 0.5604 & **0.6194** \\ fi-literal & 0.7692 & **0.8462** & 0.8205 & 0.8141 \\ fi-rand-out & 0.7647 & 0.8235 & 0.7771 & **0.828** \\ \hline ja-idioms & 0.4152 & 0.4286 & **0.4643** & 0.4598 \\ ja-literal & 0.6475 & 0.6516 & **0.6557** & 0.6434 \\ ja-rand-out & **0.6207** & 0.5560 & 0.5776 & 0.5862 \\ \hline \hline \end{tabular} \end{table} Table 4: Human-judged accuracy on sentence-level semantics. Figure 3: Results of automatic metrics. In most cases, combining loss weighting with kNN-MT improves automatic metrics the most on all three test sets, including the out-of-distribution (Random) test set. ### Error Analysis We repeat the frequency analysis performed on commercial systems (SS4.2) for \(\Delta\)LM, and find that adding upweighting and kNN-MT generally improves translations at all frequency levels. These increases are not concentrated in low-frequency idioms, so more common idioms continue to be translated better.7 A representative example (for French) is in Figure 4. A complete set of plots is in Appendix I. We examine the rate of severe and major errors made in the base model and the upweight+knn model in Table 5. In French and Finnish, the rate of critical errors decreased greatly, particularly in the idiomatic and random test sets. This is true to a lesser extent in Japanese. Major errors also decreased to a lesser extent. The only test set where errors increase is again the ja-rand test set. We note that it's possible for the rate of major errors to be higher in the upweight+knn model because some severe errors transitioned to major errors. One question is why the error rate on out-of-distribution sentences drops for French and Finnish. In fi-rand, the severe error rate more than halves (\(0.1317\to 0.0603\)), and in fr-rand, it nearly halves (\(0.1624\to 0.09407\)). However, it is unclear why this should be the case. We examined sentences where the original translation was incorrect but the upweight+knn translation was correct, and found that they tended to contain named entities.
For instance, for the sentence "_La chirurgie à cœur ouvert au Nigeria, c'est un gros problème._ (Open heart surgery in Nigeria - big trouble.)", the base model incorrectly produced the translation "Open-heart surgery in Forbes, that's a big problem.", while the upweight+knn model translated correctly. In some cases, words with multiple possible translations (e.g. _spectre_: ghost, spectrum) became correctly translated. "_Mais regardez le nombre de lignes noires dans ce spectre._ (But look at the number of black lines in that spectrum.)" was originally translated incorrectly as "But look at the number of black lines in that ghost". ## 8 Related Work Recent work has raised the issue of idiom handling in MT (Baziotis et al., 2022; Dankers et al., 2022b, a). There is historical recognition of the problem, including of multi-word expressions (Sag et al., 2002; Calzolari et al., 2002; Zaninello and Birch, 2020). This has historically motivated example-based machine translation (Nagao, 1984). Similar motivations underlie the use of kNN-MT. However, neural models may already be capable of translating idiomatic phrases if they appear often enough in training data. Other works focus on data augmentation and creating new data resources (Ho et al., 2014; Fadaee et al., 2018; Agrawal et al., 2018; Haagsma et al., 2020). A related task is detection of conventionalized metaphors (Levin et al., 2014). Automatic identification of idiomatic phrases, as well as data augmentation, are promising avenues to improve performance in lower-resource languages. Instance weighting has been explored previously in the MT literature, but has been mostly explored in the context of domain adaptation, rather than being used to improve translations of rare or non-compositional phrases in the same domain (Foster et al., 2010; Wang et al., 2017). Idiomatic phrases are a prototypical case of phrases that need to be memorized (Haviv et al., 2022). Many also occur infrequently in training data, which may make it difficult for transformer-based models to translate them (Kandpal et al., 2022). This can be mitigated, as we have shown in this paper. However, more work is needed to effectively learn idioms and other infrequent linguistic elements with few repetitions. Figure 4: Automatic metrics for fr-idiom sentences, plotted by frequency, for base and upweight+knn. \begin{table} \begin{tabular}{l r r r} \hline \hline & System & Severe (\(\downarrow\)) & Major (\(\downarrow\)) \\ \hline fi-idioms & base & 0.4258 & 0.1648 \\ & upweight+knn & **0.3242** & **0.0962** \\ \hline fi-literal & base & 0.1728 & **0.1234** \\ & upweight+knn & **0.1234** & 0.1358 \\ \hline fi-random & base & 0.1317 & **0.2009** \\ & upweight+knn & **0.0603** & 0.2188 \\ \hline fr-idioms & base & 0.3042 & 0.1528 \\ & upweight+knn & **0.198** & **0.1092** \\ \hline fr-literal & base & 0.2326 & 0.1047 \\ & upweight+knn & **0.2209** & **0.04651** \\ \hline fr-random & base & 0.1624 & 0.1688 \\ & upweight+knn & **0.09407** & **0.1649** \\ \hline ja-idioms & base & 0.4643 & 0.2411 \\ & upweight+knn & **0.4464** & **0.1875** \\ \hline ja-literal & base & 0.2867 & **0.1311** \\ & upweight+knn & **0.2787** & 0.1557 \\ \hline ja-random & base & **0.2931** & **0.1724** \\ & upweight+knn & 0.3190 & 0.1897 \\ \hline \hline \end{tabular} \end{table} Table 5: Rate of major and severe errors in translations. ## 9 Conclusion We highlight the challenge idiomatic expressions pose to machine translation systems and provide simple solutions to improve performance.
Through synthetic experiments, we identify a threshold at which transformer-based models correctly default to idiomatic translations. We develop a dataset of sentences containing idiomatic expressions in French, Finnish, and Japanese, and introduce two techniques - upweighting training loss on potentially idiomatic sentences and augmenting models with kNN-MT - which enhance the idiomatic translation accuracy of a strong model, while offering potential benefits for non-idiomatic sentences. Future research could extend these techniques to additional languages, and explore their effectiveness in dealing with other long-tail phenomena. We hope that this work contributes toward increasing the intelligibility of translations containing idioms or set phrases. Ultimately, for machine translation to be useful for everyone without causing misunderstandings, "last mile" problems involving cultural knowledge, long-tail phenomena, and complex semantic evaluation should be taken into account. ## Acknowledgements Thank you to Perez Ogayo for helping with DeltaLM setup, all annotators who validated idiomatic/literal judgments, as well as to NeuLab members for providing feedback on parts of the draft. This project was funded by the P2020 program MAIA (LISBOA-01-0247-FEDER-045909). ## 10 Limitations Our research provides a first step toward capturing non-compositional expressions in machine translation. However, we do not conclusively solve the problem, as ideally a machine translation system should be able to learn any idiom or non-compositional phrase from a few examples. First, our experiments were conducted on a select group of languages (Finnish, French, and Japanese), which do not fully capture the variety and complexity of languages worldwide. Given the diversity of language structures and idiomatic expressions, the generality of our findings to languages with drastically different grammatical structures or idiom usage patterns remains uncertain. Next is our use of synthetic data. While synthetic data allowed us to control for certain variables, our setting is purposefully simplified, potentially limiting the ecological validity of our findings. Although our synthetic language was designed to mimic non-compositional translation issues, it may not encapsulate the full extent of such complexities in real-world languages. Namely, there is only one non-compositional pattern and the remaining translations are one-to-one mappings. Our research also depends on the quality and representativeness of the training and evaluation corpora. For instance, certain idioms may be over-represented or underrepresented, which could affect the translation performance. Lastly, our improvement methods, namely upweighting and kNN-MT, have inherent limitations. Upweighting could lead to overfitting on idiomatic expressions and may not be as effective when idioms occur infrequently in the data. On the other hand, kNN-MT might not yield significant improvements if the idiom or its correct translation rarely appears in the training data, limiting its utility in such scenarios. Future work could address these limitations by expanding the linguistic scope of the study, exploring more complex methods or architectures, or investigating to what extent similar techniques can be applied to related issues in semantic preservation during machine translation.
2305.03051
Controllable Visual-Tactile Synthesis
Deep generative models have various content creation applications such as graphic design, e-commerce, and virtual Try-on. However, current works mainly focus on synthesizing realistic visual outputs, often ignoring other sensory modalities, such as touch, which limits physical interaction with users. In this work, we leverage deep generative models to create a multi-sensory experience where users can touch and see the synthesized object when sliding their fingers on a haptic surface. The main challenges lie in the significant scale discrepancy between vision and touch sensing and the lack of explicit mapping from touch sensing data to a haptic rendering device. To bridge this gap, we collect high-resolution tactile data with a GelSight sensor and create a new visuotactile clothing dataset. We then develop a conditional generative model that synthesizes both visual and tactile outputs from a single sketch. We evaluate our method regarding image quality and tactile rendering accuracy. Finally, we introduce a pipeline to render high-quality visual and tactile outputs on an electroadhesion-based haptic device for an immersive experience, allowing for challenging materials and editable sketch inputs.
Ruihan Gao, Wenzhen Yuan, Jun-Yan Zhu
2023-05-04T17:59:51Z
http://arxiv.org/abs/2305.03051v1
# Controllable Visual-Tactile Synthesis ###### Abstract Deep generative models have various content creation applications such as graphic design, e-commerce, and virtual Try-on. However, current works mainly focus on synthesizing realistic visual outputs, often ignoring other sensory modalities, such as touch, which limits physical interaction with users. In this work, we leverage deep generative models to create a multi-sensory experience where users can touch and see the synthesized object when sliding their fingers on a haptic surface. The main challenges lie in the significant scale discrepancy between vision and touch sensing and the lack of explicit mapping from touch sensing data to a haptic rendering device. To bridge this gap, we collect high-resolution tactile data with a GelSight sensor and create a new visuotactile clothing dataset. We then develop a conditional generative model that synthesizes both visual and tactile outputs from a single sketch. We evaluate our method regarding image quality and tactile rendering accuracy. Finally, we introduce a pipeline to render high-quality visual and tactile outputs on an electroadhesion-based haptic device for an immersive experience, allowing for challenging materials and editable sketch inputs. ## 1 Introduction The past few years have witnessed significant progress in content creation powered by deep generative models [31, 60] and neural rendering techniques [46, 72]. Recent works can synthesize realistic images with various user controls, such as user sketches [29], text prompts [56], and semantic maps [52]. However, most works focus on synthesizing _visual_ outputs, ignoring other sensory outputs such as touch. In real life, humans use vision and touch to explore objects. When shopping for clothing, we look at them to perceive their shape and appearance and touch them to anticipate the experience of wearing them. A single touch can reveal the material's roughness, hardness, and local geometry. Multi-modal perceptual inputs enable humans to obtain a more comprehensive understanding of the target objects, enhancing user experiences, such as online shopping and quick prototyping. Moreover, it opens up new possibilities for content creation, such as touchable VR and movies. In this work, we aim to expand the capability of content creation. We introduce a new problem setting, _controllable visual-tactile synthesis_, for synthesizing high-resolution images and haptic feedback outputs from user inputs of a sketch or text. Our goal is to provide a more immersive experience for humans when exploring objects in a virtual environment. Visual-tactile synthesis is challenging for two reasons. First, existing generative models struggle to model visual and tactile outputs jointly due to the dramatic differences in perception scale: vision provides a global sense of our surroundings, while touch offers only a narrow scale of local details. Second, there do not exist data-driven end-to-end systems that can effectively render the captured tactile data on a haptic display, as existing haptic rendering systems heavily rely on manually-designed haptic patterns [5, 3, 34, 62]. To address the challenges, we introduce a haptic material modeling system based on surface texture and topography. We first collect the high-resolution surface geometry of target objects with a high-resolution tactile sensor Gell-Sight [85, 76] as our training data. 
To generate visual-tactile outputs that can render materials based on user inputs, we propose a new conditional adversarial learning method that can learn from multi-modal data at different scales. Different from previous works [29, 77], our model learns from dense supervision from visual images and sparse supervision from a set of sampled local tactile patches. During inference, we generate dense visual and tactile outputs from a new sketch design. We then render our models' visual and tactile output with a TanvasTouch haptic screen [15]. The TanvasTouch device displays the visual output on a regular visual screen and uses electroadhesion techniques [69] to render the force feedback of different textures according to a friction map. Humans can feel the textures as a changing friction force distribution when sliding their fingers on the screen [7]. We collect a spatially aligned visual-tactile dataset named TouchClothing that contains 20 pieces of clothing, including pants and shirts, with diverse materials and shapes. We evaluate our model regarding image quality and perceptual realism with both automatic metrics and user study. Experimental results show that our method can successfully integrate the global structure provided by the sketch and the local fine-grained texture determined by the cloth material, as shown in Figure 1. Furthermore, we demonstrate sketch- and text-based editing applications enabled by our system to generate new clothing designs for humans to _see_ and _feel_. Our code and data are available on our website [https://visual-tactile-synthesis.github.io/](https://visual-tactile-synthesis.github.io/). ## 2 Related Work Vision and touch.Multimodal perception and learning using vision and touch inputs have been shown effective for several computer vision and robotics applications, such as estimating material proprieties [87, 89, 86, 88], object grasping and manipulation [39, 11, 10, 80, 73, 90, 43], object recognition [41, 71], future frame prediction [82], and representation learning for downstream tasks [32, 36, 82]. While most existing works focus on improving recognition and learning systems, we aim to synthesize visual-tactile outputs for content creation and VR applications. Several recent works learn to predict tactile outputs given visual inputs [40, 9, 8, 12]. Rather than predicting one modality from the other, we aim to simultaneously synthesize outputs in both modalities from user sketches and text descriptions. Haptic rendering of textures.Haptic rendering refers to generating physical signals that simulate the feeling of touch and delivering it to humans, typically involving software for modeling and physical hardware for rendering. Rendering high-resolution material textures remains a challenge, despite extensive studies on the topic [6, 16]. One branch of works [59, 17, 18] used kinesthetic haptic devices to render single-point temporal signals. Users feel a vibrating force signal when holding a pen-like stylus and sliding on a plane surface. The lack of spatial resolution during the rendering limited the feeling of reality for haptic rendering. Prior works also proposed to render textures on electroadhesion-based devices [67, 83, 49, 4], but they are limited to rendering homogeneous textures or coarse object shapes. In contrast, we propose to use the TanvasTouch device [15] to render detailed local geometry and material texture of garment objects. 
This device creates a programmable spatially distributed friction force using electroadhesion, allowing users to feel the texture by sliding their fingers across the touch screen. Using the new device boosts the user's feeling of reality regarding the textures and local geometries. Deep generative models.Prior works [33, 22, 75, 19, 26, 70, 60] have enabled various content creation applications such as text-to-image synthesis [64, 56, 57, 84], virtual Try-on [24, 37, 1], and style transfer [93, 63, 42]. Most existing works focus on generating single-modal _visual_ output like images, videos [25], and 3D data [50]. Several unconditional GANs synthesize outputs in two domains, such as images and semantic labels [2, 92, 38, 74], or RGBD data [78, 48]. While the above works sample multimodal outputs from latent vectors, they are not controllable. In contrast, our method allows us to control multimodal synthesis according to the user inputs. Image-to-image translation.Various methods have adopted conditional generative models [22]to translate an image from one domain to another [29, 93, 28, 47, 63, 45, 14]. They are widely used in cross-modal prediction tasks such as sketch-to-photo [29, 65] and label-to-image [77, 52, 94]. In contrast, given user input, our model learns to synthesize outputs in two modalities at different spatial scales. Our method also differs from previous works as we learn to synthesize dense tactile outputs from only sparse supervision. ## 3 Data Acquisition and Hardware To develop our multimodal synthesis method, we construct a new spatially aligned visual-tactile dataset, TouchClothing, which consists of 20 pieces of garments as shown in Figure 2. They cover various fabrics commonly seen in the market, such as denim, corduroy, linen, fleece, and wool. This dataset could be useful for online shopping and fashion design applications. For each garment, we obtain a single 1,280 \(\times\) 960 visual image capturing the entire object and \(\sim\)200 tactile patches (32 \(\times\) 32 pixels) sparsely sampled from the object surface. We track the 3D coordinates of the sensor's contact area and project them on 2D visual images for spatial alignment. Finally, we extract the contour as the input sketch for each visual image. Please find our dataset on the website. Below we detail our collection process. Visual-tactile data collection setup.Figure 3 shows our setup to collect aligned visual-tactile data, where each garment object is fixed on a planar stage with tapes. We capture a top-down view with a Pi RGB Camera mounted on the top aluminum bar and record hundreds of tactile patches by manually pressing a GelSight sensor [85, 76] at different locations of the object in a grid pattern. Our setup enables us to capture diverse patches from each object, including the flat sewing pattern with homogeneous texture, local geometry changes such as pocket edges, and randomly distributed features like flower-shaped decoration. GelSight tactile sensor.The GelSight sensor [85, 76] is a vision-based tactile sensor that uses photometric stereo to measure contact geometry at a high spatial resolution of several tens of micrometers. In this paper, we use the GelSight R1.5, modified from Wang et al. [76]. It has a sensing area of 32mm \(\times\) 24mm (\(H\times W\)) and a pixel resolution of 320\(\times\)240, equivalent to 100 micrometers per pixel. The sensor outputs an RGB image, which can be converted to the surface gradients and used to reconstruct a 3D height map. 
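The gradient-to-height-map reconstruction mentioned above can be sketched with a standard Fourier-domain (Frankot–Chellappa-style) integration. This is a generic illustration of the idea, not the authors' calibration or integration code, and the sign and scale conventions depend on how the gradients are defined.

```python
import numpy as np

def height_from_gradients(gx, gy):
    """Integrate surface gradients (gx, gy) into a height map (up to a constant)."""
    h, w = gx.shape
    u = np.fft.fftfreq(w) * 2 * np.pi   # angular frequencies along x
    v = np.fft.fftfreq(h) * 2 * np.pi   # angular frequencies along y
    U, V = np.meshgrid(u, v)
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = U**2 + V**2
    denom[0, 0] = 1.0                   # avoid dividing by zero at the DC term
    Z = (-1j * U * Gx - 1j * V * Gy) / denom
    Z[0, 0] = 0.0                       # absolute height is undetermined
    return np.real(np.fft.ifft2(Z))
```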
Visual-tactile correspondence.To calculate the relative position of the GelSight sensor with respect to the camera, we attach four Aruco markers to the GelSight and run RANSAC [20] to track its 3D pose. This allows us to project the 3D coordinate of the contact area onto the 2D visual image and to determine the bounding box coordinates of each tactile patch. Example data are shown in Figure 4. Tactile data pre-processing and contact mask.Each tactile output represents a single touch of the GelSight sensor on the garment, where only a small portion of the sensing area is in contact. We observed noticeable artifacts when training the model with raw data. Instead, we mask out the non-contact region and improve the model using only the contact area. Specifically, we downsample the tactile output from 320\(\times\)240 to 104\(\times\)78 (about 300 micrometers per pixel) to match the image resolution and then create a contact mask for each tactile patch by thresholding the height map. We heuristically determine the threshold to be the 75th percentile of the height map values and apply dilation to avoid false negative detections. We sample 32\(\times\)32 patches based on the contact mask as the final tactile data. We capture roughly 200 patches per clothing, covering \(1/6\) of the image area. Sketch image.We follow the procedure described in pix2pix [29] to obtain sketches from visual images. We first extract coarse contours using the DexiNed network [54] and then manually remove small edges to obtain thin contours. TanvasTouch for haptic rendering.TanvasTouch [15] is a haptic screen that renders a distributed friction map for finger contact. It models the air gap between the screen surface and the human finger as a capacitor. When a human finger slides across it, the varying voltage underneath the screen induces a small current in the finger, which is perceived as a changing Figure 3: **Visual-tactile data acquisition setup.** (a) Our setup includes a PiCamera RGB camera, a GelSight R1.5 high-resolution tactile sensor, and Aruco markers to track the relative pose of the sensor. (b) We show the captured tactile data, including the raw sensor output, the derived surface gradients in x and y directions \(g_{x}\) and \(g_{y}\), and the computed contact mask. (c) We locate the bounding box in the visual image corresponding to the tactile data. Figure 2: **Objects in the TouchClothing dataset.** Our dataset consists of 20 pieces of clothes with different shapes (shirts, jackets, shorts, pants, etc.) and various fabrics (denim, corduroy, linen, fleece, etc.). Please zoom in to see more details. friction force. The device takes a grayscale friction map as input to modulate the voltage distribution across the screen. The screen displays visual images and renders haptic signals simultaneously, creating a coupled visual-haptic output. ## 4 Method Visual-tactile synthesis is challenging due to the large discrepancy between the receptive field of vision and touch. While a camera captures global features of an object, such as color and shape, a touch sensor captures local information within a small patch, such as edges and material texture. Existing conditional generative models are not directly applicable as they assume all inputs to be relatively the same scale. To address this challenge, we propose a new multi-modal conditional GAN that learns from global visual supervision and sparse local tactile supervision. 
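Returning to the tactile pre-processing described above, the contact-mask step (thresholding the height map at its 75th percentile and dilating the result) can be sketched as follows; the dilation amount and the mask polarity here are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def contact_mask(height_map, percentile=75, dilation_iters=2):
    """Mark pixels whose height exceeds the given percentile as in contact,
    then dilate the mask to reduce false negatives at the contact boundary."""
    threshold = np.percentile(height_map, percentile)
    mask = height_map > threshold
    return binary_dilation(mask, iterations=dilation_iters)
```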
As shown in Figure 5, our model synthesizes spatially aligned visual-tactile output given a single sketch. We formulate the task in Section 4.1 and introduce our learning objective in Section 4.2. We describe the network design in Section 4.3 and discuss how to render the visual and tactile outputs on the TanvasTouch haptic device in Section 4.4. ### Visual-Tactile Synthesis We train one model for each object and formulate the visual-tactile synthesis task as a conditional form of single-image generative modeling [51, 68, 66], which has demonstrated flexible editing ability even though the model is trained on a single image. Specifically, given a single sketch \(x\) of size \(H\times W\), where \(H\) and \(W\) are the image height and width, we aim to learn a function that maps the input sketch \(x\) to two spatially aligned outputs, an RGB visual image \(y_{I}\) and a tactile output \(y_{T}\). The sketch \(x\) is a contour map that outlines the object and captures its coarse-scale edges and patterns. For example, in Figure 4 (a), the sketch of a pair of shorts illustrates the overall shape of the shorts, the location of pockets and waistbands, and local decorative patterns. In practice, we follow Isola et al. [29] to extract a sketch using DexiNed [54] and edge thinning. Figure 4 shows examples of the sketch, visual, and tactile images for a pair of shorts and a sweater. The visual image \(y_{I}\) is an RGB image captured by the camera. The tactile output \(y_{T}=(g_{x},g_{y})\) is a 2-channel image representing the gradients of the surface in the \(x\) and \(y\) directions. They can be converted into the surface normal \(\mathbf{n}\) using Eqn. 1 and then converted into a height map by Poisson integration [85]. Since the tactile output is obtained from a calibration network mapping GelSight raw output (RGBXY) to surface gradients \((g_{x},g_{y})\) [76], it is more robust to local noise and position shift in sensor coordinates. It is also less sensitive to integration errors that the height map may suffer after Poisson integration. Therefore, our conditional GAN uses \((g_{x},g_{y})\) as the tactile output format. \[\mathbf{n}=\frac{(g_{x},g_{y},-1)}{\sqrt{g_{x}^{2}+g_{y}^{2}+1}},\quad g_{x}=\frac{n_{x}}{n_{z}},\quad g_{y}=\frac{n_{y}}{n_{z}}. \tag{1}\] The generated visual and tactile outputs can be used for applications such as fashion design and haptic rendering. In this work, we render a garment on the TanvasTouch screen, allowing people to simultaneously _see_ and _feel_ it. ### Learning Objective We have two main challenges in this learning task. First, we must learn from dense vision images and sparse tactile supervision while accounting for scale differences. Second, we have limited training data, as we need to learn a synthesis network on a single high-resolution example. To address these challenges, we introduce the following learning objective. Visual synthesis loss. To synthesize a realistic visual image \(y_{I}\) conditional on a user sketch \(x\), we optimize the visual generator \(G_{I}\) and visual discriminator \(D_{I}\) to match the conditional distribution of real sketch-image pairs. We optimize the following minimax objective [29, 47]: \[\begin{split} V(G_{I},D_{I},x,y_{I})&=\mathbb{E}_{x,y_{I}}[\log D_{I}(x,y_{I})]\\ &+\mathbb{E}_{x}[\log(1-D_{I}(x,G_{I}(x)))].\end{split} \tag{2}\] Unfortunately, the above adversarial loss introduces training instability due to our single-image training setting.
To accommodate the limited dataset size, we use a vision-aided discriminator \(D_{\text{clip}}\)[35] that consists of a frozen CLIP feature extractor [55] and a small trainable MLP head. The vision-aided loss can reduce overfitting issues for small-scale datasets and synthesize visual images that better match Figure 4: **Data examples from our TouchClothing dataset. For each object, we show the input sketch, the visual image, and two tactile patches. For each tactile patch, we show their corresponding sketch crop, visual crop, and the captured tactile data, including surface gradients in the x and y directions (\(g_{x}\), \(g_{y}\)) and surface normal maps. The color-coded bounding boxes in the sketch mark the position of each tactile patch and instantiate the significant scale difference between the visual and tactile data, which makes our conditional synthesis task difficult.** human perception. Our adversarial loss includes: \[\mathcal{L}_{\text{cGAN}}=V(G_{I},D_{I},x,y_{I})+V(G_{I},D_{\text{clip}},x,y_{I}). \tag{3}\] To further stabilize GANs training, we incorporate a reconstruction-based loss. Here we use a combination of pixel-wise L1 distance and CNN feature-based perceptual loss (LPIPS) [91], as they encourage sharper images [29] and higher perceptual similarity to the ground truth. \[\begin{split}\mathcal{L}_{\text{rec}}(G_{I},x,y_{I})& =\mathbb{E}_{x,y_{I}}[\mathcal{L}_{\text{LPIPS}}(y_{I},G_{I}(x))]\\ &+\lambda_{1}\mathbb{E}_{x,y_{I}}[\|y_{I}-G_{I}(x)\|_{1}],\end{split} \tag{4}\] where \(\lambda_{1}\) balances the perceptual loss and L1 loss. The final objective function for visual output can be written as follows: \[\mathcal{L}_{I}=\mathcal{L}_{\text{cGAN}}+\mathcal{L}_{\text{rec}}. \tag{5}\] #### 3.2.2 Tactile synthesis loss. Unfortunately, we cannot simply use the above loss function to synthesize tactile output, as we no longer have access to the full-size tactile ground truth data. Additionally, the vision-aided loss does not apply to tactile data and small patches, as the vision-aided discriminator \(D_{\text{clip}}\) is pretrained on large-scale natural image collections. Instead, we learn a full-size tactile generator \(G_{T}\) with supervision from hundreds of tactile patches. Here we denote corresponding (sketch, image, tactile) patches as \((x^{p},y_{I}^{p},y_{T}^{p})\) at sampled location \(p\). While the generator \(G_{T}\) synthesizes the full-size tactile output at once, our patch-level discriminator \(D_{T}\) learns to classify whether each patch pair is real or fake, with the following objective: \[\begin{split}& V(G_{T},D_{T},x,y_{I},y_{T})=\mathbb{E}_{x,y_{I},y_ {T},p}[\log D_{T}(x^{p},y_{I}^{p},y_{T}^{p})]\\ &+\mathbb{E}_{x,p}[\log(1-D_{T}(x^{p},G_{I}^{p}(x),G_{T}^{p}(x)) )],\end{split} \tag{6}\] where \(G_{I}^{p}(x)\) and \(G_{T}^{p}(x)\) denote cropped patches of synthesized visual and tactile outputs. To reduce training memory and complexity, we do not backpropagate the gradients to \(G_{I}\). Besides the standard non-saturating GAN objective, we use the feature matching objective [77] based on the discriminator's features as the discriminator adapts to the tactile domain better, compared to a pre-trained CLIP model. In addition, we also add a patch-level reconstruction loss. Our final loss for the tactile synthesis branch can be written as follows: \[\mathcal{L}_{T}=\lambda_{\text{GAN}}V(G_{T},D_{T},x,y_{I},y_{T})+\lambda_{ \text{rec}}\mathcal{L}_{\text{rec}}(G_{T},x^{p},y_{T}^{p}). \tag{7}\] #### 3.2.3 Patch sampling. We sample two types of patches. 
We sample patches with paired ground truth tactile data, for which we can use both reconstruction loss and adversarial loss. However, we only have 200 patches for training. To further increase training patches, we also randomly sample patches without paired ground truth. We only try to minimize the second term \(\log(1-D_{T}(x^{p},G_{I}^{p}(x),G_{T}^{p}(x)))\) of the tactile adversarial loss (Eqn. 6) as it is only dependent on synthesized patches. #### 3.2.4 Full objective. Our final objective function is \[G_{I}^{*},G_{T}^{*}=\arg\min_{G_{I},G_{T}}\max_{D_{I},D_{T},D_{\text{clip}}} (\mathcal{L}_{I}+\mathcal{L}_{T}). \tag{8}\] Figure 5: **Overview.**_Generators_: Given a user sketch, its foreground mask, and positional encoding of the pixel coordinates, we feed them into a two-branch generator. The two branches share the encoder and the first four layers of the decoders and then split to synthesize visual and tactile results, respectively. _Discriminators_: We feed the entire visual image to our visual discriminator \(D_{I}\) and patches to our tactile discriminator \(D_{T}\). \(D_{I}\) is conditional on the sketch, and \(D_{T}\) is conditional on both sketch crops and visual crops. The weights are chosen using a grid search so that the losses have a comparable scale, and the final values are \(\lambda_{1}=100\), \(\lambda_{\text{GAN}}=5\), \(\lambda_{\text{rec}}=10\). The grid search is done only once for a randomly selected object, and the same parameters are used for all objects in the dataset. In Section 5, we carefully evaluate the role of the adversarial loss and image reconstruction loss regarding the performance of our final model. ### Training details Below we describe our generator and discriminator's network architectures and other training details. **Network architectures.** We use a U-Net [61] as the backbone of our generator, which splits into two branches, \(G_{I}\) and \(G_{T}\), from an intermediate layer of the decoder. This way, the visual and tactile outputs share the same encoding for global structure while maintaining modality-specific details at each pixel location. For discriminators, we use multi-scale PatchGAN [29, 77] for both visual discriminator \(D_{I}\) and tactile discriminator \(D_{T}\), since multi-scale PatchGAN has been shown to improve the fine details of results. **Positional encoding and object masks.** Since sketches often contain large homogeneous texture areas, we use Sinusoidal Positional Encoding (SPE) [81] to encode the pixel coordinates and concatenate the positional encoding and the sketch at the network input. We also extract the object mask and use it to remove the background from the input and output. Thus the final input to the network is a masked version of the concatenated sketch and positional encoding features. Please refer to our Appendix A for more training details. ### Haptic rendering After synthesizing the visual and tactile output, we render them on the TanvasTouch haptic screen using the following rendering pipeline so that users can _see_ and _feel_ the object simultaneously. Specifically, we display the visual image directly on the screen and convert the two-channel tactile output \((g_{x},g_{y})\) into a grayscale friction map required by TanvasTouch. As shown by Manuel et al. [44] and Fiesen et al. [21], humans are sensitive to contours and high-frequency intensity change for surface haptic interpretation. 
Inspired by this, we first compute the squared magnitude of the gradient \(z=g_{x}^{2}+g_{y}^{2}\), \(z\in[0,1]\), then apply non-linear mapping function \(z^{\prime}=\log_{10}(9\times z+1)\), \(z^{\prime}\in[0,1]\) for contrast enhancement, and finally resize it to the TanvasTouch screen size as the final friction map. We empirically find this helpful to enhance textures' feeling with electroadhesive force. ## 5 Experiment Below we present our main results. Please check out our website for data capture and user interaction videos. **Evaluation metrics.** We evaluate our method on the similarity between the synthesized output and the real data of the TouchClothing dataset. For both visual and tactile output, we report the LPIPS metric [91] for perceptual realism as prior works [91, 30] have shown that the LPIPS metric better matches human perception, compared to PSNR and SSIM [79]. We also use Single Image Frechet Inception Distance (SIFID) [66] for texture similarity, as extensively used in prior works [66, 53]. Since the dataset only contains one visual image per object, we evaluate LPIPS on seen sketches for visual reconstruction and SIFID on unseen sketches for texture consistency in generalization. In addition to automatic metrics, we perform a human preference study. **Baselines.** To our knowledge, this paper is the first to study visual-tactile synthesis conditioned on a sketch input. Thus we consider image-to-image translation as a similar task and compare our method with several conditional GANs, including pix2pix [29], pix2pixHD [77] and GauGAN [52]. Pix2pix is one of the most commonly used image translation networks, pix2pixHD uses a coarse-to-fine generator and a multi-scale discriminator to handle high-resolution image synthesis, and GauGAN adopts spatially-adaptive de-normalization layers. Both pix2pixHD and GauGAN are trained using a perceptual loss, a conditional GAN loss, and a GAN-based feature matching loss. For baselines, we add two channels for tactile output \(g_{x}\) and \(g_{y}\), increasing the number of output channels from 3 to 5. The visual and tactile outputs are fed into two discriminators, both conditioned on the sketch input. Since only patch data are available as tactile ground truth, we crop the corresponding region of the sketch and visual images into patches and train the network using sketch-visual-tactile patch pairs. We perform the same amount of augmentation as our method. We follow the default parameters in the original works. During inference, we feed in the entire sketch image to obtain the full-scale visual and tactile outputs, as the fully convolutional network generalizes to inputs of different sizes. **Quantitative comparisons.** As shown in Table 1, our method outperforms all baselines by a large margin in all metrics. Our method reduces visual LPIPS by more than 50% and tactile LPIPS by about 30%. Our results depict more realistic and faithful textures, as demonstrated by \(5\times\) and \(2\times\) lower SIFID for visual and tactile output, respectively. This shows the advantage of our method for both visual and tactile synthesis. We notice that pix2pix works better than pix2pixHD and GauGAN regarding most metrics. This may be because all baselines require paired datasets, and in our case, paired data are low-resolution (32\(\times\)32), which does not fit the application of pix2pixHD and GauGAN. **Qualitative results.** Figure 6 provides an example of qualitative comparisons with baselines. 
For each method, the first row shows the full-scale visual output; the second row shows the reconstructed 3D height map; the third row shows some sampled patches in visual, grayscale \(g_{x}\), \(g_{y}\), and derived surface normal formats. Our method can successfully capture the prominent geometric features, such as pockets and flower-pattern decorations, and the local geometry details of the material textures. In contrast, baselines can only capture some prominent geometric features but miss local texture details and generate color artifacts. **Generation using unseen sketch images.** Our visual-tactile synthesis model trained on a single sketch image can be generalized to new sketch inputs, allowing users to edit and customize their sketches for fast design and prototyping. Since we train one model per object, we show the testing results using sketches of unseen objects in Figure 7. Each row corresponds to one testing sketch, and each column represents a model trained on one object. We visualize results by showing the visual image on the left and the normal map on the right. The visual and tactile outputs are well aligned and maintain fine-scale material texture details for each model. Our method can adapt to the global geometry information, including the edges and pockets of new sketch inputs. **Text-contioned visual-tactile synthesis.** We also extend our method to synthesizing visual-tactile outputs given both sketches and text prompts. We use DALL-E2 [56] to create variations of an original sketch and then feed the edited sketches to our conditional generative models. Figure 8 shows examples of text-based synthesis with text prompts. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{Visual} & \multicolumn{2}{c}{Tactile} \\ \cline{2-5} & LPIPS\(\downarrow\) & SIFTD\(\downarrow\) & LPIPS\(\downarrow\) & SIFTD\(\downarrow\) \\ \hline Ours & **0.070** & **0.029** & **0.676** & **0.104** \\ Pix2pix [29] & 0.173 & 0.115 & 1.028 & 0.247 \\ Pix2pixHD [77] & 0.161 & 0.289 & 0.753 & 0.458 \\ GauGAN [52] & 0.189 & 0.252 & 1.034 & 0.286 \\ \hline \hline \end{tabular} \end{table} Table 1: **Baseline comparisons. Our method outperforms all baselines regarding both perceptual realism measured by LPIPS [91] and texture consistency measured by SIFID [66].** \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{Visual} & \multicolumn{2}{c}{Tactile} \\ \cline{2-5} & LPIPS\(\downarrow\) & SIFTD\(\downarrow\) & LPIPS\(\downarrow\) & SIFTD\(\downarrow\) \\ \hline Ours & **0.070** & **0.029** & **0.676** & **0.104** \\ Ours w/o \(\mathcal{L}_{\mathrm{cGAN}}\) & 0.113 & 0.115 & 0.687 & 0.107 \\ Ours w/o \(\mathcal{L}_{\mathrm{rec}}\) & 0.084 & 0.079 & 1.035 & 0.260 \\ \hline \hline \end{tabular} \end{table} Table 2: **Ablation study of loss components. We compare our full method with two variants: Ours w/o \(\mathcal{L}_{\mathrm{cGAN}}\) (w/o conditional GAN losses) and \(\mathcal{L}_{\mathrm{rec}}\) (w/o image reconstruction loss). Our full method outperforms these variants regarding multiple metrics.** Figure 6: **Qualitative comparisons with baselines. We compare our method with the pix2pix [29], pix2pixHD [77], and GauGAN [52]. For each method, we show the visual output (top) and the rendering of their height maps (middle). 
We also present two zoom-in patches in color-coded bounding boxes (bottom), paired with the visual crop, tactile surface gradients, and the normal map.** Figure 7: **Sketch and material swapping. Our model can synthesize vision and touch images for both known and unseen sketches. For each output, we show visual output at the left half and a normal map of tactile output at the right half.** Even when trained on a single sketch, our model can reasonably generalize to unseen sketches with varying strokes and shapes while capturing the visual and tactile features of the original material. **Ablation studies.** We run ablation studies on each loss component to inspect their effects on the training objective. Table 2 shows that removing either adversarial loss or reconstruction loss for both visual and tactile synthesis together increases LPIPS errors and SIFID metric. Qualitatively, we observe overly smooth images after removing adversarial loss and checkerboard artifacts after removing reconstruction loss. Please see our Appendix B for more visual results. **Human Perceptual Study for Visual Images.** We perform a human perceptual study using Amazon Mechanical Turk (AMTurk). We do a paired test with the question - "Which image do you think is more realistic?". Each user has five practice rounds followed by 30 test rounds to evaluate our method against pix2pix, pix2pixHD, GauGAN, Ours w/o \(\mathcal{L}_{\text{cGAN}}\), and Ours w/o \(\mathcal{L}_{\text{rec}}\). All samples are randomly selected and permuted, and we collect 1,500 responses. As shown in Figure 8(a), our method is preferred over all baselines, even compared to Ours w/o \(\mathcal{L}_{\text{cGAN}}\) and Ours w/o \(\mathcal{L}_{\text{rec}}\), which shows the importance of each term. **Human Perceptual Study for Haptic Rendering.** We also perform a human perceptual study to evaluate the perceived fidelity of the generated haptic output, following conventions in prior works [23, 13]. We render two different haptic outputs on the TanvasTouch screen side by side with the same ground-truth visuals and ask participants "Which side do you feel better matches the real object material?". Twenty people, 13 males and 7 females with an average of 24.1 years (SD: 2.1), participated in the experiments. Figure 10 shows an example setup, and more details can be found in our Appendix A. As shown in Figure 8(b), participants strongly favor our method over all other baselines (chance is \(50\%\)). \(76.7\%\) of the participants prefer our method to pix2pixHD; compared with pix2pix and GauGAN, our method has a larger advantage, winning \(79.6\%\) and \(84.2\%\) of the participants, respectively. It is harder for users to distinguish the ablated models, but our method still beats Ours w/o \(\mathcal{L}_{\text{cGAN}}\) and Ours w/o \(\mathcal{L}_{\text{rec}}\), by \(52.1\%\) and \(64.3\%\) respectively. The user study results are consistent with the quantitative evaluation using various metrics shown in Table 1. ## 6 Discussion and Limitations In this work, we presented a new method for automatically synthesizing visual and tactile images according to user inputs such as sketch and text. We used a high-resolution tactile sensor GelSight to capture the high-fidelity local geometry of objects. We then proposed a new conditional GAN model to generate visual and tactile output given a single Figure 8: **Text-based visual-tactile synthesis.** We use DALL-E2 [56], a text-to-image model, to modify the original sketch designs via text-based image inpainting. 
We then synthesize both visual (left) and tactile (right) outputs using edited sketches. The text prompts are included below each image. Figure 10: **Experiment setup for the user study.** We perform an A/B test comparing the haptic output of our method and one of the baselines. The lower left corner shows the rendered haptic signal (friction map). The real garment is put on one side for reference. Figure 9: **Human perceptual study.** For each paired comparison, our method is preferred (\(\geq 50\%\)) over the baseline for both visual and haptic output. sketch image. Finally, we introduced a pipeline to render visual and tactile outputs on the TanvasTouch touchscreen. Our visual-tactile synthesis method can be used for different materials and objects, providing users with a more immersive experience when exploring virtual objects. **Limitations.** First, as shown in Figure 11, distinctive patterns, such as enclosed letters, remain challenging. Our model fails to generalize to other user sketches. Second, as touch is an active perception, rendering performance relies on specific hardware constraints. In this work, the surface haptic device excels at rendering clothing, which is primarily flat with fine textures. Nevertheless, it is challenging to render 3D objects with substantial surface normal changes, such as an apple, on the same device. Finally, since tactile data are collected during static touch and the rendering device mainly focuses on friction force, we can render roughness well but have limited capacity to render softness. **Societal impacts.** Controllable visual-tactile synthesis for haptic rendering is a new research problem that has yet to be explored extensively. We take the first step to address the modeling challenge and deploy our model to the latest hardware. Ultimately, we hope our work will facilitate multi-modal synthesis with generative models in applications such as online shopping, virtual reality, telepresence, and teleoperation. **Acknowledgment.** We thank Sheng-Yu Wang, Kangle Deng, Muyang Li, Aniruddha Mahapatra, and Daohan Lu for proofreading the draft. We are also grateful to Sheng-Yu Wang, Nupur Kumari, Gaurav Parmar, George Cazenavette, and Arpit Agrawal for their helpful comments and discussion. Additionally, we thank Yichen Li, Xiaofeng Guo, and Fujun Ruan for their help with the hardware setup. Ruihan Gao is supported by A*STAR National Science Scholarship (Ph.D.).
2302.07290
A Flexible Multi-Metric Bayesian Framework for Decision-Making in Phase II Multi-Arm Multi-Stage Studies
We propose a multi-metric flexible Bayesian framework to support efficient interim decision-making in multi-arm multi-stage phase II clinical trials. Multi-arm multi-stage phase II studies increase the efficiency of drug development, but early decisions regarding the futility or desirability of a given arm carry considerable risk since sample sizes are often low and follow-up periods may be short. Further, since intermediate outcomes based on biomarkers of treatment response are rarely perfect surrogates for the primary outcome and different trial stakeholders may have different levels of risk tolerance, a single hypothesis test is insufficient for comprehensively summarizing the state of the collected evidence. We present a Bayesian framework comprised of multiple metrics based on point estimates, uncertainty, and evidence towards desired thresholds (a Target Product Profile) for 1) ranking of arms and 2) comparison of each arm against an internal control. Using a large public-private partnership targeting novel TB arms as a motivating example, we find via simulation study that our multi-metric framework provides sufficient confidence for decision-making with sample sizes as low as 30 patients per arm, even when intermediate outcomes have only moderate correlation with the primary outcome. Our reframing of trial design and the decision-making procedure has been well-received by research partners and is a practical approach to more efficient assessment of novel therapeutics.
Suzanne M. Dufault, Angela M. Crook, Katie Rolfe, Patrick P. J. Phillips
2023-02-14T19:05:25Z
http://arxiv.org/abs/2302.07290v2
A Flexible Multi-Metric Bayesian Framework for Decision-Making in Phase II Multi-Arm Multi-Stage Studies ###### Abstract We propose a multi-metric flexible Bayesian framework to support efficient interim decision-making in multi-arm multi-stage phase II clinical trials. Multi-arm multi-stage phase II studies increase the efficiency of drug development, but early decisions regarding the futility or desirability of a given arm carry considerable risk since sample sizes are often low and follow-up periods may be short. Further, since intermediate outcomes based on biomarkers of treatment response are rarely perfect surrogates for the primary outcome and different trial stakeholders may have different levels of risk tolerance, a single hypothesis test is insufficient for comprehensively summarizing the state of the collected evidence. We present a Bayesian framework comprised of multiple metrics based on point estimates, uncertainty, and evidence towards desired thresholds (a Target Product Profile, TPP) for 1) ranking of arms and 2) comparison of each arm against an internal control. Using a large public-private partnership targeting novel TB arms as a motivating example, we find via simulation study that our multi-metric framework provides sufficient confidence for decision-making with sample sizes as low as 30 patients per arm, even when intermediate outcomes have only moderate correlation with the primary outcome. Our reframing of trial design and the decision-making procedure has been well-received by research partners and is a practical approach to more efficient assessment of novel therapeutics. Bayesian methods tuberculosis phase II time to positivity interim analysis multi-arm multi-stage ## 1 Introduction Decision-making in phase II clinical trials carries risk and is far from straightforward. While only 18% of phase II studies establish sufficient evidence to advance a drug into phase III, it seems the wrong drug is often advanced resulting in a failure rate of 50% of phase III studies [1]. Current approaches are inefficient at differentiating good from poor regimens under phase II settings. Sample sizes tend to be considerably smaller in phase II trials than in phase III. Further, adaptive phase II trials tend to rely on intermediate outcomes for decision-making at interim analyses. While in some disease areas, phase II outcomes are the same as those in phase III [2], it is common that alternative endpoints are used which may not have perfect correspondence with the primary outcome of interest. In addition to the complications of phase II designs, the typical estimands for decision-making are often suboptimal. Standard approaches in multiarm studies include selecting the \(k\) best performing arm(s) or more broadly advancing any arms "close" to the best performing arm [1]. A recent extension of Network Meta-Analysis highlighted the pitfalls of basing selection on ranking alone and authors provided recommendations for best practices that "[consider] not only the magnitude of relative effects but also their uncertainty and overlap of their confidence/credible intervals " [3]. An additional factor for regimen selection in phase II studies is ensuring sufficient evidence has been collected to have confidence that the regimen credibly meets a target product profile (TPP) with respect to safety, efficacy, and general desirability. 
Frequentist approaches, such as significance testing and group sequential methods, can advance regimens where there is little to no potential to meet the TPP [4, 5, 6]. Bayesian frameworks, using a single or a multi-level framework, [5, 6] have recently been proposed to more directly address the critical question: "How likely is it that the TPP is [fulfilled] based on my observed data?" [6] The aim of this paper is to present a Bayesian-supported decision framework which we have developed in the context of a phase II trial with an intermediate endpoint that is not a perfect surrogate and with limited outcome data. We propose a multi-metric approach for 1) ranking of arms and 2) comparison of each arm against a control, using a two-level target product profile. We demonstrate via simulations the potential for de-risking decision-making at interim analyses under a flexible decision framework comprised of metrics incorporating point estimates, estimate variability, and evidence towards desired performance thresholds (i.e., a target product profile). ## 2 Methods ### Motivating Example This decision-making framework is motivated by UNITE4TB, a global public-private partnership with the objective of identifying, in phase IIb trials, new combinations of novel and existing compounds that perform better than the six-month standard of care, HRZE, for the treatment of tuberculosis (TB) when given for four months, thereby supporting evaluation of even shorter durations in a phase IIc trial [7]. The primary clinical outcome in the UNITE4TB-01 trial is assessed based on the number of unfavorable outcomes (treatment failure, relapse, or re-treatment) occurring within 52 weeks of follow-up. In addition, weekly sputum samples will be collected for twelve weeks post-randomization to monitor the change in time-to-positivity (TTP), defined as "the time [from inoculation in culture media] it takes for a given sputum sample to yield a positive mycobacteria growth indicator tube culture" [8]. This biomarker, while by no means a validated surrogate endpoint, is available much sooner than the primary endpoint, reflects the potency of the regimen in killing off drug-susceptible TB bacterium [9], and is associated with the primary clinical endpoint such that a more potent regimen (one with a steeper change in TTP) is expected to have a lower rate of unfavorable outcomes than a less potent regimen [8, 10]. ### Proposed Metrics Our proposed framework combines the Bayesian multi-level target product profile framework proposed by Pulkstenis, Patra, and Zhang [6] with Bayesian approaches for capturing uncertainty in the ranking of arms in a multi-arm study. As clinical trials continue to improve efficiency by including simultaneous evaluation of multiple novel interventions [11], decision-making on the basis of performance alone will not provide information on the prioritization of arms when there are several promising performers. Arm prioritization and ranking will be a key second target. As such, we do not consider a single metric as adequate to tackle both decision-making components: performance and ranking. Instead, we frame the decision-making around three motivating questions, each targeting a necessary element of the decision-making process. _Motivating Question 1 (Arm de-prioritization)_. Can we identify and deprioritize sub-optimal arms early? 
Arms will first be flagged for deprioritization based on whether the number of observed unfavorable outcomes exceeds a set threshold, \(p\), although these are likely to be few. This can be thought of as an early screening for removal of arms with larger than acceptable anticipated unfavorable event rates. The remaining metrics rely on the intermediate outcome, TTP, as all patients will have TTP data by the time of the interim analysis. _Motivating Question 2 (Arm performance)_. Can we identify and advance desirable arms early? Arms will be assessed according to a pre-specified two-level target product profile based on the change in \(\log_{10}\)(TTP) slope relative to the control slope. Let \(k\) be an arm indicator ranging from \(k=1,\ldots,K\) where \(k=1\) denotes control. Let \(\theta\) denote the percent change in \(\log_{10}\)(TTP) slope relative to the control slope. The quantities that must be pre-specified for the target product profile include the "target value" or level of efficacy corresponding to solid competitiveness, \(\theta_{TV}\), the "minimum acceptable value" or minimal level of acceptable efficacy, \(\theta_{MAV}\), the maximum allowable risk that an arm is issued a NO-GO decision when it has an unequivocal improvement in efficacy, \(\tau_{TV}\), and the maximum allowable risk that an arm is advanced that does not reach the minimal level of acceptable efficacy, \(\tau_{MAV}\). _Motivating Question 3 (Arm ranking)._ Can we reliably rank among multiple promising arms for decision-making? Finally, arms will be ranked. Let \((r)\) denote the true relative ranking in the steepness of \(\log_{10}(\text{TTP})\) slope, where \(r=1\) denotes the steepest slope. We propose a suite of posterior probability estimands for the relative ranking of the arms and their comparison with the control. We also report a credible estimate (median of the Bayesian posterior distribution) for the relative percent-change in \(\log_{10}(\text{TTP})\) slope as compared to the control, along with a credible interval (confidence level: \(1-\alpha\)). Table 1 displays the proposed decision objectives, their triggers, and statistical estimands. We propose a sequential application of the framework as it is intuitive and better reflects natural decision-making in terms of predetermined hierarchies of risk tolerance. Figure 1 demonstrates such a stepwise decision-making framework. \begin{table} \begin{tabular}{l l l} \hline \hline **Objective** & **Trigger** & **Statistical Estimand** \\ \hline \multirow{2}{*}{Arm deprioritization} & High number of observed unfavorable events & No. 
of unfavorable outcomes \(\geq p\) \\ \cline{2-3} & NO-GO: Low probability that target value is met & \(\text{Pr}_{\theta}(\theta_{k}\geq\theta_{TV}|X)\leq\tau_{TV}\) \\ \cline{2-3} & Continue: Neither ’NO-GO’ nor ’GO’ conditions met & \(\text{Pr}_{\theta}(\theta_{k}\geq\theta_{TV}|X)>\tau_{TV}\) and \\ & & \(\text{Pr}_{\theta}(\theta_{k}>\theta_{MAV}|X)\leq 1-\tau_{MAV}\) \\ \cline{2-3} & GO: High probability that minimum acceptable value is exceeded and at & \(\text{Pr}_{\theta}(\theta_{k}\geq\theta_{TV}|X)>\tau_{TV}\) and \\ & least modest probability that target value might be exceeded & \(\text{Pr}_{\theta}(\theta_{k}>\theta_{MAV}|X)>1-\tau_{MAV}\) \\ \hline \multirow{3}{*}{Arm ranking} & Confidence arm slope is steeper than control & \(\text{Pr}_{\theta}(\theta_{k}>\theta_{1}|X)\) \\ \cline{1-1} & Confidence arm has steepest slope & \(\text{Pr}_{\theta}(\theta_{k}=\theta_{1}|X)\) \\ \cline{1-1} \cline{2-3} & Confidence arm is in top 2 steepest slopes & \(\text{Pr}_{\theta}(\theta_{k}\in\{\theta_{(1)},\theta_{(2)}\}|X)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Proposed quantities for the multi-metric decision-making framework. Figure 1: Example flowchart of the decision-making framework applied in a sequential manner. The third component (_Does it rank well?_) is in a dashed-line box as it is only relevant when more than one arm has successfully advanced through the first two decision-making steps. ### Simulation Study We describe our simulation study using the the Aims, Data Generation, Estimand, Methods, and Performance Measures (ADEMP) framework outlined by [12]. #### 2.3.1 Aims Our overall aim is to evaluate how well our framework can de-risk decision-making around arm selection based on the objectives of depriitization, performance, and ranking for multi-arm phase II trials. #### 2.3.2 Data-generating mechanism **TTP.** The weekly individual-level TTP data is simulated from a parametric linear mixed effects model using the approach described by Arnold et al. [13]. Analysis of longitudinal TTP data from the REMoxTB phase III trial [14] motivated our choice. For individual \(i\) and visit \(j\), let \(T_{ij}\) denote the weeks since randomization at visit \(j\). Let \(X_{i}\) denote the assigned treatment arm for individual \(i\), \(X_{i}=1,\ldots,K\) where \(X=1\) denotes the control arm. Equation 1 allows for flexibility in individual-level intercepts and slopes. \[\log_{10}(\text{TTP}_{ij})=\beta_{0i}+\beta_{1i}T_{ij}+\beta_{2}\mathbb{I}\{X _{i}=2\}T_{ij}+\cdots+\beta_{K}\mathbb{I}\{X_{i}=K\}T_{ij}+e_{ij} \tag{1}\] We pre-specify the random intercept \(\beta_{0i}\sim N(\beta_{0},\sigma_{g_{i}}^{2})\), the random slope \(\beta_{1i}\sim N(\beta_{1},\sigma_{g_{2}}^{2})\), the correlation between the random effects \(\rho=\text{Cor}(\beta_{0i},\beta_{1i})\) and the residual error \(e_{ij}\sim N(0,\sigma_{e}^{2})\). \(\mathbb{I}\{\}\) is an indicator function, returning 1 when the condition is true and 0 otherwise. The parameter values used for data-generation are defined in Section A.1 of the Supplemental Material. **Unfavorable outcomes.** Individual-level time to unfavorable outcomes, \(t_{i}\), measured from end of treatment, is simulated using a two parameter Weibull proportional hazards model (Equation 2). All individuals are assumed to complete treatment. Assuming there is no loss to follow-up, event times are censored at the end of 52 weeks of post-randomization follow-up if an unfavorable outcome does not occur before. 
We assume that an individual's hazard of unfavorable outcome depends only on their intervention assignment, not on their individual-level TTP trajectory; correlation between intermediate and final outcomes is therefore induced only at the level of allocated treatment arm. \[\ln h(t_{i})=\ln(pt^{p-1})+\beta_{0}+\beta_{1}\mathbb{I}\{X_{i}=2\}+\ldots+ \beta_{k}\mathbb{I}\{X_{i}=K\}+\epsilon_{i} \tag{2}\] The Weibull parameters are tuned such that approximately 75% of unfavorable outcome events occur within the first 13 weeks of post-intervention follow-up [15] (setting scale parameter \(p=0.425\)) and such that unfavorable outcomes by the end of follow-up occur according to pre-specified rates. **Interim.** Enrolment dates are randomly assigned such that a rate of ten patients are enrolled per week and randomized to one of five different arms. The interim analysis occurs one week after complete TTP results are available for the sample size of interest and uses the full TTP data as well as any unfavorable outcome data accumulated up to that point in time. All simulated datasets consist of one control and four novel arms. TTP and unfavorable outcomes were simulated according to the parameterizations in Table 2. TTP is only simulated for 8 weeks post-randomization. We consider three settings for TTP slopes representing evenly spaced slopes with a clear winner ('One Winner'), a mixture of steep and shallow slopes ('Two Winners'), and a setting were all four arms have similarly steep slopes ('Four Winners'). We also consider three settings for unfavorable outcome rates whereby 2.5% unfavorable outcome is considered desirable and 5% is considered minimal for treatment shortening \begin{table} \begin{tabular}{l l l} \hline \hline Endpoint & Setting & Conditions (Arm \(k\) = 2,3,4,5) \\ \hline Relative \% TTP Slope (Control: \(\theta_{1}=0\%\)) & One Winner & 10\%, 20\%, 30\%, 40\% \\ \(\theta_{2},\theta_{3},\theta_{4},\theta_{5}\) & Two Winners & -10\%, 10\%, 35\%, 40\% \\ & Four Winners & 35\%, 37\%, 39\%, 41\% \\ \hline \multirow{3}{*}{Unfavorable outcome Rates (Control: 5\%)} & Mixed & 10\%, 5\%, 5\%, 2.5\% \\ & All Minimal & 5\%, 5\%, 5\%, 5\% \\ \cline{1-1} & All Desirable & 2.5\%, 2.5\%, 2.5\% \\ \hline \hline \end{tabular} \end{table} Table 2: Simulation settings for relative percent change in \(\log_{10}(\text{TTP})\) slope and unfavorable outcome rate. Note, \(k=1\) is the control arm and is used as the comparator. in the context of a 4-month regimen. All possible combinations of TTP and unfavorable outcome were simulated for each possible sample size in 1,000 simulated datasets representing settings where the intermediate and final outcomes were well correlated (steep slopes and low unfavorable outcome rates correspond) and where they were poorly correlated (shallow slopes and low unfavorable outcome rates correspond, and vice versa). Results for any combinations not described here are available in the Supplemental Material and GitHub repository ([https://github.com/sdufault15/tb-seamless-design](https://github.com/sdufault15/tb-seamless-design)). #### 2.3.3 Targets of analysis The targets of analysis are the arm decision objectives as supported by the framework metrics (Table 1). Specifically, we aim to determine whether the framework, when used with standard phase II sample sizes, is sufficient to determine the appropriate arm(s) to de-prioritize or progress, with an acceptable level of risk. 
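To make the data-generating mechanism of Section 2.3.2 concrete, the sketch below simulates weekly \(\log_{10}\)(TTP) trajectories from the random-intercept, random-slope model of Equation 1 and arm-level unfavorable outcomes from the Weibull model of Equation 2. It is only a schematic re-expression: the original simulations were written in R, the variance components and fixed effects shown here are illustrative placeholders rather than the values given in Section A.1 of the Supplemental Material, and censoring at the 42-day MGIT limit is approximated by simply capping the simulated values.

```python
import numpy as np

rng = np.random.default_rng(2023)

def simulate_arm_ttp(n, weeks, beta0, beta1, rel_slope, sd_int, sd_slope, rho, sd_err):
    """Weekly log10(TTP) for one arm: random intercepts/slopes per patient (Eq. 1).
    rel_slope is the percent change in slope relative to control (0 for the control arm)."""
    cov = np.array([[sd_int**2, rho * sd_int * sd_slope],
                    [rho * sd_int * sd_slope, sd_slope**2]])
    b = rng.multivariate_normal([beta0, beta1 * (1 + rel_slope)], cov, size=n)
    t = np.arange(1, weeks + 1)                      # visit weeks 1..8
    mu = b[:, [0]] + b[:, [1]] * t                   # n x weeks mean trajectories
    y = mu + rng.normal(0.0, sd_err, size=mu.shape)  # residual error e_ij
    return np.minimum(y, np.log10(42))               # crude stand-in for the 42-day limit

def simulate_unfavorable(n, event_rate, shape=0.425, follow_up=52):
    """Unfavorable outcomes from a Weibull model (Eq. 2) whose scale is tuned so the
    probability of an event within `follow_up` weeks equals `event_rate`."""
    scale = follow_up / (-np.log(1 - event_rate)) ** (1 / shape)
    times = scale * rng.weibull(shape, size=n)
    return times <= follow_up                        # True = unfavorable outcome observed

# Example: control plus the 'One Winner' TTP setting with the 'Mixed' unfavorable rates.
rel_slopes = [0.0, 0.10, 0.20, 0.30, 0.40]
event_rates = [0.05, 0.10, 0.05, 0.05, 0.025]
arms = [(simulate_arm_ttp(30, 8, beta0=1.0, beta1=0.08, rel_slope=s,
                          sd_int=0.2, sd_slope=0.02, rho=0.3, sd_err=0.1),
         simulate_unfavorable(30, r))
        for s, r in zip(rel_slopes, event_rates)]
```

Each arm entry pairs a 30 × 8 matrix of weekly \(\log_{10}\)(TTP) values with a vector of unfavorable-outcome indicators; stacking the arms and adding staggered enrolment dates reproduces the structure analyzed at the interim.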
#### 2.3.4 Analysis methods

The weekly \(\log_{10}\)(TTP) data are analyzed using a Bayesian linear mixed effects model with random intercept and random slope specified at the level of the individual and weakly informative priors. The model formula is reported in the Appendix (Eq. A.1), but echoes that used for data generation (Eq. 1). Bayesian methods were chosen since they lend themselves to direct probability statements addressing the likelihood of arm success that better facilitate complex decision-making involving non-statisticians [4, 16, 17]. Additionally, in this setting, Bayesian methods are desirable because of their ability to handle limit-censoring of the outcome variable [18]. The maximum recommended MGIT incubation time for a sputum sample is 42 days, resulting in a maximum observable TTP value of 42 days and right censoring of TTP values above this limit [8]. While alternative approaches exist for handling right-censored outcome variables, likelihood-based approaches have been integrated into standard Bayesian statistical software and are readily available in the setting of non-linear mixed effects models. Unfavorable outcomes are counted at the arm level and compared against count-based thresholds as described in Table 1. Simulations and analyses are performed using R version 4.1.2 (2021-11-01) "Bird Hippie" [19]. All code necessary to simulate the data, perform the analyses, and recreate the figures presented in this manuscript is available in a GitHub repository maintained by the first author ([https://github.com/sdufault15/tb-seamless-design](https://github.com/sdufault15/tb-seamless-design)). Bayesian estimation was performed with the brms package [18, 20].

#### 2.3.5 Performance measures

We evaluate estimator performance by assessing the following across a range of effect and sample sizes: the proportion of simulations where 1) the arm with the true steepest slope was estimated to have the steepest observed \(\log_{10}\)(TTP) slope, 2) the arm with the true steepest slope was estimated to have one of the top two observed steepest \(\log_{10}\)(TTP) slopes, and 3) the null hypothesis of no difference could be rejected based on a 95% credible interval (power) when comparing slopes between each arm and the control. To assess the performance of the proposed multi-metric framework (Table 1), we first consider the performance of the estimators individually by objective: arm deprioritization, arm performance, and arm ranking. For arm deprioritization, we examine the rates of deprioritization for desirable, minimal, and sub-optimal arms when the unfavorable outcome threshold is set at fewer than one, two, or three unfavorable events by the time of the first interim analysis. Arm performance is evaluated by the proportion of simulations returning "GO", "NO-GO", and "Continue" decisions for an array of the \(\log_{10}\)(TTP) slopes and sample sizes. For arm ranking, we focus on the proportion of simulations returning posterior probability estimates that favor the arm with the true steepest slope over the arm with the true second steepest slope (\(\text{Pr}_{\theta}(\hat{\theta}_{(1)}=\theta_{(1)}|X)-\text{Pr}_{\theta}(\hat{\theta}_{(2)}=\theta_{(1)}|X)\)) in order to identify our ability to differentiate between top performers as the gap in their performance decreases from 10% to 2%. Finally, we examine how each of these metrics can contribute to decision-making when used simultaneously.
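For reference, every estimand in Table 1 reduces to a simple summary of posterior draws of \(\theta_{k}\). The sketch below is a schematic Python re-expression of those summaries (the analysis itself fits the model with brms in R); the posterior draws are fabricated for illustration, and the thresholds correspond to the values used in the simulation study (\(\theta_{MAV}=0\%\), \(\theta_{TV}=20\%\), \(\tau_{MAV}=\tau_{TV}=0.025\)).

```python
import numpy as np

theta_TV, theta_MAV = 0.20, 0.0   # target value and minimum acceptable value
tau_TV, tau_MAV = 0.025, 0.025    # risk tolerances

def tpp_decision(draws_k):
    """Two-level TPP decision for one arm, following the triggers in Table 1."""
    p_tv = np.mean(draws_k >= theta_TV)
    p_mav = np.mean(draws_k > theta_MAV)
    if p_tv <= tau_TV:
        return "NO-GO"
    return "GO" if p_mav > 1 - tau_MAV else "Continue"

def ranking_metrics(draws):
    """Posterior ranking probabilities; columns of `draws` are the novel arms."""
    k = draws.shape[1]
    order = np.argsort(-draws, axis=1)                  # per-draw ranking, steepest first
    p_steepest = np.mean(order[:, [0]] == np.arange(k), axis=0)
    p_top2 = np.mean((order[:, :2, None] == np.arange(k)).any(axis=1), axis=0)
    p_beats_control = np.mean(draws > 0.0, axis=0)      # control slope corresponds to theta = 0
    return p_beats_control, p_steepest, p_top2

# Fabricated posterior draws of theta (relative % change in slope) for four novel arms.
draws = np.random.default_rng(1).normal([0.05, 0.15, 0.25, 0.35], 0.10, size=(4000, 4))
decisions = [tpp_decision(draws[:, j]) for j in range(draws.shape[1])]
medians, ci95 = np.median(draws, axis=0), np.quantile(draws, [0.025, 0.975], axis=0)
p_gt_control, p_best, p_top2 = ranking_metrics(draws)
```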
Because the relationship between TTP and unfavorable events is not well understood, we additionally assess the performance of the framework as the correspondence between TTP slope and unfavorable events becomes less well correlated. ## 3 Results ### Evaluation of estimator performance When 30 patients are enrolled per arm and there is at least a 5% difference between the steepest and second steepest slopes, more than 65.2% of simulated datasets returned estimates that would correctly estimate the true steepest arm as having the observed steepest slope (Fig. 2A). The ability to discriminate and correctly identify the true steepest arm decreases to 38.4% at 30 patients per arm when the difference in steepest slopes shrinks to 2%. At this margin, performance only increases to 52.2% when sample size is increased to 80 patients per arm. If advancing the top two best performers within a simulated study is an option, the chance that the true best arm is contained within the advancing subset increases substantially: given a sample size of only 20 per arm at least 91.4% of simulated datasets advancing the estimated top two arms will correctly advance the arm with the true steepest slope when the difference between the two best is at least 5% ('1 Winner' and '2 Winner'). This remained true in 60.1% of simulations when the difference is as small as 2% ('4 Winner's). Figure 2C demonstrates estimates of traditional "power" to detect a relative difference between a novel arm's estimated slope and the control slope when comparing the null value of zero against the estimated 95% credible interval. As expected, the power to make decisions based solely on this metric is lower than typically desired given the sample size restrictions and the variability. This result echoes what has previously been demonstrated on the futility of arm selection solely on the basis of traditional hypothesis testing when the feasible sample size is low. ### Using the proposed metrics separately **Arm deprioritization.** Figure 3 shows the impact of various count-based thresholds for the step of arm de-prioritization. A good decision threshold should result in a high probability for deprioritizing sub-optimal arms and a low probability for desirable arms. At a sample size of \(n_{k}=30\) per arm, an unfavorable outcome threshold of 2 is associated with a 22% probability of deprioritizing a sub-optimal arm while maintaining a low risk (3%) of stopping a desirable arm. If the sample size per arm can be increased to \(n_{k}=40\), the efficiency in deprioritizing sub-optimal arms based solely on early observation of unfavorable outcomes more than doubles (53%) while maintaining a relatively low risk of deprioritizing a desirable arm (7%) given the same threshold. **Arm performance.** Our second step in arm assessment is based on whether the arm meets a two-level target product profile on the \(\log_{10}\)(TTP) slope. Figure 4 displays the impact of assessing arm performance on the basis of the \(\log_{10}\)(TTP) slope against a multi-level target product profile with prespecified values of \(\theta_{MAV}=0\%,\theta_{TV}=20\%,\tau_{MAV}=\tau_{TV}=0.025\). In this setting, an arm with a 10% poorer slope than the control would be flagged for deprioritization (NO-GO) at least 44% of the time, even when the sample size is as low as 20 per arm. 
The probability of advancing (GO) promising arms, those with a \(\log_{10}\)(TTP) slope 20% greater than the control, is at least 25% with a sample size of 20 per arm and increases with increasing sample size. Notably, at a sample size of 40 patients per arm, a promising arm with a \(\log_{10}\)(TTP) slope 20% greater than the control is rarely stopped (by design, this proportion hovers around \(\tau_{TV}\)) and is flagged for early advancement in nearly 50% of simulations. **Arm ranking.** Figure 5 demonstrates that the ability to properly rank the arm with the true steepest slope depends on sample size and competitiveness of the other arms. For clarity, we have restricted these figures to compare the arms with the true steepest and second steepest slopes in \(\log_{10}\)(TTP). Each density curve corresponds to the distribution of posterior probability Figure 2: Frequentist summary of estimator performance across changes in sample size (\(n_{k}\)) and differences in \(\log_{10}\)(TTP) slope. For all panels, results are based on 1,000 simulated datasets for each sample size and condition. **A)** The proportion of simulations (\(y\)-axis) where a given arm was estimated to have the steepest slope. **B)** The proportion of simulations (\(y\)-axis) where the steepest estimated slope belonged to one of the true top two steepest arms. **C)** The proportion of simulations (\(y\)-axis) where the null of no relative difference in slope between an intervention arm and the control arm (null value = 0%) is excluded from the estimated 95% credible interval around the relative percent change in slopes (\(x\)-axis).. estimates that a given arm is the steepest; ideally, the arm with the true steepest slope (\(\theta_{(1)}\), blue curve) would have a posterior probability estimate of 1 in all simulations and the other arms would have posterior probability estimates of 0. Despite uncertainty in estimation in small sample sizes, the posterior probability estimates are often sufficiently higher for the arm with the true steepest slope than for its competitors (median, vertical lines), resulting in a sufficient metric for decision-making. For example, when \(\theta_{(1)}-\theta_{(2)}\geq 10\%\) ('1 Winner', Fig. 5A), a sample size of 30 per arm is sufficient to separate the posterior probability distributions in most simulated datasets. ### Evaluation of the proposed metrics as an overall package We now examine the performance of the framework when applied in concert to decision-making. In practice, a holistic approach should be taken to guide decision-making, including the evaluation of safety data. These results are generated under a series of hypothetical, rigid decision-criteria in order to gain intuition into the operating characteristics of the framework. Figure 6 shows the percentage of simulated datasets where sub-optimal arms (true unfavorable outcome rate: 10%) are deprioritized based on the metrics included for decision-making. For this example, TTP results are based on the following settings: A) arm \(k=2\) from '2 Winners', B) arm \(k=3\) from '2 Winners' and, C) arm \(k=4\) from '2 Winners'. Note, for simplicity we have used \(\text{Pr}_{\theta}(\theta_{k}\in\{\theta_{(1)},\theta_{(2)}\}|X))\leq 0.6\) as a proxy for the ranking metrics, effectively de-prioritizing any arm that is unlikely to rank in the top two performers. 
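Concretely, the rigid criteria used for this exercise can be written as a single check per arm, where an arm is deprioritized if any of the three metrics flags it. The function below is only a sketch of that rule (the argument names are ours); in an actual trial the metrics are reviewed holistically, together with safety data, rather than applied mechanically.

```python
def deprioritize(n_unfavorable, tpp_decision, p_top2,
                 unfavorable_threshold=2, ranking_cutoff=0.6):
    """Rigid proxy rule used to summarize the simulations: flag an arm if any
    of the unfavorable-outcome, TPP, or ranking criteria is triggered."""
    flags = {
        "unfavorable_outcomes": n_unfavorable >= unfavorable_threshold,
        "tpp": tpp_decision in ("NO-GO", "Continue"),
        "ranking": p_top2 <= ranking_cutoff,
    }
    return any(flags.values()), flags
```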
For a sample size of 30 per arm, when the sub-optimal arm has a relative \(\log_{10}\)(TTP) slope of -10% compared to the control, all three metrics deprioritized a sub-optimal arm for advancement in 20.5% of all simulated datasets (Fig. 6A). In other words, 20.5% of the time, it doesn't matter what metric is used, the decision would be the same in terms of stopping poor performing arms. The advantages of the multimetric framework are then evident when seeing how the use of all of the metrics can improve upon this baseline of 20% efficiency in deprioritizing sub-optimal arms. To correctly deprioritize 100% of sub-optimal arms in this setting, we must incorporate at least one of the TTP-based metrics. As TTP slope becomes less of a reliable proxy for the primary endpoint, both in terms of improved performance relative to control and no longer falling last in terms of arm ranking, the framework remains effective in deprioritizing sub-optimal arms, so long as each component piece is used (Fig. 6B-C). Figure 6B uses the 'Two Winners' condition for simulating TTP slope, Figure 3: The proportion of simulations where an arm with a given unfavorable outcome rate (panels) would be flagged for deprioritization on the basis of collected unfavorable outcome counts at the first interim analysis given varying sample sizes per arm (\(n_{k}\)) and pre-specified unfavorable outcome thresholds. The first interim analysis is triggered by the complete collection of 8 weeks of post-randomization \(\log_{10}\)(TTP) data on \(n_{k}\) patients per arm. Results are based on the evaluation of 1,000 simulated datasets. meaning the true rank of the evaluated arm's TTP slope is third steepest. Even with these improvements, 98.9% of sub-optimal arms would fail to advance based on the framework. When the relative slope in \(\log_{10}\)(TTP) increases to exceed the pre-specified target value from the TPP framework (\(\theta_{TV}\) = 20%, Fig. 6C), this framework still correctly deprioritizes a sub-optimal arm 39.4% of the time, an improvement over the relapse only decision threshold. It is expected that performance will decrease in this setting (Fig. 6C) since the TTP slope meets the target product profile and ranks second steepest among the novel arms considered. For clarity, an example table of the metrics estimated from a single simulated dataset is included in the Appendix (A.3). This reflects what would be used for decision-making at the interim analysis during a single trial. ## 4 Discussion Decision-making at any point along the clinical trial pathway is an inherent challenge. We have proposed a flexible, multi-metric framework to de-risk decision-making at interim analyses during phase II trials in TB and, with slight adaptation, other disease settings. Our framework combines innovation in both performance evaluation (multi-level target product profile frameworks) [6] and arm ranking, and couches all estimation in a readily interpretable Bayesian estimation framework. Using a simulation study, we have demonstrated our proposed framework's suitability to capture critical elements of regimen performance even when sample sizes are low. By examining increasingly discordant behavior between the intermediate endpoint used in decision-making and the primary endpoint, we have demonstrated how valuable a multiple metric framework becomes for informed decision-making. 
Middle-development TB clinical trials have relied on a handful of commonly used candidate biomarkers (e.g., 14-day EBA, colony forming unit counts, proportion culture negative at 2 months, time to stable culture conversion) as well as novel biomarkers (e.g., MBLA, RS Ratio, gene signature, PET-CT, sputum LAM) to assess regimen efficacy. The relative utility of the various endpoints remains a topic of debate [9, 10, 21, 22, 23, 24, 25]. Our work is based on TTP as the intermediate endpoint as it is the most commonly and readily available outcome in TB trials and appears somewhat promising in terms of trial-level correlation with the primary endpoint. In this setting, we are not using TTP on an individual level to predict or anticipate a single patient's Figure 4: The proportion of trials where an arm with a given percent change in \(\log_{10}\)(TTP) slope relative to the control (panels) would be assigned a particular decision at the first interim analysis given varying sample sizes per arm (\(n_{k}\)). Results are based on the evaluation of 1,000 simulated datasets and assume \(\theta_{MAV}=0\%,\theta_{TV}=20\%,\tau_{MAV}=\tau_{TV}=0.025\). Figure 5: Comparison of distributions of posterior probability estimates of whether a given regimen arm has the steepest \(\log_{10}\)(TTP) slope, \(\Pr_{\theta}(\theta_{k}=\theta_{(1)})\) for the arms with the true steepest \(\theta_{(1)}\) and second steepest \(\theta_{(2)}\) slopes. Results are shown for differences A) 10% (‘1 Winner’), B) 5% (‘2 Winners’), and C) 2% (‘4 Winners’). Results are based on 1,000 simulated datasets for each sample size (row-wise panels, \(n_{k}\)) and TTP condition (column-wise panels). Vertical lines mark the median of the corresponding distributions of posterior probability estimates. Figure 6: The percentage of simulated studies where sub-optimal arms (fixed unfavorable outcome rate: 10%) are deprioritized on the basis of two or more unfavorable outcomes (**Unfavorable Outcome**), less than a 60% posterior probability of having one of the top two steepest slopes (**Ranking**), and receiving a “NO-GO” or “Continue” decision based on the multilevel target product profile on TTP slope (**TPP**). Each panel (L-R) corresponds to a decrease in agreement between the underlying unfavorable outcome rate (fixed) and the time-to-positivity activity. Specifically, the TTP results are based on the following settings: A) arm \(k=2\) from ‘2 Winners’, B) arm \(k=3\) from ‘2 Winners’ and, C) arm \(k=4\) from ‘2 Winners’. Results are based on 1,000 simulated datasets per setting and a sample size of 30 per arm. The total percentage of simulated datasets where a sub-optimal arm is improperly advanced based on not meeting any of the proposed cutoffs for prioritization is noted at the bottom of each Venn diagram. likelihood of cure. Instead, we are assuming that, at the trial-level, the intermediate TTP slope and final outcomes are correlated and that the differences between arms that is observed on TTP is meaningfully correlated with the differences expected in terms of arm performance for the primary endpoint. In the presence of a positive individual level correlation (which may be a plausible assumption for existing drugs [26] and perhaps also for new drugs), we anticipate the operating characteristics of the framework to be even more favorable. As research progresses on this endpoint, general learnings about the relevance of TTP for regimen development can be used to adjust the target and minimum acceptable values. 
Our proposed framework, when applied with an appropriate model for the intermediate endpoint, can be extended or adapted to alternative biomarkers, should another option (or the inclusion of additional biomarkers) be of interest to decision-makers. Bayesian methods for the evaluation of Phase II studies are growing in acceptability [17] and have been approved by regulatory agencies as the primary method of analysis [27, 28]. One advantage of Bayesian estimation is the ability to explicitly state and incorporate prior information into the estimation procedure. In the setting of TB studies, there is a wealth of knowledge around the standard of care. Ignoring the decades of evidence that has been accumulated is inefficient and, perhaps, unethical when phase II studies are required to keep sample sizes low for equipoise. Though not explored here, future research and applications of this framework should consider the effect of incorporating prior information for the \(\log_{10}(\text{TTP})\) slope for the standard of care. Following guidance generated by ongoing efforts to incorporate translational pre-clinical and clinical data to improve regimen evaluation (e.g., ACTG RAD-TB), such data sources could also be used to inform reasonable priors on novel regimens as well. Proper incorporation of informative priors should decrease estimator variability in the \(\log_{10}(\text{TTP})\) slopes, ultimately 1) strengthening the ability to compare novel regimens against the standard of care, 2) improving confidence in ranking, particularly for novel regimens with small relative differences in slope, and 3) result in fewer "Continue" categorizations within the target product profile framework. Each of these changes will improve efficiency in the evaluation of regimen performance. Further, it is straightforward to perform sensitivity checks on the impact of the priors and can be an additional tool in guiding decision-makers [29]. One concern with the use of Bayesian methods for the planning and analysis of clinical trials is its inability to strictly control the type I error rate. This is further complicated by our recommendation that the multi-metric framework be applied holistically, upon the close evaluation of all metrics to comprehensively evaluate a study arm's performance and promise. These concerns are worth investigating and future research will evaluate how more complex decision frameworks, such as the one proposed here, can be properly evaluated to limit this risk. One key advantage of our multi-metric framework includes a direct adaptability to decision-makers' level of risk tolerance. Instead of focusing on a strict frequentist type I error, we have shown that this framework has good operating characteristics for prioritizing arms with desirable performance and de-prioritizing sub-optimal arms which directly addresses the objectives of middle-development clinical trials. Further, strict control of the type I error rate may not be the driving determinant in study design for some trial settings. In UNITE4TB-01, this framework can be used to identify which arms advance from phase IIb to phase IIc, a period of further observation where the duration of the arm is also randomized. Evidence generated in this second phase will help to further elucidate which arms (and durations) should be advanced into large, definitive phase III trials. 
In summary, we propose a Bayesian decision framework, building on the two-level target product profile [6], for the setting of multi-arm, middle-development clinical trials using intermediate endpoints that are not perfect surrogates. We have shown that our flexible multi-metric framework has good operating characteristics and is a practical solution for de-risking drug development.

## Disclaimer

This communication reflects the views of the authors and neither IMI nor the European Union and EFPIA are liable for any use that may be made of the information contained herein.

## Conflict of Interest

UNITE4TB (academia and industry united innovation and treatment for tuberculosis) is a public-private partnership with representation from academic institutions, small- and medium-sized enterprises (SMEs), public organizations, and pharmaceutical companies. All partners of UNITE4TB were provided the opportunity to review a final version of this manuscript for factual accuracy, but the authors are solely responsible for final content and interpretation. Katie Rolfe is employed by and holds shares in GSK. Angela M. Crook and Katie Rolfe are co-leaders of the 'Clinical Trial Design' Work Package within the UNITE4TB consortium.

## Acknowledgments

This project has received funding from the Innovative Medicines Initiative 2 Joint Undertaking (JU) under grant agreement No 101007873. The JU receives support from the European Union's Horizon 2020 research and innovation programme and EFPIA, Deutsches Zentrum für Infektionsforschung e. V. (DZIF), and Ludwig-Maximilians-Universität München (LMU). EFPIA/AP contribute to 50% of funding, whereas the contribution of DZIF and the LMU University Hospital Munich has been granted by the German Federal Ministry of Education and Research. Suzanne M. Dufault has received funding from the UCSF Center for Tuberculosis and TB RAMP scholar program (NIH/NIAID R25AI147375).
2306.15363
Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability
Evasion attacks are a threat to machine learning models, where adversaries attempt to affect classifiers by injecting malicious samples. An alarming side-effect of evasion attacks is their ability to transfer among different models: this property is called transferability. Therefore, an attacker can produce adversarial samples on a custom model (surrogate) to conduct the attack on a victim's organization later. Although literature widely discusses how adversaries can transfer their attacks, their experimental settings are limited and far from reality. For instance, many experiments consider both attacker and defender sharing the same dataset, balance level (i.e., how the ground truth is distributed), and model architecture. In this work, we propose the DUMB attacker model. This framework allows analyzing if evasion attacks fail to transfer when the training conditions of surrogate and victim models differ. DUMB considers the following conditions: Dataset soUrces, Model architecture, and the Balance of the ground truth. We then propose a novel testbed to evaluate many state-of-the-art evasion attacks with DUMB; the testbed consists of three computer vision tasks with two distinct datasets each, four types of balance levels, and three model architectures. Our analysis, which generated 13K tests over 14 distinct attacks, led to numerous novel findings in the scope of transferable attacks with surrogate models. In particular, mismatches between attackers and victims in terms of dataset source, balance levels, and model architecture lead to non-negligible loss of attack performance.
Marco Alecci, Mauro Conti, Francesco Marchiori, Luca Martinelli, Luca Pajola
2023-06-27T10:21:27Z
http://arxiv.org/abs/2306.15363v1
# Your Attack Is Too DUMB: ###### Abstract. Evasion attacks are a threat to machine learning models, where adversaries attempt to affect classifiers by injecting malicious samples. An alarming side-effect of evasion attacks is their ability to transfer among different models: this property is called _transferability_. Therefore, an attacker can produce adversarial samples on a custom model (surrogate) to conduct the attack on a victim's organization later. Although literature widely discusses how adversaries can transfer their attacks, their experimental settings are limited and far from reality. For instance, many experiments consider both attacker and defender sharing the same dataset, balance level (i.e., how the ground truth is distributed), and model architecture. In this work, we propose the **DUMB** attacker model. This framework allows analyzing if evasion attacks fail to transfer when the training conditions of surrogate and victim models differ. DUMB considers the following conditions: **D**ataset so**U**rees, **M**odel architecture, and the **B**alance of the ground truth. We then propose a novel testbed to evaluate many state-of-the-art evasion attacks with DUMB; the testbed consists of three computer vision tasks with two distinct datasets each, four types of balance levels, and three model architectures. Our analysis, which generated 13K tests over 14 distinct attacks, led to numerous novel findings in the scope of transferable attacks with surrogate models. In particular, mismatches between attackers and victims in terms of dataset source, balance levels, and model architecture lead to non-negligible loss of attack performance. Adversarial Machine Learning, Adversarial Attacks, Evasion Attacks, Transferability, Surrogate Model + Footnote †: ccs: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (NLP), 2019, New York, NY, USA 2019 Our contributions can be summarized as follows: * We propose the **DUMB** attacker model, a novel evaluation system to measure evasion transferability. * We propose a novel testbed to evaluate evasion transferability with the DUMB attacker model. The testbed comprises three distinct computer-vision tasks, four distinct balance levels of the classes, and three distinct state-of-the-art models. * An extensive evaluation of state-of-the-art evasion attacks with the DUMB attacker model. FindingsAfter evaluating many evasion attacks on all possible combinations of dataset source, model architecture, and class balance of the datasets, our findings can be summarized as follows: 1. Less robust models are more susceptible to adversarial perturbations than highly performing models. 2. Adversarial attacks in literature face difficulty transferring across architectures. 3. Simple image obfuscation is an effective offensive strategy. 4. Adversarial attacks struggle when transferring. 5. Not all basic surrogate models are ideal for evading attacks. 6. The discrepancy in class distributions between surrogate and victim datasets can greatly hinder the effectiveness of evasion attacks. Additionally, targeting the minority class seems to be easier than targeting the majority. 7. Creating surrogate data can negatively impact the effectiveness of transferable attacks. Our testbed and experiments are open-source and available at the following link: [https://github.com/Mhackiori/DUMB](https://github.com/Mhackiori/DUMB). OrganizationThis paper is organized as follows. 
Section 2 summarizes the literature on adversarial machine learning and transferable attacks. Section 3 introduces the DUMB attacker model. Section 4 describes the experimental settings. Sections 5 and 6 present the results and conclusions of our work, respectively. ## 2. Preliminaries Adversarial Machine LearningAdversarial machine learning (AML) is the discipline that studies how adversaries can exploit machine learning (ML) algorithms to conduct an attack. Adversarial attacks can be classified with the following properties (Goodfellow et al., 2014): the _influence_, where attackers can actively affect the training procedure (causative attacks), or they simply do not alter the victims' models (exploratory attacks); the _security violation_, where attackers might attempts to alter victims' model's performance (integrity violation), to make victims' model unavailable (availability violation), or to obtain sensitive information (privacy violation); last, the _specificity_ of the attack, if the attack targets a specific set of samples (targeted attack) or generic samples (untargeted attacks). The definition of an attack is further defined by the attackers' knowledge of the victims' system (e.g., training data, model architecture). In particular, we refer to _white-box_ attacks when the attacker has (nearly) perfect knowledge about the victim's system, setting the worst-case scenario; on the opposite, we refer to _black-box_ attacks when attackers know a little about the target. Evasion AttacksThis work focuses on _evasion attacks_, where attackers aim to modify an input sample to produce a misclassification in the victim's model. Malicious samples \(x^{*}\) can be defined as \(x^{*}=x+r\), where \(x\) is the original sample, and \(r\) is the perturbation. The perturbation \(r\) can be obtained through the following optimization process: \[r=\arg\min_{z}f(x+z)\neq f(x). \tag{1}\] Here, \(z\) is the variable being optimized, which represents the perturbation that is added to the original input \(x\) to create the perturbed input \(x+z\). Many ML algorithms do not guarantee that the optimization is linear or convex, so we cannot always find a closed-form solution. Prior works propose different approaches to estimate such a perturbation; for instance, the Fast Gradient Signed Method (FGSM) (Goodfellow et al., 2014): \[x^{*}=x+\varepsilon\cdot sign(\nabla_{x}J(\theta,x,y)), \tag{2}\] where \(\varepsilon\) is small to ensure an "imperceptible" perturbation, \(J\) is a loss function (e.g., cross-entropy), \(\theta\) the parameters of the model \(f\), and \(y\) the ground truth for the given input \(x\). Transferable AttacksA fascinating aspect of adversarial samples is their ability to potentially fool not only the model \(f\) used to find the perturbation \(r\) for a given sample \(x\) but also unknown models \(f^{\prime}\). This behavior has a strong repercussion in cyber-security: attackers can therefore leverage their own model \(f\) (named substitute or surrogate model) to produce adversarial samples for the victims model. Using a substitute model to generate an attack presents many advantages, such as white-box access. Papernot et al. (2019) defined two distinct transferability scenarios by considering the surrogate and victim models. 
Papernot et al. referred to _intra-technique transferability_ when the two models share the same architecture (e.g., both logistic regression or both Deep Neural Networks), or, vice-versa, to _cross-technique transferability_ when the two models have distinct architectures (e.g., one is a logistic regression and the other a Deep Neural Network). _Adversarial Attacks in Practice._ The literature primarily covers theoretical aspects of threats in machine learning systems. Little is known about attacks in practice, where challenges that occur only in real life might not be considered in controlled environments. Therefore, real-life attacks might be utterly different from what is discussed in the literature (Goodfellow et al., 2014; Dosovitskiy et al., 2015). Consequently, industries might perceive as "innocuous" threats that are considered technically attractive by the research community and "serious" those that are not. For instance, consider Perspective, a toxicity detection model deployed by Google: in their recent report (Krizhevsky et al., 2015), the developers tested their model against a simple NLP attack introduced by Grondahl et al. (2017) that can be deployed by many end-users rather than more complex - and perhaps unrealistic - attacks studied in the literature. A few notable works proved the feasibility of attacking deployed ML applications: "All You Need Is Love", where simple textual perturbations (e.g., typos) endangered toxicity detectors (Krizhevsky et al., 2015); "stealthy porn", where researchers showed that social network users evaded porn detectors by applying simple image filters (Dosovitskiy et al., 2015); attacks on deployment libraries, where attackers can exploit vulnerabilities of the libraries utilized to deploy a machine learning model (Dosovitskiy et al., 2015); "camouflage attack", a threat that exploits image-scaling algorithms to produce evasion in computer-vision applications (Steintein et al., 2017); "Zero-Width Space attack", where invisible Unicode characters inserted in textual samples disrupted the textual representations of many NLP services deployed by top IT companies (Steintein et al., 2017); "captcha attack", where researchers showed potential adversarial samples utilized by Instagram users that endanger the OCR of automatic content moderators (Borda et al., 2017). _Challenges of Transferable Attacks._ Practical constraints might affect the transferability of the attacks as well. We now summarize relevant prior works that attempted to study different variables that might impact the attacks' transferability. Generally, such works are guided by a common observation: it is unrealistic that attackers have knowledge of the victims' systems (e.g., dataset, model architecture), limiting the adoption of surrogate models. For instance, training a surrogate model might be expensive (or even impossible) for an attacker since it requires possessing valid training data. We identified two types of solutions in the literature that relax the constraint of having valid data: (i) cross-domain perturbations, i.e., perturbations computed on a task (e.g., paintings, cartoons, or medical images) that transfer to models trained on a distinct task (e.g., ImageNet classes) (Steintein et al., 2017); (ii) data-free attacks, where the substitute model can be learned thanks to the cooperation between a generative model, a discriminator, and a series of queries to the victim's model (Steintein et al., 2017). Nevertheless, many works analyzed the impact of surrogates on transferable attacks.
Mao et al. (Mao et al., 2017) discuss the problem of transferring attacks among computer-vision Machine-Learning-as-a-Service (MLaaS) platforms and analyze how different models' properties might impact the attack. For instance, the authors found that simple surrogates do not necessarily improve transferability and that there is no dominant architecture for surrogates. Suciu et al. (Suciu et al., 2018) proposed the FAIL attacker model, where the authors investigated the impact of evasion transferability under different types of knowledge of victims' systems: the feature space, the architecture of the model, the labeled instances, and the leverage (i.e., constraints on the type of modification at the feature space). Compared to the previous works, with **DUMB**, we attempt to cover unique aspects of the surrogate training, and in particular **D**ataset so**U**rces, **M**odel architecture, and the **B**alance of the ground truth. In particular, while aspects like the impact of the model architecture have been covered in the literature, others, like the source of the data and the imbalance problem, have not. Therefore, analyses combining these three aspects are, per se, novel, and they can unveil unique patterns of adversarial transferability. ## 3. The DUMB Attacker Model Suppose you are in the shoes of an attacker aiming to evade a victim organization's model \(f^{\prime}\). What are the steps necessary to conduct a (potentially) successful attack? Current literature studies the effect of transferability in settings far from being real (Krizhevsky et al., 2015). Consider the adversary pipeline necessary to generate an adversarial sample; it consists of: (i) finding a suitable dataset that matches the victim's, (ii) choosing a surrogate model \(f\), and (iii) picking a methodology that produces adversarial attacks. When designing such a pipeline, we find the following challenges that _might_ affect the attack execution. _The dataset choice._ Prior works mainly use a dataset shared among attackers and victims. _This is unrealistic_. Building a proper surrogate dataset is all but trivial since attackers and victims might follow different corpus generation strategies. For instance, in the hate speech detection task, Grondahl et al. (Grondahl et al., 2017) show that prior works tackling hate speech propose many datasets following distinct generation procedures; as a result, models trained on a specific dataset generalize poorly to distinct ones. Therefore, in such cases, transferability might be a property not fully guaranteed. _Ground truth distribution._ Prior works mainly assume that attackers and victims use datasets originating from the same source and, therefore, the distributions of the ground truth match. This is a hard constraint in real settings since such distributions might differ for many reasons. First, the two distributions might result from two distinct methodologies to produce the datasets (see _the choice of the dataset_). Second, many preprocessing techniques might be used to augment the training data. This scenario is likely especially when the task is inherently imbalanced (e.g., hate speech detection1). Augmentation techniques can over-sample the minority class (e.g., SMOTE (Borda et al., 2017), Generative Adversarial Networks (Dong et al., 2018)) or undersample the majority one. Footnote 1: In hate-speech detection, datasets are usually strongly imbalanced toward the hateful class (Grondahl et al., 2017).
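To illustrate how attacker and victim ground-truth distributions can drift apart through such resampling choices, the sketch below rebalances the same labelled data in two different ways using the imbalanced-learn library. The library choice and the synthetic feature vectors are illustrative assumptions and not part of this paper's pipeline, which instead undersamples manually (Section 4.2); SMOTE also operates on feature representations rather than raw images.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.default_rng(0)
# Toy feature vectors with a 20/80 class split (e.g., embeddings of images or text).
X = rng.normal(size=(1000, 32))
y = np.array([0] * 200 + [1] * 800)

# A victim might over-sample the minority class with SMOTE (synthetic interpolation)...
X_victim, y_victim = SMOTE(random_state=0).fit_resample(X, y)
# ...while an attacker might instead under-sample the majority class.
X_attacker, y_attacker = RandomUnderSampler(random_state=0).fit_resample(X, y)

print(np.bincount(y_victim), np.bincount(y_attacker))  # both balanced, but built differently
```

Even though both resulting datasets are balanced, they are produced by different procedures, which is exactly the kind of mismatch the B-dimension of DUMB is meant to capture.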
_Model selection._ Prior works consider this scenario when analyzing the transferability of distinct adversarial attacks. Indeed, attackers and victims might use one of the many state-of-the-art models or custom ones. For instance, in computer vision alone, someone might choose among several models to fine-tune, such as VGG (Vogers et al., 2016) (and its many versions like VGG16 and VGG19) and ResNet (He et al., 2017) (e.g., ResNet18, ResNet50). Considering such challenges, we can clearly see a need to enhance the study of adversarial transferability in many distinct scenarios and not limit empirical evaluations to a few artificial settings. Thus, experiments focusing only on white-box (full access to the victims' model) and black-box (little known about the victims' model) settings might not be representative of the many shades that might occur in real life. We address such a gap by proposing the **DUMB** attacker model for transferable samples, which captures many distinct attack scenarios. **DUMB** considers **D**ataset so**U**rces, **M**odel architecture, and the **B**alance of the ground truth, potential factors that might affect the transferability of the attacks. In Table 1, we present eight distinct variations of attacks that can occur in a black-box attack, and in particular, potential mismatches between the source (or surrogate) and target (or victim) models. Subscripts \(a\) and \(v\) stand for attacker and victim, respectively. We highlight that, in real-life conditions, attackers do not know a priori in which attack scenario they are - except for the white-box case. ## 4. Methodology (the DUMB Testbed) To simulate the eight specific cases presented in the DUMB table (Table 1), we design a testbed that considers distinct sources of datasets, different balance levels, and different model architectures. In this section, we describe our experimental setup, starting from the data collection phase (Section 4.1), the definition of the balance levels (Section 4.2), and the choice of the models (Section 4.3). Finally, we describe the attacks that we use and their implementation (Section 4.4) and our testing methodology (Section 4.5). Our GitHub repository contains the code and datasets to reproduce our experiments. ### Dataset Sources (DU-dimension) In this work, we focus on the transferability of binary classifiers, which is a common setting in many cybersecurity applications (e.g., spam/non-spam, phishing/non-phishing, hate/non-hate speech). We focus on computer-vision tasks since most adversarial attacks literature covers this domain. We define three distinct tasks: Bikes&Motorbikes, Cats&Dogs, and Men&Women. Given the specific requirements of our testbed, the datasets for each task have been manually collected and validated according to the following steps. 1. _Data Collection_ - We generate two distinct datasets for each binary task by manually collecting images from two popular search engines: Bing and Google. By creating our own dataset instead of using open-source ones, we can ensure their integrity and have more control over the complexity of \begin{table} \begin{tabular}{l|l|l} \hline \hline **Case** & **Condition** & **Attack Scenario** \\ \hline C1 & \(DU_{a}=DU_{v}\), \(B_{a}=B_{v}\), \(M_{a}=M_{v}\) & The ideal case for an attacker. We identified two potential attack scenarios. (i) Attackers legally or illegally gain information about the victims' system. (ii) Attackers and victims use the state-of-the-art. \\ \hline C2 & \(DU_{a}=DU_{v}\), \(B_{a}\neq B_{v}\), \(M_{a}=M_{v}\) & Attackers and victims use state-of-the-art datasets and model architecture. However, victims modify the class balance to boost the model's performance. This scenario can occur especially with imbalanced datasets. \\ \hline C3 & \(DU_{a}=DU_{v}\), \(B_{a}=B_{v}\), \(M_{a}\neq M_{v}\) & Attackers and victims use standard datasets to train their models. However, there is a mismatch in the model architecture. This scenario might occur when the state-of-the-art presents many comparable models. Or similarly, the victims choose a specific model based on computational constraints. \\ \hline C4 & \(DU_{a}=DU_{v}\), \(B_{a}\neq B_{v}\), \(M_{a}\neq M_{v}\) & Attackers and victims use standard datasets to train their models, while models' architectures differ. Furthermore, victims adopt data augmentation or preprocessing techniques that alter the ground truth distribution (balancing). This scenario can occur especially with imbalanced datasets. \\ \hline C5 & \(DU_{a}\neq DU_{v}\), \(B_{a}=B_{v}\), \(M_{a}=M_{v}\) & Attackers and victims use different datasets to accomplish the same classification task. The ground truth distribution can be equal, especially in inherently balanced tasks. Similarly, models can be equal if they both adopt the state-of-the-art. \\ \hline C6 & \(DU_{a}\neq DU_{v}\), \(B_{a}\neq B_{v}\), \(M_{a}=M_{v}\) & Attackers and victims use different datasets to accomplish the same classification task. Datasets have different balancing because they are inherently generated in different ways (e.g., see the hate speech datasets example) or because the attackers or victims augmented them. Attackers and victims use the same state-of-the-art architecture. \\ \hline C7 & \(DU_{a}\neq DU_{v}\), \(B_{a}=B_{v}\), \(M_{a}\neq M_{v}\) & Attackers and victims use different datasets to accomplish the same classification task. The datasets' ground truth distributions match. Attackers and victims use different models' architectures. \\ \hline C8 & \(DU_{a}\neq DU_{v}\), \(B_{a}\neq B_{v}\), \(M_{a}\neq M_{v}\) & The worst-case scenario for an attacker. Attackers do not match the victims' dataset, balancing, and model architecture. \\ \hline \hline \end{tabular} For simplicity, C1 corresponds to the white-box setting, where attackers can access the victims' model, including gradients. \end{table} Table 1. DUMB attacker model of adversarial transferable samples. \(DU\) = Dataset soUrce, \(M\) = Model architecture, \(B\) = Balance level. the task and possible biases. We collect an average of 14264 images for each dataset. 2. _Duplicate Removal_ - Duplicated images in each dataset are discarded with the _difPy_2 library. After this procedure, an average of 254 images are removed from each dataset. Footnote 2: [https://github.com/eliisemercurry/Duplicate-Image-Finder](https://github.com/eliisemercurry/Duplicate-Image-Finder) 3. _Manual Check_ - Through manual inspection, we ensure that the datasets do not contain erroneous samples (e.g., images not coherent with the classes, paintings, sketches, or low-quality images). Although this procedure might reduce any bias of having different data validation strategies between attackers and victims, it allows us to reveal the true effect of having distinct sources that generate (theoretically) the same type of data. On average, we remove 1854 images from each dataset. Footnote 3: [https://pillow.readthedocs.io/en/stable/](https://pillow.readthedocs.io/en/stable/) 4. _Image Selection_ - We randomly selected, for each dataset, 10000 samples equally split among the two classes. Footnote 4: [https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html) 5. _Image Resizing_ - Using the Python Imaging Library (PIL)3, each image is resized to \(300\times 300\) and converted to RGB.
For the resizing process, we used the antialias option provided by PIL to prevent aliasing artifacts. For each class in each dataset, we split those 5000 samples into training, validation, and test sets with respective ratios of 70%, 10%, and 20% (i.e., 3500 samples for the training set, 500 samples for the validation set, and 1000 samples for the test set). The images contained in the test set will be used not only to first evaluate the models but also to generate the adversarial samples. ### Ground Truth Balancing (B-dimension) A second (potentially) critical variable is the different class balance levels between attacker and defender. We simulate different balancing levels in the training sets with the following ratios: * _Balanced_ - 50% minority class, 50% majority class. * _Weak Imbalance_ - 40% minority class, 60% majority class. * _Medium Imbalance_ - 30% minority class, 70% majority class. * _Strong Imbalance_ - 20% minority class, 80% majority class. For all our tasks, we choose the first class to be the minority class (i.e., Cats, Men, and Bikes), and this choice is uniform across all balance levels. The number of class samples for each level of ground truth balancing is shown in Table 2. To achieve this, we fix the number of samples for the majority class and randomly undersample the minority class accordingly. For instance, to obtain a _strong imbalance_ for the Cats&Dogs task, we keep all the 3500 images of Dogs and randomly select only 875 images of Cats. The validation set and the test set are unaffected by this procedure and contain an equal number of samples for the two classes. ### Model Architectures (M-dimension) We utilize three state-of-the-art computer vision models for fine-tuning tasks: AlexNet (Krizhevsky et al., 2014), ResNet (He et al., 2015) (ResNet18 version), and VGG (Vogonyan and Zisserman, 2015) (VGG11-bn version). The training procedure follows what is described in the official PyTorch documentation.4 We train a total of 3 tasks \(\times\) 2 sources \(\times\) 4 class distribution levels \(\times\) 3 architectures = 72 models. A graphical overview of the training combinations is shown in Figure 1. After training the models on each dataset, we evaluate their baseline performance on the test set. As a metric of evaluation, we will use the _F1 score_, which is defined as the harmonic mean of precision and recall. This metric provides a balanced measure that considers both aspects of model performance, which is relevant in scenarios with possibly unbalanced dataset distributions. In particular, the F1 score is expressed as follows: \[F1=2\frac{precision\cdot recall}{precision+recall}. \tag{3}\] In Tables 3a and 3b, we show the average performance of our models at the varying of task, architecture, and class balance level for models trained on Bing and Google, respectively. All models are able to achieve good results on all balancing levels, but some differences can be noticed between the different tasks. Indeed, Men&Women appears to be the most complex task for any model, while Bikes&Motorbikes seems to be the easiest among the three.
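The following sketch illustrates how one of the 72 surrogate/victim models described above could be trained: it undersamples the minority class to a chosen balance level and fine-tunes a torchvision ResNet18 for binary classification. It is a simplified illustration of the procedure in Sections 4.2 and 4.3 (the paper follows the official PyTorch fine-tuning tutorial); the directory layout, hyperparameters, and helper names are assumptions, not the authors' exact code.

```python
import random
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader, Subset

def make_imbalanced_indices(dataset, minority_class=0, ratio=0.25, seed=0):
    """Keep all majority samples and a fraction of the minority class.
    ratio=0.25 mimics the 'strong imbalance' level (e.g., 875 vs 3500)."""
    minority = [i for i, (_, y) in enumerate(dataset.samples) if y == minority_class]
    majority = [i for i, (_, y) in enumerate(dataset.samples) if y != minority_class]
    random.Random(seed).shuffle(minority)
    return minority[: int(len(minority) * ratio)] + majority

transform = transforms.Compose([
    transforms.Resize((300, 300)),   # matches the 300x300 RGB preprocessing above
    transforms.ToTensor(),
])

# Hypothetical directory layout: data/bing/cats_dogs/train/{cat,dog}/...
full_train = datasets.ImageFolder("data/bing/cats_dogs/train", transform=transform)
train_set = Subset(full_train, make_imbalanced_indices(full_train, ratio=0.25))
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # binary head for the task

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                          # illustrative number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Repeating this loop across tasks, sources, balance levels, and architectures yields the grid of 72 models used as sources and targets in the evaluation.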
### Attacks We consider two distinct attack families: _mathematical_, if the result of an optimization process (e.g., FGSM), and _non-mathematical_, if the result of a transformation that does not take into account any machine learning model (e.g., blurring). _Mathematical Attacks._ For the mathematical attacks, we use the following popular attacks. * _BIM_ - The Basic Iterative Method adversarial attack, as proposed by Kurakin et al. in their paper (Kurakin et al., 2014), is a method for generating adversarial examples for image classifiers. The attack works by iteratively perturbing the input image and using gradient descent to optimize the perturbation such that it causes the image classifier to produce the wrong output. One of the key features of the BIM attack is that it can be used \begin{table} \begin{tabular}{l l l} \hline \hline **Balance Level** & **Minority Class** & **Majority Class** \\ \hline _Balanced_ & 3500 & 3500 \\ _Weak Imbalance_ & 2334 & 3500 \\ _Medium Imbalance_ & 1500 & 3500 \\ _Strong Imbalance_ & 875 & 3500 \\ \hline \hline \end{tabular} \end{table} Table 2. Number of samples in different levels of imbalance of the training dataset. Figure 1. Model combinations during the training phase. to generate adversarial examples that are robust to various types of transformations, such as scaling and rotation. * _DeepFool_ - Moosavi-Dezfooli et al. proposed an algorithm to compute a minimal norm adversarial perturbation for a given image in an iterative manner (Dezfooli et al., 2017). At each iteration, the algorithm adds some perturbation computed to take the image to the edge of the region confined by the decision boundaries of the classifier; after that, the perturbations are accumulated to compute the final perturbation, which is shown to be smaller in norm than the one computed by FGSM. * _FGSM_ - The Fast Gradient Sign Method is one of the first and simplest adversarial attacks, first proposed by Goodfellow in a paper from 2014 (Goodfellow, 2014). It works by computing the gradient of the model's loss with respect to the input, given the true class label of an image, and using its sign to construct the adversarial image. * _PGD_ - Madry et al. proposed Projected Gradient Descent (Goodfellow, 2014): an adversarial attack in which an attacker perturbs the input to a machine learning model in such a way as to cause the model to produce the wrong output. The attack works by iteratively calculating the gradient of the loss function with respect to the input and then using this gradient to update the input in the direction that will most likely cause the model to produce the wrong output. * _RFGSM_ - Tramer et al. proposed an upgraded version of the FGSM attack called the Random Fast Gradient Sign Method (Tramer et al., 2016). The most significant difference is that the FGSM attack generates the perturbation in a single step, while the RFGSM attack generates the perturbation in a series of "random" steps. This makes the RFGSM attack more computationally efficient, as it can often find an adversarial example faster than the FGSM attack. * _Square_ - Andriushchenko et al. (Andriushchenko et al., 2016) proposed a new black-box attack called the Square attack that does not rely on local gradient information. It is a score-based attack, meaning that, while not having access to the target model, it can query the probability distribution over the classes predicted by the classifier. * _TI-FGSM_ - The paper by Dong et al.
(Dong et al., 2017) proposed a new method for generating adversarial examples, the Translation-Invariant Fast Gradient Sign Method (TI-FGSM), which aims to evade defenses that are based on input transformations by adding a translation-invariant constraint to the iterative FGSM algorithm. The key aspect of the paper is that it achieves high transferability of adversarial examples across different models by making the adversarial perturbations translation-invariant. All mathematical attacks are implemented with _Torchattacks_ (Trockett et al., 2017), a popular Python library used in the community (Zhu et al., 2017; Zhang et al., 2017). _Non-mathematical Attacks._ The other type of attacks we consider is _non-mathematical_ attacks. These kinds of attacks do not require any gradient computation and are independent of the model or the task considered. Indeed, non-mathematical attacks have been shown to be effective in real-life ML applications (Zhu et al., 2017). We implemented these attacks using the PIL library since only simple image processing is required. More in detail, we implemented the following transformations: * _Box Blur_ - By applying this filter, it is possible to blur the image by setting each pixel to the average value of the pixels in a square box extending radius pixels in each direction. It is possible to specify a radius of arbitrary size. * _Gaussian Noise_ - A statistical noise having a probability density function equal to that of a normal distribution. It is possible to specify a \(\sigma\) value. * _Grayscale_ - To get a grayscale image, the color information from each RGB channel is removed, leaving only the luminance values. Grayscale images contain only shades of gray and no color: maximum luminance is white and zero luminance is black, so everything in between is a shade of gray. * _Negative_ - An image negative is produced by subtracting each pixel from the maximum intensity value, so for color images, colors are replaced by their complementary colors. * _Random Black Box_ - We draw a black square in a random position inside the central portion of the image to cover some crucial information. It is possible to define a size for the black square. * _Salt and Pepper_ - An image can be altered by setting a certain proportion of the pixels in the image to either black or white. The effect is similar to sprinkling white and black dots (salt and pepper) on the image. It is possible to specify the proportion of salt and pepper noise. _Parameters tuning._ All the considered attacks need parameters that regulate the intensity of the perturbations. For instance, all the mathematical attacks have the parameter \(\epsilon\), except for DeepFool, which is regulated by the "overshoot" parameter. Similarly, some non-mathematical attacks have a parameter as well: the radius for Box Blur, \(\sigma\) for the Gaussian Noise, the size of the black square for the Random Black Box, and the proportion of salt and pepper noise for Salt and Pepper.
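As a concrete illustration of how the two attack families and their intensity parameters could be instantiated with the Torchattacks and PIL libraries named above, the sketch below generates a few mathematical attacks against a surrogate model and applies a few of the non-mathematical transformations. The specific parameter values and variable names are illustrative assumptions, not the tuned values obtained from the optimization described next.

```python
import numpy as np
import torch
import torchattacks
from PIL import Image, ImageDraw, ImageFilter, ImageOps

# --- Mathematical attacks: generated against a (surrogate) model via Torchattacks ---
def mathematical_attacks(model, images, labels, eps=0.03):
    # images: float tensor in [0, 1]; labels: ground-truth class indices
    attacks = {
        "FGSM": torchattacks.FGSM(model, eps=eps),
        "BIM": torchattacks.BIM(model, eps=eps),
        "PGD": torchattacks.PGD(model, eps=eps),
        "TIFGSM": torchattacks.TIFGSM(model, eps=eps),
        "DeepFool": torchattacks.DeepFool(model),  # regulated by its overshoot parameter
    }
    return {name: atk(images, labels) for name, atk in attacks.items()}

# --- Non-mathematical attacks: plain PIL/numpy transformations, no model involved ---
def box_blur(img, radius=4):
    return img.filter(ImageFilter.BoxBlur(radius))

def grayscale(img):
    return ImageOps.grayscale(img).convert("RGB")

def negative(img):
    return ImageOps.invert(img)

def random_black_box(img, size=60, seed=0):
    rng = np.random.default_rng(seed)
    out = img.copy()
    w, h = out.size
    x0 = int(rng.integers(w // 4, 3 * w // 4 - size))
    y0 = int(rng.integers(h // 4, 3 * h // 4 - size))
    ImageDraw.Draw(out).rectangle([x0, y0, x0 + size, y0 + size], fill="black")
    return out

def salt_and_pepper(img, proportion=0.05, seed=0):
    rng = np.random.default_rng(seed)
    arr = np.array(img)
    mask = rng.random(arr.shape[:2])
    arr[mask < proportion / 2] = 0        # pepper
    arr[mask > 1 - proportion / 2] = 255  # salt
    return Image.fromarray(arr)
```

In the testbed, each intensity parameter (e.g., `eps`, `radius`, `proportion`) would then be selected through the SSIM-constrained search described next.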
In general, we identified optimal parameters \(\gamma\) \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline & \multicolumn{3}{c|}{**Bikes\&Motorbikes**} & \multicolumn{3}{c|}{**Cats\&Dogs**} & \multicolumn{3}{c}{**Men\&Women**} \\ \cline{2-10} & **A** & **R** & **V** & **A** & **R** & **V** & **A** & **R** & **V** \\ \hline **20/80** & 0.99 & 0.99 & 0.99 & 0.93 & 0.97 & 0.97 & 0.85 & 0.92 & 0.92 \\ **30/70** & 0.98 & 0.99 & 0.99 & 0.94 & 0.97 & 0.97 & 0.87 & 0.93 & 0.93 \\ **40/60** & 0.99 & 0.99 & 0.99 & 0.95 & 0.98 & 0.98 & 0.89 & 0.94 & 0.94 \\ **50/50** & 0.99 & 0.99 & 0.99 & 0.95 & 0.98 & 0.98 & 0.90 & 0.95 & 0.95 \\ \hline \end{tabular} \end{table} Table 3. Baseline evaluation of the models. **A** = **AlexNet**, **R** = **ResNet**, **V** = **VGG**. through the following optimization procedure: \[\begin{split}\gamma&=\arg\max_{s}\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\left[f(x_{i})\neq f(x_{i}^{*})\right],\\ \text{subject to }&\frac{1}{n}\sum_{i=1}^{n}SSIM(x_{i},x_{i}^{*})\geq\alpha.\end{split} \tag{4}\] In the notation, \(f\) is the model owned by the attacker and used during the optimization process, \(x^{*}\) denotes the adversarial samples derived by \(\mathcal{A}(f,x;s)\), and \(\mathcal{A}\) is the adversarial procedure with parameter \(s\). The reader might notice that the first part of the equation is nothing more than the Attack Success Rate (ASR): the higher it is, the more samples evade the classifier. The optimization is constrained by the SSIM (Structural Similarity Index Measure), a measure that, given two images, computes their similarity. \(\alpha\) is the minimum similarity we accept after perturbation, i.e., a bound on the degradation introduced by the perturbations. In our experiments, we set \(\alpha=0.4\) for all types of attacks. For all mathematical attacks except for DeepFool (i.e., attacks using \(\epsilon\) as a parameter), we test \(\epsilon\) values in the range \([0.01,0.3]\) with a step of \(0.01\), while for DeepFool overshoot was tested in the range \([10,100]\) with a step of \(1\). For non-mathematical attacks with a parameter, ranges and steps were determined individually and based on performance and perturbation. More details on the ranges for the attack parameters can be found in the attack generation script in our repository. ### Testing Methodology In Section 4.4, we presented a total of \(13\) attacks, comprising \(7\) mathematical and \(6\) non-mathematical attacks. After the searching phase for the optimal attacks' configuration (see Equation 4), we generate sets of adversarial samples containing \(300\) instances equally distributed among the classes. The images are randomly selected from the corresponding test set, which, however, is filtered in order to consider only images that the model correctly classified. In this way, we ensure that every misclassified adversarial sample counts toward the Attack Success Rate. In the remaining part of the section, we discuss our testing methodology for the adversarial samples against our models separately for mathematical and non-mathematical attacks, as these attacks rely on different approaches. _Mathematical Attacks._ Generating adversarial samples for mathematical attacks such as FGSM requires an input model to compute and generate a perturbed image. Figure 2 shows an overview of the process. For each of the seven attacks we want to test, we need to evaluate all possible combinations of the following pairs \((M_{src},M_{trg})\): * The model used to generate the adversarial sample. The source model is the surrogate in the transferability setting.
* The model against which the adversarial sample was tested. The target model is the victim's model in the transferability setting. As explained in Section 4.3, we trained \(24\) models for each task and used each of them as the \(M_{src}\) to generate a set of adversarial samples. We tested each set against \(24\) different \(M_{trg}\), resulting in \(24^{2}\times 3\) tasks \(=1728\) observations for each attack. Since we need to perform this evaluation for each of the seven mathematical attacks, we obtain a total of \(1728\times 7=12096\) observations. _Non-mathematical Attacks._ Non-mathematical attacks, instead, are generated differently since they are transformations applied to the test set of a dataset and do not rely on any model. Thus, for each non-mathematical attack, we generate a total of \(2\) sets of samples (i.e., one per dataset), and we test them on each model, obtaining a total of \(2\times 24\times 3=144\) observations. This is valid for each of the \(6\) non-mathematical attacks we consider, obtaining \(144\times 6=864\). Therefore, the total number of observations performed in our study is \(12096+864=12960\). ## 5. Results In this section, we discuss the evaluation results carried out with our experimental setup. Given the number of variables that potentially affect our results, we first evaluate the performance of state-of-the-art evasion attacks in the scenarios detailed by the DUMB attacker model (Section 5.1). We then evaluate individually the impact of the model (Section 5.2), class distribution (Section 5.3), and dataset source (Section 5.4). All the raw files from which our results are obtained can be found in the results folder in our repository. ### DUMB Evaluation In this section, we assess how adversarial attacks perform in the eight distinct cases of our proposed DUMB attacker model. We start by analyzing the results of the mathematical attacks, shown in Figure 3. In that figure, we can observe the effect of two main variables: the task and the attacks. Note that all the performances are averaged among the three DUMB dimensions. _Task._ The first outcome of the analysis highlights how transferability varies considerably across tasks. For instance, attacks poorly transfer in Bikes&Motorbikes, while they are effective in the Men&Women task. A possible explanation can be linked with the models' performance (reported in Table 3), where attacks transfer poorly when models solve the task very well: in Bikes&Motorbikes, indeed, models almost perfectly distinguish the two classes, while, on the contrary, on Men&Women they struggle. The outcome suggests malicious actors might easily transfer attacks onto models whose performance is far from perfect. This finding is concerning if we consider that many real-life tasks are challenging, and state-of-the-art performance is often well below an F1 score of \(0.90\). Observation 1: Compared with high-performing models, models with performance far from perfect appear more vulnerable to adversarial perturbations. _Attacks._ Another noticeable outcome is the superiority of TIFGSM, which outperforms all the other attacks in most cases. We recall that this is the only attack among the considered set explicitly designed for transferability purposes. The attack produces strong transferability in the Men&Women task, with an evasion rate close to \(1\) (i.e., perfect evasion) in four out of eight cases. Last, we highlight the "rectangular" shape of the TIFGSM results.
By cross-looking with the DUMB attacker model, we can see that TIFGSM, and more in general, all the considered attacks, provide better performance in cases where attackers and defenders use the same model architecture (i.e., C1, C2, C5, and C6). Conversely, much lower performance (attacks are almost unsuccessful) occurs when attackers and defenders do not share the same model architecture. **Observation 2:**_Literature proposes adversarial attacks that struggle to transfer among different architectures._ _Non-Mathematical Attacks._ A different pattern can instead be seen in the non-mathematical attacks, shown to be effective in the past by (Shen et al., 2017). For simplicity, we report in the paper only the case of Men&Women, while more details about the other tasks are available in our GitHub repository. Figure 4 shows the results. Generally, it appears that simple obfuscations are not effective on our complex models (i.e., AlexNet, ResNet, and VGG). The most effective attack is the RandomBlackBox, which, in contrast, results in the most "altered" images. While the TIFGSM generally outperforms non-mathematical attacks, this is not always true for the rest of the considered mathematical attacks. Therefore, we count how many times non-mathematical attacks outperform mathematical ones for each case of the DUMB attacker model and for each task. We applied 42 comparisons (7 mathematical \(\times\) 6 non-mathematical) for each case, for a total of 336 tests (42 \(\times\) 8 cases). Overall, non-mathematical attacks outperform mathematical ones 79, 81, and 101 times out of 336 cases for Bikes&Motorbikes, Cats&Dogs, and Men&Women, respectively. Furthermore, we analyzed if such successes are uniformly distributed or concentrated in some of the DUMB cases. The result is shown in Figure 5. The reader can observe that the higher values are found in C3, C4, C7, and C8, highlighting the fragility of mathematical attacks in cases where surrogates and victims do not share the same model architecture. **Observation 3:**_Simple obfuscations are solid offensive black-box techniques._ ### Models Impact (M-dimension) Demontis et al. (Demontis et al., 2017) observed that adversarial transferability depends on the complexity of the surrogate and victim's model. In particular, low-complexity surrogates produce stronger evasion attacks. Similarly, low-complexity victims' models are more resilient to evasions. Low-complexity models should be preferred by both attackers and defenders since, for the former, models tend to produce stable gradients that better align with victims' ones. For the latter, models tend to produce gradients of smaller size. Therefore, we now investigate if we observe similar behavior in our testbed. Due to its effectiveness, we focus on the TIFGSM attack. Figure 6 presents the analysis results by averaging the ASR among the three different datasets. We can first observe that, as expected, the highest ASR corresponds to those cases where the source model \(M_{src}\) and target model \(M_{trg}\) share the same architecture. Second, VGG is the weakest victim model for both AlexNet and ResNet. This is shown by the fact that when VGG is the target model, the ASR is the second highest for all other source models (after the case in which \(M_{src}=M_{trg}\)). Third, ResNet seems to be the most effective surrogate model. Indeed, when using it as a source model, we see that ASR values are relatively low. At the same time, it is particularly effective as a target model when attacking vulnerable architectures such as VGG.
We find such results not aligned with what was discussed by Demontis et al. (Demontis et al., 2017), and in agreement with Mao et al. (Mao et al., 2018). Consider the complexity of our models, measured in the number of parameters: 61M for AlexNet, 11M for ResNet, Figure 2. Pipeline of our testing methodology regarding mathematical attacks. and 132M for VGG. Therefore, ResNet and VGG are the lower and higher complexity models, respectively. **Observation 4**: _Simple surrogate models are not always optimal to transfer evasion attacks._ ### Class Distribution Impact (B-dimension) One of the hypotheses of our work is that attackers and defenders might have different ground-truth distributions. Therefore, we investigate how class balance levels impact the success of a transferable attack. For simplicity, we show the performance of TIFGSM for the Men&Women dataset. Figure 7 shows the results. The reader can observe an opposite behavior in the transferability between Figure 3. ASR for mathematical attacks. Each subfigure corresponds to a different task and contains two different graphs. The external one shows the performance of our attacks in each of the scenarios of our DUMB attacker model through bar charts. The internal one overviews their overall ASR through a spider chart. While with the former the individual ASR of each attack is clearer, the latter shows their overall performance and trends across the different scenarios. The definitions of the scenarios have been clarified in Table 1. minority and majority classes. In particular, attacking a minority class under a 20/80 ratio is always effective in every source condition (first column of Figure 7a). The attack becomes more difficult as we approach a balanced distribution. Conversely, it appears to be extremely complex to camouflage a majority sample as a minority one (last column of Figure 7b). This observed behavior might be extremely relevant, especially in the context of cybersecurity, where ML classifiers are applied in extremely imbalanced contexts, like malware (Zhu et al., 2017) and hate speech detection (Beng et al., 2017), making such applications weak to transferable attacks. Another important aspect to consider when the source model is trained on an imbalanced dataset is the choice of the perturbation size. As we introduced in Equation 4, we computed the global ASR for both classes for each task. However, as shown in Figure 8, the majority class tends to require more perturbation to be effective, while the minority class requires only a little. Therefore, attackers who aim to produce optimal attacks while preserving the quality of the samples as much as possible should create separate hyperparameter tuning processes for each class. ### Sources Impact (DU-dimension) Last, we investigate whether the choice of the dataset impacts the attack transferability. The data source has a non-negligible impact if we find at least one case where the choice of the source produces a varied effect on the attack outcome. For example, we can examine the strong imbalance setting (20/80) for the Cats&Dogs and Men&Women tasks. This scenario is particularly interesting to study since models typically perform well on the former task but struggle with the latter, often achieving an F1-score lower than 0.90, as previously shown in Table 3. We analyze the ASR obtained
Information is conveyed in the same way as Figure 3. Figure 5. Ratio of non-mathematical attacks outperforming mathematical attacks, grouped by DUMB attacker model cases and tasks. Figure 6. ASR at the varying of source model \(M_{src}\) and target model \(M_{trg}\). Here, the "source model" refers to the model that has been used for adversarial attack generation. In contrast, the "target model" refers to the model on which we test the adversarial samples. using the mathematical attacks by considering data sources mismatch, i.e., a surrogate trained on the Bing dataset used to attack models previously trained on the Google dataset, and vice versa. This corresponds to C5, C6, C7, and C8 of our DUMB attacker model (Table 1). Figure 9 shows the observed distributions. We can notice that for Cats&Dogs, the two curves almost overlap, while there is a partial mismatch in Men&Women. Specifically, regarding the Men&Women task, it appears that attacks directed toward models trained on the Google dataset (and thus generated from a model trained on the Bing dataset) yield better results. This behavior also reflects the baseline evaluation for the two datasets in Table 3, where on the same tasks, models trained on Google had lower F1 scores with respect to the ones trained on Bing. We statistically confirmed what was observed with the Kolmogorov-Smirnov test (two-sided, the null hypothesis is that the two distributions are equal). We reject the null hypothesis in the Cats&Dogs case with a \(p_{\text{val}}=0.01\). We, therefore, conclude that the choice of the dataset impacts the attack transferability. ## 6. Conclusion Transferring evasion attacks among different machine learning models is challenging in a real-world scenario. While the use of surrogate models has been widely studied in the field of adversarial transferability, many more variables must be considered to depict the full picture of its effectiveness. In this work, we fill such a gap by proposing the **DUMB** attacker model. This framework allows analyzing if evasion attacks fail to transfer when the training conditions of surrogate and victim models differ. This framework considers three distinct conditions: **D**ataset so**U**rce, **M**odel architecture, and the class **B**alance of the dataset. Therefore, surrogate and victim models might vary based on the combinations of these conditions, e.g., surrogate and victim models are trained on the same dataset and ground-truth distribution, but they use different architectures. We evaluated the DUMB attacker model on our novel **DUMB** testbed, consisting of 3 distinct binary computer-vision tasks, with Figure 8. Attack parameter tuning for the Cats&Dogs dataset, in the 20/80 balance level setting. Since TIFGSM and DeepFool use two different types of parameters with different ranges, we use "history" to characterize their level of perturbation. Figure 7. ASR for the minority and majority classes. Here, the "source balancing" refers to the balance level of the model that has been used for the adversarial attack generation. In contrast, the "target balancing" refers to the balance level of the model on which we test the adversarial samples. Figure 9. Probability Density Function of ASR for mismatched sources, over the 20/80 balance level. The range of possible ASR is reported on the x-axis, while the y-axis shows the probability density of each ASR. The curve represents the shape of the PDF, where the peak corresponds to the most likely success rate and the width indicates the range of success rates that are probable.
two dataset versions each (collected with Bing and Google as sources), 4 types of imbalance conditions (from balanced to highly imbalanced), and 3 state-of-the-art model architectures. By analyzing 7 well-known evasion attacks and 6 simple image transformations, we explored a total of 13K attack evaluations. _Considerations._ Our extensive evaluation unveiled aspects that were ignored in the literature or not extensively investigated, with the following repercussions: 1. The complexity of the task might have a direct impact on the success of evasion attacks' transferability. As shown in Section 5.1, models showing lower performance on the task appear less robust to adversarial attacks. Future works should better investigate the interplay between performance and robustness. 2. The above point has a direct impact on real-life machine-learning applications. In particular, such tools often show performance far from being perfect. This results in tools that are more prone to fail in the presence of adversaries. Therefore, the cybersecurity community should utilize both toy-ish and real-world tasks: with the former, researchers can gain insights about attacks, and with the latter, adapt such insights to complex scenarios. 3. In general, it appears that evasion attacks fail to transfer when the training conditions of surrogate and victim models differ. Future researchers might benefit from both the **DUMB** attacker model and the **DUMB** testbed to analyze the transferability of novel proposed attacks. 4. While the literature extensively covers the effect of model architecture, little is known about the impact of dataset source and class balancing. For the former, the data generation and labeling process might introduce biases that might impact the transferability. For the latter, many tasks are inherently imbalanced (e.g., spam/non-spam, malware/non-malware), and due to data generation processes or undersampling/oversampling strategies, it is likely that attacker and victim datasets present different ground truth distributions. 5. Targeting different classes might lead to different transferable performances. Little attention has been given to the properties of the target class we aim to attack. For instance, when considering the MNIST dataset, the choice seems arbitrary. On the contrary, in cybersecurity tasks, the class of interest is usually the malicious one (e.g., spam, malware). An important property we observed is its numerosity: minority classes of highly imbalanced datasets appear to require limited perturbations to fool (see Figure 8). Future researchers should include such a consideration since many real-life tasks are inherently highly imbalanced, especially those covered by the cybersecurity community. 6. We did not observe a model architecture that is clearly superior in acting as a surrogate model. Future researchers should better investigate the interplay between complex model architectures and their ability to generate transferable attacks. _Limitation and Future Work._ In this study, we aim to provide a systematic view of factors affecting transferability related to the training of a surrogate model. Therefore, some questions remain not fully answered and require further studies. For instance, our proposed testbed is defined by binary tasks, and our conclusions might not extend to multiclass tasks. Furthermore, our experiments rely on our novel testbed, which contains somewhat toy-ish tasks and is therefore far from real conditions.
However, our testbed allowed us to clarify different aspects of transferable attacks. Therefore, we believe the proposed testbed might be a precious resource for future researchers conducting analyses in adversarial machine learning. In particular, we believe that both the **DUMB** attacker model and testbed can be utilized to extend the analyses of attacks, for instance, from evasion to poisoning. Moreover, our work can inspire the definition of novel testbeds, considering, for instance, cybersecurity tasks such as spam and malware detection and network intrusion detection systems.
2307.02536
Postmodern Fermi Liquids
We present, in this dissertation, a pedagogical review of the formalism for Fermi liquids developed in [Delacretaz et al., arXiv:220305004] that exploits an underlying algebro-geometric structure described by the group of canonical transformations of a single particle phase space. This infinite-dimensional group governs the space of states of zero temperature Fermi liquids and thereby allows us to write down a nonlinear, bosonized action that reproduces Landau's kinetic theory in the classical limit. Upon quantizing, we obtain a systematic effective field theory as an expansion in nonlinear and higher derivative corrections suppressed by the Fermi momentum $p_F$, without the need to introduce artificial momentum scales through, e.g., decomposition of the Fermi surface into patches. We find that Fermi liquid theory can essentially be thought of as a non-trivial representation of the Lie group of canonical transformations, bringing it within the fold of effective theories in many-body physics whose structure is determined by symmetries. We survey the benefits and limitations of this geometric formalism in the context of scaling, diagrammatic calculations, scattering and interactions, coupling to background gauge fields, etc. After setting up a path to extending this formalism to include superconducting and magnetic phases, as well as applications to the problem of non-Fermi liquids, we conclude with a discussion on possible future directions for Fermi surface physics, and more broadly, the usefulness of diffeomorphism groups in condensed matter physics. Unlike [Delacretaz et al., arXiv:220305004], we present a microscopic perspective on this formalism, motivated by the closure of the algebra of bilocal fermion bilinears and the consequences of this fact for finite density states of interacting fermions.
Umang Mehta
2023-07-05T18:00:02Z
http://arxiv.org/abs/2307.02536v1
# Postmodern Fermi Liquids ###### Abstract We present, in this dissertation, a pedagogical review of the formalism for Fermi liquids developed in [1] that exploits an underlying algebro-geometric structure described by the group of canonical transformations of a single particle phase space. This infinite-dimensional group governs the space of states of zero temperature Fermi liquids and thereby allows us to write down a nonlinear, bosonized action that reproduces Landau's kinetic theory in the classical limit. Upon quantizing, we obtain a systematic effective field theory as an expansion in nonlinear and higher derivative corrections suppressed by the Fermi momentum \(p_{F}\), without the need to introduce artificial momentum scales through, e.g., decomposition of the Fermi surface into patches. We find that Fermi liquid theory can essentially be thought of as a non-trivial representation of the Lie group of canonical transformations, bringing it within the fold of effective theories in many-body physics whose structure is determined by symmetries. We survey the benefits and limitations of this geometric formalism in the context of scaling, diagrammatic calculations, scattering and interactions, coupling to background gauge fields, etc. After setting up a path to extending this formalism to include superconducting and magnetic phases, as well as applications to the problem of non-Fermi liquids, we conclude with a discussion on possible future directions for Fermi surface physics, and more broadly, the usefulness of diffeomorphism groups in condensed matter physics. Unlike [1], we present a microscopic perspective on this formalism, motivated by the closure of the algebra of bilocal fermion bilinears and the consequences of this fact for finite density states of interacting fermions. 
_To all neurodivergent people, known or unknown, among whom I finally found a sense of community._ ###### Contents
* I Introduction
* II Review and history of Fermi liquid theory
  * II.1 "Classical Fermi liquids": Landau's kinetic theory
  * II.2 "Modern Fermi liquids": Renormalization group
  * II.3 "Contemporary Fermi liquids": Patch theory and traditional bosonization
    * Fermionic patch theory
* VI A road to perturbative non-Fermi liquids
  * A Scaling in non-Fermi liquids
* VII Spin and BCS extensions
  * A Spinful Fermi surfaces
  * B Charged fermion bilinears
* VIII Conclusion and Outlook
* A Coadjoint orbit method -- mathematical details
* B Luttinger liquids from the coadjoint orbit method
  * 1 Chiral anomaly as a linear approximation
###### Acknowledgements. It was the summer of 2008, about a month before the beginning of the school year, and I had just got back home with my backpack full of new textbooks for class 9. The nerd that I was (and still am), all I could think about on the way back home was the excitement of getting to open and read the books that I had just bought; the curious side of me was just excited to absorb all the knowledge I could from them while the competitive side was daydreaming about having preemptive answers to all the questions that my teachers would later ask in class. Having already been mesmerized by science and mathematics from the year before, my hands were drawn to the physics textbook, since it had the best colour palette between itself, chemistry, biology, and maths. I picked up the book, opened the cover, and, energized by that new-book-smell, flipped the pages right past the first chapter on measurement and experimentation to the chapters on linear motion and Newton's laws. In no time I reached the section on the second law of motion, and noticed a footnote that described the inaccuracy of the linear relationship between momentum and velocity at speeds close to the speed of light. The words 'special theory of relativity' were mentioned and before I knew it, two whole years had passed with me having read every online resource I could possibly find about special and general relativity and non-Euclidean geometry, convinced that quantum mechanics was not real because "Einstein didn't believe in it". It was in that initial spark of interest that I knew that I wanted to pursue a career in theoretical physics, as unorthodox as something like that would be in the culture I grew up in. I was fortunate enough to have found abundant support for my unusual career choice from my parents Rita and Bharat Mehta, and late grandparents Jaya and Kantilal Mehta, for whom my education always took highest priority. I shall forever be grateful to them for providing me the environment and encouragement to nurture my passion for physics. My father, in particular, has made it a point to read every single paper that I have published, even when it makes no sense to him, and vehemently insists that I send each draft to him for his growing collection, and I will always be glad that my work will, at the very least, be read and appreciated by one person who I admire. I found my first mentor in Shiraz Minwalla at the Tata Institute for Fundamental Research (TIFR), whose wise words I will always carry with me.
He instilled in me the courage I needed to not shy away from difficult problems and even enjoy the often long and tedious calculations that accompany them, to the point where I now get excited at the prospect of taking on such challenges. Shiraz's advice was an important contributor to overcoming the many instances of impostor syndrome that I experienced upon being thrown into the melting pot of all the tremendously talented individuals that I encountered throughout my Ph.D. But most importantly, it was on his suggestion that I found my advisor. I couldn't have asked for a better advisor than Dam Thanh Son. I switched from high energy to condensed matter physics upon joining the University of Chicago, and if it was not for his guidance, I would have had a much harder time with the transition. In him I found the perfect mentor whose advising style fit with my learning style like pieces of a jigsaw puzzle. Son's visionary foresight is what ultimately led to the content in the rest of this thesis and I can only hope to be able to replicate that in the future. I owe a lot to my unofficial mentor, Luca V. Delacretaz, from whom I learned various lessons ranging from the most benign yet consequential tricks to make Mathematica compute integrals when it is being stubborn, to the valuable philosophy behind effective field theory. Luca is and always will be a role model to me for my career and mentorship goals. My Ph.D. experience would not have been half as incredible as it was if not for the extremely friendly and welcoming environment that my office-mates cultivated. I'm grateful to Alex Bogatskiy, Harvey Hsiao, Kyle Kawagoe, Carolyn Zhang, Yuhan Liu, Yi-Hsien Du, Ruchira Mishra, Ege Eren and Davi Costa for all the wonderful times we had together in our little corner office, and for all the insightful discussions that helped me grow as a physicist. I also apologize to them for likely being one of the most disruptive and distracting office-mates that they have encountered. Everyone at the Kadanoff Center for Theoretical Physics has been pleasantly affable and never once did I feel unwelcome among the professors, postdocs and other graduate students. My thesis committee members, Michael Levin, Jeffrey Harvey, and Woowon Kang, were instrumental in making me think deeply about my work and understand it from various different perspectives. The Center has only become more social over the last six years and as much as I'm looking forward to the next step in my career, it saddens me to have to leave behind my wonderful colleagues and the University of Chicago. Lastly, and perhaps most importantly, I am deeply indebted to my found family, Timothy Hoffman, Claire Baum, and Alex Bogatskiy (and Bowie Hoffman - Tim's adorable little pupper), with whom I developed a bond so strong I cannot imagine any force that can break it. Between the Ph.D., the pandemic, and personal setbacks, the last few years have been tumultuous and my friends stood by me with all the love and support for which I was often too afraid to ask. Even on our various rock-hounding vacations we couldn't help but discuss physics and I treasure the precious memories we made along the way. It was thanks to their support that I persisted through the most prominent milestone of my life - the day that I discovered that I am neurodivergent. A part of me always knew that I was different but until then I did not have the resources or the labels that I needed to look at it under a positive light.
The online neurodivergent community played a major role in this shift of perspective and I am eternally grateful to have found the community and support network built by empathetic neurodivergent strangers who likely will never truly see the scale of the fruits of their efforts. I hope to pay it forward by continuing to advocate for my fellow neurodivergent people. With this discovery, my life came full circle to the realization that theoretical physics has always been a so-called "special interest" for me - a common characteristic of the neurodivergent mind - and will continue to hold that status for the foreseeable future. I owe my passion for physics to my neurodivergence and therefore also a large part of my happiness.

## I Introduction

From metals to neutron stars, superconductors to nuclear plasmas, phases of matter described by Fermi surfaces and their instabilities are ubiquitous. The question "_What are the different possible ways that interacting fermions can behave at macroscopic scales?_" is as easy to pose as it is difficult to answer. The possibilities are endless and ever-growing and stand tall and sturdy as a counterpoint to the traditional reductionist-constructivist hypothesis in physics [2]. To even begin to answer this question, a broad organizing principle is required. One such organizing principle is obtained by counting the number of emergent low energy degrees of freedom that govern the behaviour of such systems. The notion of an energy gap helps categorize many-body systems into three possible classes: gapped, gapless and 'very gapless'. _Gapped_ systems do not have any propagating, low energy degrees of freedom. The degrees of freedom here are instead topological in nature and are described by topological quantum field theories1. _Gapless_ systems have a finite number of propagating low energy degrees of freedom. These often describe critical points in phase diagrams or boundaries of topologically nontrivial gapped phases. Footnote 1: A new class of these that are not described by conventional topological field theories have recently been discovered and are collectively called 'fracton models' [3; 4; 5]. For a review, see [6; 7]. _'Very gapless'_ systems on the other hand have infinitely many low energy degrees of freedom. In particular, the density of states at zero energy is finite. Systems with extended Fermi surfaces are the canonical example of such phases, where low energy excitations can be hosted anywhere on the Fermi surface. Within the realm of Fermi surface physics, a classification of the possible phases of matter is still elusive, largely due to the many possible instabilities that Fermi surfaces can have. One suitable starting point for getting a picture of the various possibilities is to take a free Fermi gas and turn on interactions between the fermions, allowing them to scatter off of each other. The interactions between fermions can then be put into one of two boxes: short range and long range. Short range interactions are usually mediated by gapped modes. At low energies these can effectively be thought of as point-like interactions between fermions, with corrections to this description that do not significantly alter the physical picture. This is the realm of Fermi liquid theory (and its instabilities), one of the pillars of modern condensed matter physics, first developed by Landau [8] in a classic 1956 paper.
Landau's key insight was that short range interactions in most situations do not dramatically alter the spectrum of excitations of a free Fermi gas. The excitations of the interacting theory are then very similar to free fermions, and thus the notion of a quasiparticle was born. Landau's Fermi Liquid Theory (LFLT), the _classical_ formalism for describing Fermi liquids, can perhaps be called the first example of an _effective theory_ - a low energy description of a system that is insouciant to microscopic details, whose effects are captured by a comparatively small number of parameters2. Footnote 2: I thank Luca V. Delacretaz for this succinct description of effective theories. Despite being rather successful at describing the physics of dense, interacting fermions, LFLT stood out among a plethora of other effective descriptions in many body physics as one of the few theories that was not formulated in the language of the renormalization group (RG) and was classical3 in nature, being described by an equation of motion rather than an action or a Hamiltonian. Progress along these lines was made only in 1990 in [9], which was then formalized in [10; 11] into the _modern_ formalism. Footnote 3: Pun intended. The _effective field theory_ (EFT) obtained from this analysis can only be simplified at the cost of losing locality in space [12; 13; 14], so it is not a genuine EFT in the sense that the tower of irrelevant corrections to the scale invariant fixed point cannot be systematically listed, for example through an expansion in spatial and temporal derivatives. An alternate route to a local EFT for Fermi liquids was inspired by the idea of bosonization and pioneered in [15; 16; 17]. But this approach also suffers from the same issue, in that it is unclear how one would construct and classify irrelevant corrections to the scale invariant fixed point. These _contemporary_ formalisms are hence also incomplete and in need of further refinement. Long range interactions, on the other hand, are often mediated by gapless degrees of freedom which cannot be ignored (i.e., integrated out) at any energy scale, and it becomes important to keep track of the additional gapless modes alongside the excitations of the Fermi surface. This can alter the physics of the Fermi surface in ways that are hard to predict, since such interactions often tend to be strong. A celebrated, now solved example of this is the electron-phonon problem [18; 19], which accounts for the resistivity and superconducting instability of conventional metals4. Footnote 4: For recent work on the breakdown of the Migdal-Eliashberg theory of electron-phonon interactions, see [20; 21]. A more violent example of such an interaction is presented in a class of phases dubbed _non-Fermi liquids_ (NFL) (see, e.g., [22] and references therein for a review). The gapless mode that couples to the Fermi surface in these examples is usually either the critical fluctuation of an order parameter or a gauge field in appropriate spatial dimensions. Such interactions trigger an instability of the Fermi surface and the fate of the RG flow is one of the biggest open problems in condensed matter physics. The list of unanswered questions ranges from describing the phase of the end point of the RG flow (metallic NFL or Mott insulator or unconventional superconductor) to developing effective descriptions of the various possibilities and understanding how they compete with one another.
Answers to these questions are crucial from an applied physics perspective since the most common occurrence of NFL physics is in high-temperature superconductivity [23; 24] observed in various different layered materials such as cuprates. In many of these materials, the superconducting dome hides a quantum critical point where the metal undergoes a magnetic phase transition, the order parameter fluctuations of which couple to the Fermi surface and drive the instability to a superconductor. The ultimate goal for NFL physics would be to understand the mechanism that causes high temperature superconductivity in order to be able to engineer materials which could enhance this mechanism and raise the critical temperature of the superconducting phase to larger values, possibly even to room temperature. From a theoretical standpoint, Fermi and non-Fermi liquids provide a unique playground to explore unconventional RG flows. Almost all tractable RG flows in physics are between two scale invariant fixed points that have no inherent scales. Fermi and non-Fermi liquids, however, enjoy scale invariance despite the presence of an intrinsic scale - the Fermi momentum \(p_{F}\) - and understanding the RG flow from one to the other hence necessarily requires a broadening of the notion of RG as well as that of a 'scale'. Unconventional RG flows have been gaining interest across various disciplines ranging from the study of fractonic and exotic theories [25; 26; 27; 28; 29] to machine learning [30; 31] and even information theory and neuroscience [32; 33], and it is likely that Fermi surface physics can serve as a useful launchpad for generalizing the notion of RG beyond its rigid framework and conventional metanarrative. A fundamental bottleneck to understanding the physics of non-Fermi liquids is the lack of an EFT description for Fermi liquids. Since the scaling behaviour of an NFL can differ dramatically from that of a Fermi liquid, irrelevant corrections to any effective theory of a Fermi liquid can have important consequences for the NFL. A classification of irrelevant corrections to Fermi liquid theory with definite scaling properties, which is missing from the literature so far, would thus hugely benefit the search for an effective description for NFLs. This is precisely the aim of the _postmodern_ formalism developed in [1] and expounded upon in this thesis. We find that LFLT is secretly governed by the geometry of a rather large Lie group - that of canonical transformations of a single-particle phase space. This constrains the structure of the effective theory for Fermi liquids rigidly enough to be able to construct higher order corrections to the contemporary approaches as well as classify their scaling behaviour. The geometric structure underlying the postmodern formalism also allows us to systematically identify and impose symmetries as well as couple to gauge fields. Such diffeomorphism groups are not only important for Fermi liquid theory, but also present themselves as a useful tool across other disciplines in condensed matter physics, such as quantum Hall states, lattices of charged monopoles or superfluid vortices and even skyrmions in ferromagnets [34; 35], suggesting that diffeomorphism groups can be used broadly to understand and constrain the properties of various many-body phases. The rest of this dissertation is organized as follows: in section II we review the various historic approaches to Fermi liquid theory and comment on the benefits and drawbacks of each of them.
In section III we summarize the postmodern formalism and provide an overview that is stripped of most technical details for simplicity. In section IV we develop the Hamiltonian formalism for Fermi liquids, which is then turned into an action formalism in section V. Section V also presents how this action encodes spacetime, gauge, and emergent symmetries, as well as how it simplifies the calculation of correlation functions in Fermi liquids. Section VI then explores how the postmodern formalism can be used as a stepping stone towards perturbative NFLs. In section VII we then switch gears to present different possible generalizations of the postmodern formalism that account for internal symmetries, conventional superconductivity, and large momentum processes. Finally, we conclude in section VIII with an outlook on the various potential applications of the postmodern formalism.

## II Review and history of Fermi liquid theory

We begin by reviewing the various approaches to describing Fermi liquids that have been developed over the last century. This discussion is by no means exhaustive, and we will defer to relevant references for more details.

### "Classical Fermi liquids": Landau's kinetic theory

The very first description for Fermi liquids was proposed by Landau in the form of a kinetic equation. Consider first a gas of non-interacting fermions. Owing to Pauli's exclusion principle, its ground state at zero temperature is described by an occupation number function in momentum space \(f_{0}({\bf p})=\Theta(\epsilon_{F}-\epsilon({\bf p}))\) that takes values 1 or 0. \(\epsilon_{F}\) is the Fermi energy and \(\epsilon({\bf p})\) is the dispersion relation for a single fermion. The solution to the equation, \[\epsilon({\bf p})=\epsilon_{F}\,, \tag{1}\] defines the Fermi surface at \[|{\bf p}|=p_{F}(\theta)\,. \tag{2}\] If the dispersion relation is invariant under rotations, the Fermi momentum \(p_{F}\) is a constant independent of the angles \(\theta\) in momentum space. The dynamics of this system is described by a mesoscopic5 one-particle distribution function \(f(t,{\bf x},{\bf p})=f_{0}({\bf p})+\delta f(t,{\bf x},{\bf p})\) that obeys the collisionless Boltzmann equation: \[\partial_{t}f+\nabla_{\bf p}\epsilon({\bf p})\cdot\nabla_{\bf x}f+{\bf F}_{\rm ext}\cdot\nabla_{\bf p}f=0\,, \tag{3}\] where \({\bf F}_{\rm ext}\) is the external force applied to the free Fermi gas. The dynamics of the free Fermi gas are hence entirely captured by the dispersion relation.

Figure 1: Hokusai's rendition of a propagating mode in kinetic theory.

For an interacting Fermi liquid, however, the occupation number at every momentum is not a well-defined quantum number, and we cannot characterize its dynamics using the distribution function. Landau's argument to work around this issue was the following: suppose we start with the free Fermi gas and turn on interactions adiabatically. Thanks to the Pauli exclusion principle, the available phase space for the fermions to scatter to is significantly smaller the closer they are to the Fermi surface initially. The low energy (\(E\ll\epsilon_{F}\)) part of the interacting many-body spectrum should be continuously deformable to the spectrum of the free theory.
Since the spectrum of the free Fermi gas can be constructed from the building block of a single fermion placed outside but close to the Fermi surface (or a single hole inside), this building block should persist as the interactions are adiabatically turned on and also exist in some "dressed" form in the low energy spectrum of the interacting Fermi liquid. The remnant of this building block in the interacting theory is what we call a _quasiparticle_. In situations where this argument holds, we should have an effective single-particle description for the dynamics of interacting Fermi liquids, analogous to the collisionless Boltzmann equation for free fermions. In fact, Fermi liquids are defined retroactively as fermionic phases of matter where this argument holds. The degree of freedom describing the quasiparticle is then also a distribution function: \[f(t,{\bf x},{\bf p})=f_{0}({\bf p})+\delta f(t,{\bf x},{\bf p})\,. \tag{4}\] However, since the quasiparticle only exists as part of the spectrum for momenta close to the Fermi surface, the distribution \(f\) and the fluctuation \(\delta f\) are only well defined in a narrow region \(|{\bf p}|-p_{F}\ll p_{F}\). All that we need in order to describe the low energy dynamics of the interacting Fermi liquid is a dispersion relation \(\epsilon_{\rm qp}\) for the quasiparticle. This is phenomenologically constructed as follows: \[\epsilon_{\rm qp}({\bf x},{\bf p})=\epsilon({\bf p})+\int\frac{d^{d}p^{\prime }}{(2\pi)^{d}}F({\bf p},{\bf p}^{\prime})\delta f({\bf x},{\bf p}^{\prime})\,, \tag{5}\] where \(\epsilon({\bf p})\) is the free fermion dispersion relation, and \(F({\bf p},{\bf p}^{\prime})\) is a phenomenological function that characterizes the interaction contribution to the energy of the quasiparticle at \({\bf p}\) due to quasiparticles at \({\bf p}^{\prime}\). Note that the interaction term in the quasiparticle energy is local in space, which is due to the assumption that any interaction between the quasiparticles is short-ranged. At the risk of being pedantic, we emphasize again that the quasiparticle energy, the interaction function, and the distribution are well-defined only in a small neighbourhood of the Fermi surface. In other words the \(\mathbf{p}\) derivatives of all these quantities are only well-defined at the Fermi surface and constitute the various parameters and degrees of freedom of the effective theory. We can now postulate a collisionless Boltzmann equation that describes the dynamics of the interacting Fermi liquid: \[\partial_{t}f+\nabla_{\mathbf{p}}\epsilon_{\mathrm{qp}}[f]\cdot\nabla_{ \mathbf{x}}f-(\nabla_{\mathbf{x}}\epsilon_{\mathrm{qp}}[f]-\mathbf{F}_{ \mathrm{ext}})\cdot\nabla_{\mathbf{p}}f=0\,. \tag{2.6}\] We will refer to this equation as _Landau's kinetic equation_. One crucial difference between the interacting Fermi liquid and the free Fermi gas is that equation (2.6) is nonlinear in \(\delta f\), while the collisionless Boltzmann equation is linear. The nonlinearity comes from the dependence of the quasiparticle energy on the distribution. This also modifies the dynamics at the linear level, since the interaction results in internal forces \(\nabla_{\mathbf{x}}\epsilon_{\mathrm{qp}}\) acting on the quasiparticles in addition to any external forces. 
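Since the propagating collective modes discussed below all follow from the linear response of the Fermi surface, it is worth spelling out the linearized form of equation (2.6); the following is a sketch of the standard steps, assuming a homogeneous ground state \(f_{0}({\bf p})=\Theta(\epsilon_{F}-\epsilon({\bf p}))\), no external force, and working to first order in \(\delta f\). Using \(\nabla_{\mathbf{p}}f_{0}=-\mathbf{v}_{\mathbf{p}}\,\delta(\epsilon(\mathbf{p})-\epsilon_{F})\) with \(\mathbf{v}_{\mathbf{p}}=\nabla_{\mathbf{p}}\epsilon(\mathbf{p})\), the linearized kinetic equation reads \[\partial_{t}\delta f+\mathbf{v}_{\mathbf{p}}\cdot\nabla_{\mathbf{x}}\delta f+\delta(\epsilon(\mathbf{p})-\epsilon_{F})\,\mathbf{v}_{\mathbf{p}}\cdot\nabla_{\mathbf{x}}\!\int\frac{d^{d}p^{\prime}}{(2\pi)^{d}}F(\mathbf{p},\mathbf{p}^{\prime})\,\delta f(t,\mathbf{x},\mathbf{p}^{\prime})=0\,,\] which shows explicitly that the interaction enters the linear dynamics only through an internal force term supported on the Fermi surface.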
Since the interaction function \(F(\mathbf{p},\mathbf{p}^{\prime})\) is well-defined only near the Fermi surface, one often assumes that it only depends on two points on the Fermi surface at the angles \(\theta,\theta^{\prime}\), and an angular expansion of the interaction function defines the so-called Landau parameters, \[F(\theta,\theta^{\prime})\sim\sum_{l}F_{l}P_{l}^{(d)}(\theta,\theta^{\prime})\,, \tag{2.7}\] where \(P_{l}^{(d)}(\theta,\theta^{\prime})\) form a basis of functions in \(d\) dimensions that transform covariantly under the symmetries of the Fermi surface, and \(l\) is a label for the representations of those symmetries. For example, for a spherical Fermi surface \(l=0,1,2,\ldots\) is an 'angular momentum' index, and the basis functions are cosines in \(d=2\) and Legendre polynomials of cosines in \(d=3\). From Landau's kinetic equation we can calculate a plethora of physical quantities, from thermodynamic properties to correlation functions, in terms of the Landau parameters which encode the microscopic interactions. In order to calculate correlation functions for, e.g., the particle number density and current, we can couple the theory to background electromagnetic fields through the Lorentz force \(\mathbf{F}_{\mathrm{ext}}=\mathbf{E}+\mathbf{v}\times\mathbf{B}\). One finds stability conditions for the theory as lower bounds on \(F_{l}\) which, when violated, result in Pomeranchuk instabilities. For certain ranges of the Landau parameters, Fermi liquids also exhibit a collective excitation known as zero sound that propagates faster than the Fermi velocity \(v_{F}=\epsilon^{\prime}(p_{F})\) and is hence distinguishable from the particle-hole continuum \(\omega\leq v_{F}|\mathbf{q}|\) (figure 1). The specific calculations that result in these various properties and more can be found, for example, in [36; 37]. While LFLT describes many aspects of interacting Fermi liquids quite well, it has various drawbacks. Firstly, it is unclear how such a theory would emerge from a microscopic model. Since the kinetic equation is written down 'by hand', it is not even clear when one should expect a microscopic model of interacting fermions to be described by LFLT. Secondly, being an equation-of-motion based description, LFLT is in effect a classical theory, with the only source of 'quantumness' being Pauli exclusion and the Fermi-Dirac distribution that gives the ground state \(f_{0}\) of the theory. In practice this means that the theory is blind to subleading corrections to physical quantities such as correlation functions and thermodynamic properties. These drawbacks would be at least partially, if not completely, remedied by a field theoretic description - one that is amenable to the renormalization group (RG), unlike LFLT.
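As a small illustration of how the Landau parameters control physical observables, the sketch below solves the textbook zero sound condition for a rotationally invariant Fermi surface in \(d=3\) when only \(F_{0}\) is retained, namely \(\tfrac{s}{2}\ln\tfrac{s+1}{s-1}-1=\tfrac{1}{F_{0}}\) with \(s=\omega/(v_{F}|\mathbf{q}|)\). The function names and the choice of \(F_{0}\) values are illustrative; this is a minimal numerical check rather than anything taken from the text.

```python
# Minimal sketch: zero sound speed u0 = s * vF from the l = 0 Landau parameter in d = 3.
# Assumes a spherical Fermi surface and F_l = 0 for l > 0 (standard textbook setup).
import numpy as np
from scipy.optimize import brentq

def phi(s):
    """(s/2) ln[(s+1)/(s-1)] - 1, whose value at the root equals 1/F0."""
    return 0.5 * s * np.log((s + 1.0) / (s - 1.0)) - 1.0

def zero_sound_s(F0):
    """Solve phi(s) = 1/F0 for s = omega / (vF q) > 1."""
    return brentq(lambda s: phi(s) - 1.0 / F0, 1.0 + 1e-12, 1e6)

for F0 in (0.5, 1.0, 5.0, 20.0):
    s = zero_sound_s(F0)
    # For large F0 the mode approaches s ~ sqrt(F0/3).
    print(f"F0 = {F0:5.1f}:  s = {s:.4f}   (sqrt(F0/3) = {np.sqrt(F0/3.0):.4f})")
```

For any repulsive \(F_{0}>0\) the root lies above the particle-hole continuum, \(s>1\), consistent with the statement above that zero sound propagates faster than \(v_{F}\).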
### "Modern Fermi liquids": Renormalization group

To understand the scaling behaviour of interacting Fermi liquids, we need to pick an RG scheme. The prototypical RG scheme most commonly used in physics, wherein we rescale length to be larger and larger, or equivalently rescale momenta to 0, also shrinks the Fermi surface down to a point! This scheme cannot possibly give physically relevant results since the Fermi surface is an experimentally measurable quantity. We hence need to pick a new scaling scheme.6 Footnote 6: It is important to note that in most commonly studied systems in physics such as quantum or statistical field theories, the symmetries of the system uniquely prescribe the RG scheme that can extract universal information from it. Here, however, we encounter a system where this is not immediately obvious, so we need to look for other identifiers for the 'correct' prescription.

Figure 2: Van Gogh's visualization of scaling towards a (rectangular) Fermi surface.

The most natural RG scheme is one where momenta are rescaled towards the Fermi surface (figure 2). This scheme was introduced in [9; 10; 11] and is commonly referred to as 'Shankar-Polchinski' RG, after the physicists who independently formalized it. In the spirit of effective field theory, we first identify the low energy degrees of freedom. LFLT tells us that these are fermionic quasiparticles. We define an operator \(\psi^{\dagger}(\mathbf{p})\) that creates a quasiparticle with momentum \(\mathbf{p}\). The annihilation operator \(\psi(\mathbf{p})\) creates a hole in the Fermi sea at the point \(-\mathbf{p}\), so that the net momentum of the state with a single hole is \(+\mathbf{p}\). The free action is given by \[\int\frac{dtd^{d}p}{(2\pi)^{d}}\psi^{\dagger}(\mathbf{p})\left[i\partial_{t}-\left(\epsilon(\mathbf{p})-\epsilon_{F}\right)\right]\psi(-\mathbf{p})\,. \tag{2.8}\] Each point \(\mathbf{p}\) in momentum space can be written as a sum of a vector \(\mathbf{p}_{F}\) on the Fermi surface and another vector \(\mathbf{k}\) orthogonal to the Fermi surface at \(\mathbf{p}_{F}\): \[\mathbf{p}=\mathbf{p}_{F}+\mathbf{k}\,,\qquad d^{d}p=d^{d-1}p_{F}\ dk\,, \tag{2.9}\] where \(d^{d-1}p_{F}\) is a measure for integrating over the Fermi surface. In our RG scheme, \(\mathbf{p}_{F}\) remain invariant under scaling, while \(\mathbf{k}\) get rescaled by a factor of \(s\lesssim 1\) to \(s\mathbf{k}\). The dispersion can be expanded to leading order so that \[\epsilon(\mathbf{p})-\epsilon_{F}=|\mathbf{k}||\mathbf{v}_{F}(\mathbf{p}_{F})|+\mathcal{O}(k^{2})\,, \tag{2.10}\] and marginality of the free action requires \[[\partial_{t}]=[\mathbf{k}]\,,\qquad[\psi]=-\frac{1}{2}\,. \tag{2.11}\] We then write down all possible terms allowed by symmetries and analyze their scaling behaviour, both at tree level and at loop level. The leading nontrivial term is a quartic interaction that enables nontrivial \(2\to 2\) scattering processes: \[\int_{t}\int_{\mathbf{p}_{1}\mathbf{p}_{2}\mathbf{p}_{3}\mathbf{p}_{4}}V(\mathbf{p}_{F1},\mathbf{p}_{F2},\mathbf{p}_{F3},\mathbf{p}_{F4})\psi^{\dagger}(\mathbf{p}_{1})\psi(\mathbf{p}_{2})\psi^{\dagger}(\mathbf{p}_{3})\psi(\mathbf{p}_{4})\delta(\mathbf{p}_{1}+\mathbf{p}_{2}+\mathbf{p}_{3}+\mathbf{p}_{4})\,. \tag{2.12}\] Immediately, we notice two possibilities for the scaling of the momentum conserving delta function. If the corresponding Fermi momenta \(\mathbf{p}_{Fi}\) sum to zero, the delta function scales non-trivially under our RG scheme, while if they do not, the delta function is (approximately) invariant under the scale transformation. For configurations where \(\sum_{i}\mathbf{p}_{Fi}\neq 0\), we find that the quartic term is strictly irrelevant and hence does not change the scale invariant fixed point. For configurations with \(\sum_{i}\mathbf{p}_{Fi}=0\) on the other hand, the quartic term is marginal. All that remains is to find the configurations for which the sum vanishes, and to check whether loop corrections change the scaling behaviour of the terms corresponding to these configurations.
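It may help to record the tree-level power counting behind these statements; this is a sketch using the dimensions of equation (2.11), under which \([dt]=-1\), \([d^{d-1}p_{F}]=0\), \([dk]=1\) and \([\psi]=-\tfrac{1}{2}\). The quartic coupling of equation (2.12) then has dimension \[[V]=-\Big([dt]+4\,[d^{d-1}p_{F}\,dk]+4\,[\psi]+[\delta^{d}(\textstyle\sum_{i}\mathbf{p}_{i})]\Big)=-\Big(1+[\delta^{d}]\Big)\,.\] For a generic configuration with \(\sum_{i}\mathbf{p}_{Fi}\neq 0\) the delta function is saturated by the Fermi momenta and does not scale, so \([\delta^{d}]=0\) and \([V]=-1\): the coupling is irrelevant. When \(\sum_{i}\mathbf{p}_{Fi}=0\), only the momentum components normal to the Fermi surface are constrained, \([\delta^{d}]\simeq[\delta(\textstyle\sum_{i}k_{i})]=-1\), and \([V]=0\): the coupling is marginal at tree level.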
Consider for instance \(d=2\) with a circular Fermi surface. There are two distinct classes of configurations with \(\sum\mathbf{p}_{F}=0\): \[(\mathbf{p}_{F2}=-\mathbf{p}_{F1},\ \mathbf{p}_{F4}=-\mathbf{p}_{F3})\,;\qquad\left(\mathbf{p}_{F3}=-\mathbf{p}_{F1},\ \mathbf{p}_{F4}=-\mathbf{p}_{F2}\right). \tag{2.13}\] The solution with \(\mathbf{p}_{F4}=-\mathbf{p}_{F1}\) is just the first solution with the hole momenta exchanged. The first class of solutions characterizes forward scattering, i.e., incoming particles leave with nearly the same or exchanged momenta. These correspond to particle-hole pairs with a small net momentum, such as the configuration in figure 3(a). This class of configurations is hence often called the 'particle-hole channel'. The form factor \(F(\mathbf{p}_{F1},\mathbf{p}_{F3})=V(\mathbf{p}_{F1},-\mathbf{p}_{F1},\mathbf{p}_{F3},-\mathbf{p}_{F3})\) is the corresponding interaction function. The second class of solutions has the two particles, as well as the two holes, aligned at antipodal points on the Fermi surface, with an arbitrary angle between them, for instance in figure 3(b). This configuration corresponds to the 'Bardeen-Cooper-Schrieffer (BCS) channel'. The interaction form factor \(g(\mathbf{p}_{F1},\mathbf{p}_{F2})=V(\mathbf{p}_{F1},\mathbf{p}_{F2},-\mathbf{p}_{F1},-\mathbf{p}_{F2})\) for this is independent of the forward scattering interaction, except in one special configuration with \(\mathbf{p}_{F3}=\mathbf{p}_{F2}=-\mathbf{p}_{F1}\) which imposes a constraint \(F(\mathbf{p}_{F},-\mathbf{p}_{F})=g(\mathbf{p}_{F},-\mathbf{p}_{F})\).

Figure 3: Scattering configurations for marginal interactions at tree level.

The marginal quartic terms can then be written schematically as \[\int_{\mathbf{p}_{1}\mathbf{p}_{3}}F(\mathbf{p}_{F1},\mathbf{p}_{F3})[\psi^{\dagger}\psi\psi^{\dagger}\psi]_{\text{ph}}(\mathbf{p}_{1},\mathbf{p}_{3})+\int_{\mathbf{p}_{1}\mathbf{p}_{2}}g(\mathbf{p}_{F1},\mathbf{p}_{F2})[\psi^{\dagger}\psi\psi^{\dagger}\psi]_{\text{BCS}}(\mathbf{p}_{1},\mathbf{p}_{2})\,. \tag{2.14}\] Both interactions are marginal at tree level, but a one-loop calculation shows that while forward scattering remains marginal, the BCS interaction becomes relevant if the coupling is attractive and irrelevant if the coupling is repulsive. Hence, attractive couplings in the BCS channel trigger a superconducting instability that destroys the Fermi surface. The forward scattering interaction is just the interaction function in LFLT, but the BCS coupling is one to which LFLT is blind.
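To make the one-loop statement concrete, the toy sketch below integrates the schematic flow \(dg/d\ell=-cg^{2}\) for a single pairing channel, with the positive channel-dependent constant \(c\) set to one, \(\ell=\ln(\Lambda/E)\), and the convention that \(g<0\) is attractive; this is an illustration of marginal relevance/irrelevance, not a calculation taken from the text.

```python
# Toy sketch of the one-loop BCS flow dg/dl = -c g^2 in a single pairing channel (c = 1).
import numpy as np

def g_of_l(g0, l):
    """Closed-form solution g(l) = g0 / (1 + g0*l) of dg/dl = -g^2."""
    return g0 / (1.0 + g0 * l)

l_rep = np.linspace(0.0, 10.0, 6)   # repulsive coupling: slow logarithmic decay
l_att = np.linspace(0.0, 3.0, 6)    # attractive coupling: stay below the pole l* = 1/|g0|
print("repulsive  g0 = +0.3:", np.round(g_of_l(+0.3, l_rep), 4))
print("attractive g0 = -0.3:", np.round(g_of_l(-0.3, l_att), 4))

# An attractive coupling diverges at l* = 1/|g0|, i.e. at an emergent energy scale
# E* ~ Lambda * exp(-1/|g0|), the familiar non-perturbative BCS form.
print("l* =", 1.0 / 0.3, "  E*/Lambda ~", np.exp(-1.0 / 0.3))
```

In a more complete treatment the BCS coupling is decomposed into angular momentum channels \(g_{l}\), each of which flows independently in this way, so any attractive channel eventually destabilizes the Fermi surface.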
The inclusion of the pairing instability is the most important advantage of the RG approach over LFLT, and exemplifies the power of effective field theory. However, this approach still has its limitations. Ideally in an EFT, any isolated term that can be written from symmetry requirements has a fixed scaling dimension which can be calculated simply by adding the scaling dimensions of its constituents -- a principle known as power counting. But as we saw above, understanding the scaling properties of the quartic term was a significantly more complicated task than that, and becomes even more complicated in higher dimensions where the number of possible configurations with \(\sum\mathbf{p}_{F}=0\) is even larger. This procedure becomes all the more gruesome for Fermi surfaces of more complicated geometry such as those for conduction electrons in metals. In general, any given term in this EFT that can be written from invariance under symmetries does not have a fixed scaling dimension and additional work needs to be done to decompose it into a sum of terms that do. Even then one can find constraints relating one term to another in special cases, such as the configuration \(\mathbf{p}_{F1}=-\mathbf{p}_{F2}=-\mathbf{p}_{F3}=\mathbf{p}_{F4}\) where the exactly marginal forward scattering coupling is identical to the marginally relevant or irrelevant BCS coupling. These constraints need to be kept track of by hand and do not immediately follow from any symmetry principle. Instead, the forward scattering - BCS constraint is a consequence of having to decompose a single local operator into different scattering channels that are scaling covariant, but at the cost of an added redundancy. Furthermore, while coupling LFLT to background gauge fields was a straightforward task, it is much less obvious how one couples this EFT to background gauge fields, given that the EFT lives in momentum space, where no standard minimal coupling procedure exists. Two remedies for the former issue have been considered, which we will collectively refer to as the 'contemporary' formalism, which we review next. Alternate functional RG schemes for Fermi surfaces which hope to capture physics beyond Shankar-Polchinski RG have also recently been developed in [38; 39].

### "Contemporary Fermi liquids": Patch theory and traditional bosonization

One of the key takeaways of the Shankar-Polchinski RG scheme is that, barring BCS interactions, particle-hole pairs have a significant impact on low energy physics only when they are sufficiently close to each other in momentum space (compared to \(p_{F}\)). This suggests that one potential workaround to the issue of interactions not having fixed scaling dimensions is the following: we can discretize the Fermi surface into a number of patches of the same size, labelled by a discrete index \(\eta\) (figure 4), and subsequently separate interactions into intra-patch and inter-patch scattering. The free fermion action Fourier transformed back to coordinate space can be written as a sum over patches, \[S=\sum_{\eta}\int d^{d-1}x_{\parallel}\int dtdx_{\perp}\Psi_{\eta}^{\dagger}(x_{\perp})\left(\partial_{t}+v_{F\eta}\partial_{x_{\perp}}\right)\Psi_{\eta}(x_{\perp})\,, \tag{2.15}\] where \(x_{\perp}\) is a coordinate that is Fourier-conjugate to \(\mathbf{k}\), the momentum vector orthogonal to the Fermi surface, \(\mathbf{x}_{\parallel}\) are coordinates conjugate to the transverse directions within a patch, and \(\Psi_{\eta}\) is the fermion on each patch defined by \[\psi(\mathbf{x})=\sum_{\eta}e^{i\mathbf{p}_{F\eta}\cdot\mathbf{x}}\Psi_{\eta}(x_{\perp})\,. \tag{2.16}\]

Figure 4: Dali's self-portrait under a patch decomposition.

Interactions can then be separated into intra-patch scattering terms, which couple fermions within the same patch \(\eta\), while inter-patch scattering terms couple two different patches \(\eta\neq\eta^{\prime}\). If we restrict our attention to a single patch \(\eta_{0}\), the effect of the latter is simply a logarithmic renormalization of the field strength of \(\Psi_{\eta_{0}}\) as well as its dispersion relation, so inter-patch interactions can be ignored. Intra-patch coupling can be analyzed in the usual way under rescaling of momenta toward the Fermi surface, transverse to the patch. Since the width of the patch is not rescaled in this procedure, the number of patches does not change under rescaling.
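As a concrete picture of the decomposition (2.16), the toy sketch below splits a circular Fermi surface in \(d=2\) into \(N\) equal patches and resolves a momentum near the surface into a patch label \(\eta\) plus a normal component \(k_{\perp}\); all names and numerical values here are illustrative.

```python
# Sketch: discretizing a circular Fermi surface (d = 2) into N patches, as in eq. (2.15)-(2.16).
import numpy as np

pF, vF, N = 1.0, 1.0, 24
theta = 2*np.pi*(np.arange(N) + 0.5)/N                            # patch centres
pF_eta = pF*np.stack([np.cos(theta), np.sin(theta)], axis=1)      # patch Fermi momenta
vF_eta = vF*pF_eta/pF                                             # patch Fermi velocities (normal directions)

# Assign a momentum near the surface to its patch and split p = pF_eta + k.
p = np.array([0.93, 0.41])
eta = np.argmax(pF_eta @ p)                                       # nearest patch by angle
k_perp = (p - pF_eta[eta]) @ (vF_eta[eta]/vF)                     # component along the patch normal
print(eta, k_perp, np.linalg.norm(p) - pF)                        # k_perp ~ |p| - pF up to curvature terms
```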
#### 1. Fermionic patch theory

The patch theory in the Shankar-Polchinski RG scheme has an important drawback. Discretizing the Fermi surface makes it so that each patch is effectively flat at low energies. To see this, consider the leading irrelevant correction to the quadratic action, which comes from the curvature of the Fermi surface within the patch, \[S=\int d^{d-1}x_{\parallel}\int dtdx_{\perp}\Psi^{\dagger}(x_{\perp})\left(\partial_{t}+v_{F}\partial_{x_{\perp}}+\frac{\kappa}{2}\nabla_{\parallel}^{2}\right)\Psi(x_{\perp})\,, \tag{2.17}\] where we have dropped the patch index \(\eta_{0}\). Since \(\mathbf{x}_{\parallel}\) does not scale under the Shankar-Polchinski RG scheme, the curvature \(\kappa\) scales to zero and we lose crucial information about the shape of the Fermi surface. An alternate RG scheme that is more suitable to the patch description [13; 14] (see, e.g., [40] for a pedagogical description) and preserves the curvature of the Fermi surface is one where the coordinates \(\mathbf{x}_{\parallel}\) scale like \((x_{\perp})^{1/2}\). The curvature term is now scale invariant under this scale transformation, at the expense of the width of the patch scaling down to zero, resulting in a proliferation of the number of patches at the scale invariant fixed point.

Figure 5: A single Fermi surface patch.

But if we are only concerned with the low energy properties of fermions within a single patch, we can ignore this drawback. As far as I am aware, no systematic analysis of the consequences of the proliferation of the number of patches exists in the literature, and in particular it is unclear whether this blow up modifies the RG flow of a single patch in any significant way. One can show that intra-patch scattering from contact interactions under patch scaling is strictly irrelevant in all dimensions, which provides some evidence for the stability of Fermi liquids. Inter-patch couplings can at most logarithmically renormalize the field strength of the patch fermion and the Fermi velocity, and are often ignored.
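The power counting under this patch scaling is short enough to record; the following is a sketch, with \([\,\cdot\,]\) now denoting dimensions under \(\omega\to s\omega\), \(k_{\perp}\to sk_{\perp}\), \(\mathbf{k}_{\parallel}\to s^{1/2}\mathbf{k}_{\parallel}\). Since \([\omega]=[k_{\perp}]=1\) and \([\mathbf{k}_{\parallel}]=\tfrac{1}{2}\), all three terms in the kernel of equation (2.17) have dimension one, \[[\omega]=[v_{F}k_{\perp}]=[\kappa\,\mathbf{k}_{\parallel}^{2}]=1\,,\] so the curvature \(\kappa\) is exactly marginal rather than being scaled away. Fixing the fermion dimension from the quadratic action, \(2[\Psi(k)]=-\big(2+\tfrac{d-1}{2}+1\big)\), a patch-local contact interaction \(u\,\Psi^{\dagger}\Psi\Psi^{\dagger}\Psi\) is then found to carry \([u]=-\tfrac{d-1}{2}\), which is the statement above that such couplings are strictly irrelevant for \(d>1\) (and marginal in \(d=1\), where Luttinger liquid physics takes over).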
The only interactions that can modify the RG flow are then those that are mediated by a gapless mode. Fermionic patch theory is hence often used as an effective description for non-Fermi liquids, since it provides an RG scheme where other interactions between patch fermions can be safely ignored, in favour of interactions mediated by the gapless mode, which couples most strongly to patches that are tangential to its momentum [12; 41]. Fermionic patch theory has a few more drawbacks. Firstly, in restricting the theory to a single patch, we lose locality in position space. Secondly, single-patch theory cannot accommodate BCS interactions either, which raises questions about the validity of RG flows derived from it. The usual expectation and/or hope is that the NFL fixed point obtained from patch theory would have its own superconducting instability, which would lead it to a superconducting fixed point with the same universal properties as the infrared (IR) fixed point of the physical RG flow without restricting to patches. Lastly, patch theory can only be used for understanding RG flows, but not for calculating physical quantities such as transport properties, for which we need to sum over all patches and be mindful about the proliferation of patches in the IR. Furthermore, the resistance of the Shankar-Polchinski EFT to gauging persists in fermionic patch theory as well. Additionally, even though fermionic patch theory has attractive properties under RG and simplifies the calculation of scaling dimensions for various operators, the scaling behaviour of correlation functions calculated from patch theory is still not transparent. Various cancellations among diagrams can occur [42; 43] that alter the IR scaling form of the correlation functions and invalidate power counting arguments. We will discuss this in more detail in section V.5 and demonstrate how the postmodern formalism resolves this difficulty.

#### 2. Bosonization of patch fermions

Another approach that starts with the description in terms of patchwise chiral fermions but tries to preserve locality in position space is inspired by bosonization in 1+1d [44]. This approach was developed independently by Haldane [15] and by Castro-Neto and Fradkin [16], and further developed by Houghton, Kwon and Marston [17]. Since each patch fermion is a 1+1d chiral fermion, it can be independently bosonized into a collection of chiral bosons to give the following effective action: \[S=-p_{F}^{d-3}\sum_{\eta}\int dtd^{d}x\,\left(\mathbf{p}_{F\eta}\cdot\nabla_{\mathbf{x}}\phi_{\eta}\right)\left(\partial_{t}+v_{F\eta}\mathbf{p}_{F\eta}\cdot\nabla_{\mathbf{x}}\right)\phi_{\eta}\,. \tag{2.18}\] Although this formalism is local in position space, it suffers from the same drawback as patch theory under Shankar-Polchinski scaling -- it cannot accommodate nonlinear-in-\(\phi_{\eta}\) corrections from Fermi surface curvature and the dispersion relation. This has serious consequences, since even though the nonlinear corrections are irrelevant in Shankar-Polchinski scaling, they contribute at leading order to various higher point correlation functions, which traditional bosonization sans higher order corrections incorrectly suggests would vanish. For instance, the particle number density in traditional bosonization is linear in \(\phi\), and since the action is quadratic in \(\phi\), the density (\(n>2\))-point functions calculated from this action are strictly zero, which certainly is not the case even for free fermions. In order to solve this issue, various authors appealed to a more algebro-geometric picture underlying the interpretation of Fermi liquid theory as describing the dynamics of droplets in phase space [45; 46; 47; 48; 49; 50] similar to quantum Hall droplets on the lowest Landau level in the plane [51; 52; 53]. This approach is an early precursor to the postmodern formalism described in this dissertation.

## III Postmodern Fermi Liquids: A Conceptual Overview

The starting point for our theory is the observation that the operator algebra constructed from microscopic fermions \(\psi({\bf x})\) has a sub-algebra that is closed under commutators. This is the algebra of operators spanned by (anti-Hermitian) charge 0 fermion bilinears (see section IV for details and precise definitions), \[T({\bf x},{\bf y})\sim i\psi^{\dagger}({\bf x})\psi({\bf y})\,. \tag{3.1}\] For theories whose Hamiltonian can be written entirely in terms of these bilinears, the closure of the sub-algebra guarantees that we can restrict our attention to the dynamics of operators in this sub-algebra in the Heisenberg picture, or classes of states distinguished only by expectation values of such operators in the Schrodinger picture.
What remains is to find a convenient parametrization for this large space of operators, or equivalently, for the dual space of states, and figure out how to identify states with Fermi surfaces, to which the next two sections are dedicated. While this is straightforward in principle, some assumptions and approximations need to be made to make it useful in practice. These will be elucidated in the following section. Conveniently, the question of how to parametrize a Lie algebra and its dual space has a well-established answer in mathematical literature, known as the coadjoint orbit method [54; 55; 56]. This method was historically developed as a procedure for finding representations of Lie groups, but can also be interpreted as a means of setting up a dynamical system on a Lie group in the Hamiltonian formalism, and then turning that Hamiltonian formalism into an action. The Hamiltonian/action describe time evolution on the Lie algebra, which in our case is the space of fermion bilinears, in the Heisenberg picture, or equivalently on its dual space, which is the space of states, in the Schrodinger picture8. Footnote 8: Quantization of this action then gives representations of the Lie group under consideration.

Figure 6: An artificial intelligence's impression of postmodern Fermi liquid theory.

### The Lie algebra of fermion bilinears

Fermion bilinears \(T(\mathbf{x},\mathbf{y})\) form a basis for our Lie algebra, which we will call \(\mathfrak{g}\). A general element of this algebra is a linear combination, \[O_{F}\equiv\int d^{d}xd^{d}y\ F(\mathbf{x},\mathbf{y})T(\mathbf{x},\mathbf{y})\sim i\int d^{d}xd^{d}y\ F(\mathbf{x},\mathbf{y})\psi^{\dagger}(\mathbf{x})\psi(\mathbf{y})\,, \tag{3.2}\] where \(F(\mathbf{x},\mathbf{y})\) is a generic function of two variables. It will be more convenient for us to work with the Wigner transform of the generators: \[T(\mathbf{x},\mathbf{p})\equiv\int d^{d}y\ T\left(\mathbf{x}+\frac{\mathbf{y}}{2},\mathbf{x}-\frac{\mathbf{y}}{2}\right)e^{i\mathbf{p}\cdot\mathbf{y}}\,, \tag{3.3}\] in which case, a general element of the Lie algebra, \[O_{F}\equiv\int\frac{d^{d}xd^{d}p}{(2\pi)^{d}}F(\mathbf{x},\mathbf{p})T(\mathbf{x},\mathbf{p})\,, \tag{3.4}\] is instead characterized by a function \(F(\mathbf{x},\mathbf{p})\) of coordinates and momenta. The function \(F(\mathbf{x},\mathbf{p})\) can be thought of as the components of the Lie algebra vector \(O_{F}\), with \(\mathbf{x},\mathbf{p}\) being indices. Since we have already picked a preferred basis for \(\mathfrak{g}\), we will often refer to the function \(F\) itself as the Lie algebra vector by a slight abuse of terminology. Using the anti-commutation relations for the fermion creation and annihilation operators, one can show that the commutator of two Lie algebra vectors corresponding to functions \(F(\mathbf{x},\mathbf{p})\) and \(G(\mathbf{x},\mathbf{p})\) takes the following form: \[[O_{F},O_{G}]=O_{\{\!\{F,G\}\!\}}\,, \tag{3.5}\] where the operation in the subscript of the right hand side is the Moyal bracket of two functions, \[\{\!\{F,G\}\!\}(\mathbf{x},\mathbf{p})\equiv 2\ F(\mathbf{x},\mathbf{p})\sin\left(\frac{\overleftarrow{\nabla}_{\mathbf{x}}\cdot\overrightarrow{\nabla}_{\mathbf{p}}-\overleftarrow{\nabla}_{\mathbf{p}}\cdot\overrightarrow{\nabla}_{\mathbf{x}}}{2}\right)G(\mathbf{x},\mathbf{p})\,. \tag{3.6}\] Note that up until this point, all of our formulas are exact.
So far we are working in the full quantum theory, despite the simultaneous occurrence of both position and momentum. This is essentially achieved by a quantization scheme that is different from but equivalent to canonical quantization, known as Weyl quantization (or deformation quantization for more general phase spaces). Our Lie algebra can hence be characterized as the set of all functions of a single-particle phase space, equipped with the Moyal bracket, \[\mathfrak{g}_{\text{Moyal}}\equiv\left(\,\{F(\mathbf{x},\mathbf{p})\}\,;\ \{\!\{\cdot\,,\cdot\}\!\}\,\right). \tag{3.7}\] We will refer to this as the Moyal algebra or the Weyl algebra9. The associated Lie group consists of the exponents of the bilinear operators \(e^{\mathcal{O}_{F}}\). The coadjoint orbit method can be applied directly to the Moyal algebra to yield a formal action that would in principle exactly describe Fermi surfaces, but this action is unwieldy in practice, owing to the fact that the Moyal bracket in equation (3.6) is only defined in a power series in phase space derivatives, with convergence of the power series having been established only for limited classes of functions [57]. Footnote 9: The Weyl algebra is actually a subalgebra of the Moyal algebra, consisting of only polynomial functions. To ameliorate this issue, we can consider a truncation of the Moyal algebra to leading order in the series expansion, which gives the Poisson bracket, \[\begin{split}\{\!\{F,G\}\!\}&=\{F,G\}+\mathcal{O}(\nabla_{\mathbf{x}},\nabla_{\mathbf{p}})^{3}\,,\\ \{F,G\}&\equiv\nabla_{\mathbf{x}}F\cdot\nabla_{\mathbf{p}}G-\nabla_{\mathbf{p}}F\cdot\nabla_{\mathbf{x}}G\,,\end{split} \tag{3.8}\] providing an approximate, semi-classical, action-based description of Fermi liquids via the coadjoint orbit method applied to the truncated Lie algebra of the set of functions of a single-particle phase space, equipped with the Poisson bracket instead of the Moyal bracket, \[\mathfrak{g}=\left(\,\{F(\mathbf{x},\mathbf{p})\}\,;\ \{\cdot\,,\cdot\}\,\right)\,. \tag{3.9}\] We will refer to this as the Poisson algebra. Importantly, this is the only truncation of the Moyal algebra that preserves the Jacobi identity. We emphasize that the Poisson algebra is _not_ a sub-algebra of the Moyal algebra, but rather a truncation of the Lie bracket.
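As a sanity check on the truncation (3.8), the short SymPy sketch below expands the Moyal bracket of equation (3.6) on a one-dimensional phase space and compares it to the Poisson bracket; the helper names are mine and the series is truncated by hand. For generators at most quadratic in \(x\) and \(p\) the two brackets coincide exactly, while for higher-order polynomials the higher-derivative corrections appear.

```python
# Sketch: Moyal bracket {{F,G}} = 2 F sin(P/2) G expanded in powers of the bidirectional
# operator P, versus the Poisson bracket, on a 1D phase space (x, p).
import sympy as sp
from math import comb, factorial

x, p = sp.symbols('x p', real=True)

def d(expr, var, n):
    return expr if n == 0 else sp.diff(expr, var, n)

def bidiff(F, G, k):
    """F P^k G with P = (left d_x)(right d_p) - (left d_p)(right d_x)."""
    return sum((-1)**j * comb(k, j) * d(d(F, x, k - j), p, j) * d(d(G, p, k - j), x, j)
               for j in range(k + 1))

def poisson(F, G):
    return sp.diff(F, x)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, x)

def moyal(F, G, orders=4):
    """2 sin(P/2) expanded: sum_m (-1)^m P^(2m+1) / (4^m (2m+1)!)."""
    return sum(sp.Rational((-1)**m, 4**m * factorial(2*m + 1)) * bidiff(F, G, 2*m + 1)
               for m in range(orders))

print(sp.simplify(moyal(x**2, p**2) - poisson(x**2, p**2)))  # 0: exact for quadratic generators
print(sp.simplify(moyal(x**3, p**3) - poisson(x**3, p**3)))  # -3/2: a third-derivative correction
```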
The Poisson algebra has a useful physical interpretation that can be assigned to it: it is the Lie algebra of infinitesimal canonical transformations of the single-particle phase space. A typical element \(F(\mathbf{x},\mathbf{p})\) of the Poisson algebra generates a canonical transformation in the following way: we can define new coordinates, \[\begin{split}\mathbf{x}^{\prime}&=\mathbf{x}-\nabla_{\mathbf{p}}F\,,\\ \mathbf{p}^{\prime}&=\mathbf{p}+\nabla_{\mathbf{x}}F\,.\end{split} \tag{3.10}\] We can verify that the transformed coordinates \(\mathbf{x}^{\prime},\mathbf{p}^{\prime}\) are canonical pairs. This transformation can be understood as Hamiltonian evolution for infinitesimal time under the Hamiltonian \(F({\bf x},{\bf p})\), and we can also verify that the commutator of two such infinitesimal transformations parametrized by functions \(F({\bf x},{\bf p})\) and \(G({\bf x},{\bf p})\) is an infinitesimal transformation parametrized by the Poisson bracket \(\{F,G\}({\bf x},{\bf p})\). The quickest way to see this is to note that the infinitesimal transformation is generated by the phase space vector field: \[X_{F}=\nabla_{\bf x}F\cdot\nabla_{\bf p}-\nabla_{\bf p}F\cdot\nabla_{\bf x}\,, \tag{3.11}\] and then evaluating the commutator of two vector fields \([X_{F},X_{G}]\) viewed as differential operators acting on test functions. It is not hard to see that \[[X_{F},X_{G}]\cdot K({\bf x},{\bf p})=X_{\{F,G\}}\cdot K({\bf x},{\bf p})\,, \tag{3.12}\] for any function \(K({\bf x},{\bf p})\). The corresponding Lie group is naturally that of canonical transformations under finite time. For each element \(F({\bf x},{\bf p})\in{\mathfrak{g}}\) of the Poisson algebra, we will define the exponent map, denoted by \(\exp\), that associates with \(F\) the canonical transformation \(U\) obtained by time evolving under \(F\) for unit time. The set of all such \(U\)'s is the group of canonical transformations that we are concerned with (known in the math literature as the group of Hamiltonian symplectomorphisms), \[{\cal G}\equiv\{U=\exp F\ |\ F\in{\mathfrak{g}}\}\,. \tag{3.13}\] Note that the exponent map \(\exp F\) from the Lie algebra to the Lie group is different from the point-wise exponential of the function \(e^{F}=1+F+F^{2}/2+\ldots\). To avoid confusion, we will restrict ourselves to using \(\exp\) for the Lie-algebra-to-Lie-group exponent map instead of writing it as \(e^{F}\). The truncation of the Moyal algebra to the Poisson algebra is subtle and requires some more scrutiny. We will revisit this in section IV and clarify the consequences of this truncation, including a discussion on which properties this approximation successfully captures and which ones it misses out on. Having understood the operator algebra of concern, we now move on to describing the corresponding space of states that we will be interested in.

### The space of states

In any quantum mechanical system, states are described by density matrices \(\rho\), which can be thought of as linear maps acting on operators to give the expectation value of the operator in the chosen state, \[\rho[{\cal O}]\equiv\langle{\cal O}\rangle_{\rho}={\rm Tr}(\rho{\cal O})\,. \tag{3.14}\] In principle, if we have access to every operator in the theory, each state is uniquely determined by the list of expectation values of every operator in that state. But since we are only concerned with the subalgebra of charge-neutral fermion bilinears, we inevitably end up being unable to distinguish all microscopic states from each other, but instead are restricted to equivalence classes of microscopic states, where equivalence is established by requiring identical expectation values of all fermion bilinears. A typical representative of any such equivalence class can be described as follows. Having chosen the basis \(T(\mathbf{x},\mathbf{p})\) for the space of fermion bilinears, we can pick a dual basis to it, which we will denote by operators \(W(\mathbf{x},\mathbf{p})\), which have the orthogonality property: \[\mathrm{Tr}\left(W(\mathbf{x}^{\prime},\mathbf{p}^{\prime})T(\mathbf{x},\mathbf{p})\right)=\delta(\mathbf{x}-\mathbf{x}^{\prime})(2\pi)^{d}\delta(\mathbf{p}-\mathbf{p}^{\prime})\,. \tag{3.15}\] A representative of the equivalence class of states can be expanded in this dual basis with the 'coefficients' given by a function of \(\mathbf{x},\mathbf{p}\), \[\rho_{f}=\int\frac{d^{d}xd^{d}p}{(2\pi)^{d}}f(\mathbf{x},\mathbf{p})W(\mathbf{x},\mathbf{p})\,. \tag{3.16}\]
In this state, the expectation value of a bilinear operator \(O_{F}\) simplifies to \[\mathrm{Tr}(\rho_{f}O_{F})=\int\frac{d^{d}xd^{d}p}{(2\pi)^{d}}f(\mathbf{x},\mathbf{p})F(\mathbf{x},\mathbf{p})\equiv\langle f,F\rangle\,. \tag{3.17}\] Naturally, this set of equivalence classes is the set of linear maps from \(\mathfrak{g}_{\mathrm{Moyal}}\) to \(\mathbb{C}\), also known as the dual space of \(\mathfrak{g}_{\mathrm{Moyal}}\), which we will denote by \(\mathfrak{g}^{*}\). \[\begin{split}\mathfrak{g}^{*}&\equiv\left\{f(\mathbf{x},\mathbf{p})\right\},\\ f[F]&\equiv\langle f,F\rangle\equiv\int\frac{d^{d}xd^{d}p}{(2\pi)^{d}}f(\mathbf{x},\mathbf{p})F(\mathbf{x},\mathbf{p})\,,\end{split} \tag{3.18}\] where the second line defines the action of the linear map \(f\) on an element \(F\) of \(\mathfrak{g}_{\mathrm{Moyal}}\). Note that the dual space is independent of the Lie bracket. Hence, the Moyal algebra and the Poisson algebra share the same dual space \(\mathfrak{g}^{*}\). Ordinarily in physics, vector spaces and their dual spaces are not distinguished, since they are isomorphic to each other for finite dimensional vector spaces. However, for our purposes we find it crucial to make this pedantic distinction, since the Lie algebra and its dual space will take different physical interpretations and consequently will be equipped with different mathematical structures later. That the expectation values of operators \(O_{F}\) in a state \(\rho_{f}\) can be written in the form of equation (3.17) provides the following interpretation for the functions \(F(\mathbf{x},\mathbf{p})\) and \(f(\mathbf{x},\mathbf{p})\) in the semiclassical limit: the function \(F(\mathbf{x},\mathbf{p})\) that characterizes the linear combination of fermion bilinears will be understood as a single-particle observable, while the function \(f(\mathbf{x},\mathbf{p})\) characterizing the state is the effective single-particle phase space distribution function (or simply the distribution for brevity) that enters the Boltzmann equation. This connection to the Boltzmann equation will become more precise as we develop the Hamiltonian formalism later in section IV, whose equation of motion in the semi-classical limit is precisely the collisionless Boltzmann equation (or Landau's kinetic equation for interacting Fermi liquids). The pairing or inner product \(\langle f,F\rangle\) between elements of \(\mathfrak{g}^{*}\) and \(\mathfrak{g}\) is then just the average value of the single-particle observable \(F(\mathbf{x},\mathbf{p})\) in the distribution \(f(\mathbf{x},\mathbf{p})\).
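As a quick consistency check on this interpretation, the sketch below evaluates the pairing (3.17) for the zero temperature ground state \(f_{0}(\mathbf{p})=\Theta(p_{F}-|\mathbf{p}|)\) and the observables \(F=1\) and \(F=\mathbf{p}^{2}/2m\) in \(d=3\), recovering the textbook density and energy density of a spinless free Fermi gas; the numerical values of \(m\) and \(p_{F}\) are arbitrary choices made for the check.

```python
# Sketch: <f0, F> of eq. (3.17) for the filled Fermi sea in d = 3, spinless fermions.
import numpy as np

m, pF = 1.0, 1.0
eF = pF**2 / (2*m)

# For rotationally invariant F(|p|): \int d^3p/(2pi)^3 -> (1/2pi^2) \int_0^pF dp p^2
pgrid = np.linspace(0.0, pF, 200001)
measure = pgrid**2 / (2*np.pi**2)

density = np.trapz(measure, pgrid)                      # <f0, 1>
energy  = np.trapz(measure * pgrid**2/(2*m), pgrid)     # <f0, p^2/2m>

print(density, pF**3/(6*np.pi**2))                # matches n = pF^3 / (6 pi^2)
print(energy,  0.6 * pF**3/(6*np.pi**2) * eF)     # matches E/V = (3/5) n eF
```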
Second, we attempt to Legendre transform the Hamiltonian into an action. Performing this Legendre transform is a highly non-trivial task, since it turns out that we have to restrict our state space \(\mathfrak{g}^{*}\) further in order to achieve this. This restriction, however, is natural, since the set of all possible configurations of the distribution function \(f(\mathbf{x},\mathbf{p})\) is too large a set to describe sharp Fermi surfaces at zero temperature. We need only consider functions that take values of either \(0\) or \(1\), with the boundary between the two values being the Fermi surface. These functions must also have fixed phase space volume due to Luttinger's theorem. It turns out that restricting \(\mathfrak{g}^{*}\) to such states is precisely what is needed to Legendre transform the Hamiltonian to an action. This restriction, therefore, is both physically motivated and mathematically necessary, and we will find that Luttinger's theorem is automatically built into our formalism. Consequently, the postmodern formalism for Fermi liquids essentially describes the dynamics of a fluctuating codimension-one surface in phase space whose topology is \(\mathbb{R}^{d}\times S^{d-1}\), i.e. that of a sphere at every point \(\mathbf{x}\) (figure 6). The next two sections are devoted to the two respective steps described above, and a survey of the necessary approximations and consequent validity/invalidity of these steps.

## The operator algebra and the Hamiltonian formalism

Before developing the Hamiltonian formalism, we first survey the algebra of fermion bilinears more carefully. We will make a small modification to our definition of the generators and define them instead in center of mass and relative coordinates as \[T(\mathbf{x},\mathbf{y})\equiv\frac{i}{2}\left[\psi^{\dagger}\left(\mathbf{x}+\frac{\mathbf{y}}{2}\right)\psi\left(\mathbf{x}-\frac{\mathbf{y}}{2}\right)-\psi\left(\mathbf{x}-\frac{\mathbf{y}}{2}\right)\psi^{\dagger}\left(\mathbf{x}+\frac{\mathbf{y}}{2}\right)\right]\,. \tag{4.1}\] The canonical anti-commutation relations for the fermion operators, \([\psi(\mathbf{x}),\psi^{\dagger}(\mathbf{y})]_{+}=i\delta(\mathbf{x}-\mathbf{y})\), imply that this definition only differs from equation (3.1) by a delta function, which serves to regulate the coincidence limit \(T(\mathbf{x},0)\). Furthermore, the Hermitian conjugate takes the form, \[T^{\dagger}(\mathbf{x},\mathbf{y})=-T(\mathbf{x},-\mathbf{y})\,.
\tag{4.2}\] The various Fourier transforms of this generator will be useful for later: \[\begin{split} T(\mathbf{x},\mathbf{y})&\equiv \frac{i}{2}\left[\psi^{\dagger}\left(\mathbf{x}+\frac{\mathbf{y}}{2}\right) \psi\left(\mathbf{x}-\frac{\mathbf{y}}{2}\right)-\psi\left(\mathbf{x}-\frac{ \mathbf{y}}{2}\right)\psi^{\dagger}\left(\mathbf{x}+\frac{\mathbf{y}}{2} \right)\right]\,,\\ T(\mathbf{q},\mathbf{p})&\equiv\frac{i}{2}\left[ \psi^{\dagger}\left(\frac{\mathbf{q}}{2}+\mathbf{p}\right)\psi\left(\frac{ \mathbf{q}}{2}-\mathbf{p}\right)-\psi\left(\frac{\mathbf{q}}{2}-\mathbf{p} \right)\psi^{\dagger}\left(\frac{\mathbf{q}}{2}+\mathbf{p}\right)\right]\,,\\ T(\mathbf{x},\mathbf{p})&\equiv\int_{\mathbf{y}}T( \mathbf{x},\mathbf{y})e^{i\mathbf{p}\cdot\mathbf{y}}=\int_{\mathbf{q}}T( \mathbf{q},\mathbf{p})e^{-i\mathbf{q}\cdot\mathbf{x}}\,,\\ T(\mathbf{q},\mathbf{y})&\equiv\int_{\mathbf{x}, \mathbf{p}}T(\mathbf{x},\mathbf{p})e^{i\mathbf{q}\cdot\mathbf{x}}e^{-i \mathbf{p}\cdot\mathbf{y}}=\int_{\mathbf{x}}T(\mathbf{x},\mathbf{y})e^{i \mathbf{q}\cdot\mathbf{x}}=\int_{\mathbf{p}}T(\mathbf{q},\mathbf{p})e^{-i \mathbf{p}\cdot\mathbf{y}}\,,\end{split} \tag{4.3}\] where integrals over momenta \(\mathbf{q}\) and \(\mathbf{p}\) are defined with an implicit factor of \(1/(2\pi)^{d}\). Our convention for the fermion annihilation operator \(\psi(\mathbf{k})\) in momentum space is that \(\psi(\mathbf{k})\) is simply the Fourier transform of \(\psi(\mathbf{x})\). When acting on the Fermi surface it creates a state with momentum \(\mathbf{k}\). Therefore, it creates a hole at the point \(-\mathbf{k}\) in the Fermi sea. This is different from the usual convention in condensed matter physics, where the annihilation operator \(c_{\mathbf{k}}\) is defined so that it creates a hole at the point \(\mathbf{k}\), thereby creating a state with total momentum \(-\mathbf{k}\). It is worth emphasizing that in the notation we have chosen above, \(\mathbf{x}\) is the center of mass coordinate of the particle-hole pair described by the fermion bilinear, \(\mathbf{y}\) is the relative coordinate or the separation between them. Analogously, given that \(\psi(\mathbf{k})\) creates a hole with momentum \(-\mathbf{k}\), the Fourier conjugate \(\mathbf{q}\) to the center of mass coordinate \(\mathbf{x}\) measures the momentum of the particle-hole pair, which is the difference of the individual momenta of the particle and hole. The Fourier conjugate \(\mathbf{p}\) to the separation \(\mathbf{y}\) is the average of the individual momenta of the particle and the hole, so the average location of the particle hole pair in momentum space (figure 7). We shall restrict ourselves to using this notation convention throughout this thesis, so the arguments of the generator and their specific order should make it clear to which Fourier transform we are referring. All of the above Fourier transforms are traceless in a fermionic Fock space. Additionally, our definitions imply that \(T({\bf x},{\bf p})\), in particular, is anti-Hermitian, \[T^{\dagger}({\bf x},{\bf p})=-T({\bf x},{\bf p})\,. 
\tag{4.4}\] The commutator of these generators closes, and we find \[\begin{split}&[T({\bf q},{\bf y}),T({\bf q}^{\prime},{\bf y}^{\prime})]=2\sin\left(\frac{{\bf q}^{\prime}\cdot{\bf y}-{\bf q}\cdot{\bf y}^{\prime}}{2}\right)T({\bf q}+{\bf q}^{\prime},{\bf y}+{\bf y}^{\prime})\,,\\ &[T({\bf x},{\bf p}),T({\bf x}^{\prime},{\bf p}^{\prime})]=2\sin\left(\frac{\nabla_{\bf x}\cdot\nabla_{{\bf p}^{\prime}}-\nabla_{{\bf x}^{\prime}}\cdot\nabla_{{\bf p}}}{2}\right)[\delta({\bf x}-{\bf x}^{\prime})\delta({\bf p}-{\bf p}^{\prime})T({\bf x},{\bf p})]\,.\end{split} \tag{4.5}\] The coefficient functions or differential operators on the right-hand side are the "structure constants" of the Lie algebra \(\mathfrak{g}_{\rm Moyal}\), whose typical element is a general linear combination \[O_{F}\equiv\int_{{\bf x}{\bf p}}F({\bf x},{\bf p})T({\bf x},{\bf p})\,, \tag{4.6}\] where \(F({\bf x},{\bf p})\) is an arbitrary function, to be thought of as the set of coefficients of the vector \(O_{F}\) in the basis \(T({\bf x},{\bf p})\), with \(({\bf x},{\bf p})\) playing the role of "indices" in this expansion. This results in the Moyal bracket for the commutator of generic linear combinations, \[\begin{split}&[O_{F},O_{G}]=O_{\{\!\{F,G\}\!\}}\,,\\ &\{\!\{F,G\}\!\}=2\ F\sin\left(\frac{\overleftarrow{\nabla}_{\bf x}\cdot\overrightarrow{\nabla}_{\bf p}-\overleftarrow{\nabla}_{\bf p}\cdot\overrightarrow{\nabla}_{\bf x}}{2}\right)G\,.\end{split} \tag{4.7}\] Our generators also obey orthogonality relations: \[\begin{split}&\operatorname{Tr}[T({\bf x},{\bf p})T({\bf x}^{\prime},{\bf p}^{\prime})]=2\delta({\bf x}-{\bf x}^{\prime})(2\pi)^{d}\delta({\bf p}-{\bf p}^{\prime})\,,\\ &\operatorname{Tr}[T({\bf q},{\bf y})T({\bf q}^{\prime},{\bf y}^{\prime})]=2(2\pi)^{d}\delta({\bf q}+{\bf q}^{\prime})\delta({\bf y}+{\bf y}^{\prime})\,,\end{split} \tag{4.8}\] where the trace is taken in the fermionic Fock space.

Figure 7: Particle-hole configuration in the parametrization of equation (4.3).

The space of all charge-0 bosonic operators hence forms an infinite-dimensional Lie algebra, known as the Moyal algebra. We will restrict ourselves to a class of microscopic Hamiltonians that can be expanded in a polynomial expansion in the generators of this algebra, \[\begin{split} H_{\text{micro}}&=\int_{\mathbf{p}}\epsilon(\mathbf{p})\psi^{\dagger}(\mathbf{p})\psi(-\mathbf{p})\\ &+\int_{\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p}_{4}}V(\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p}_{4})\psi^{\dagger}(\mathbf{p}_{1})\psi(\mathbf{p}_{2})\psi^{\dagger}(\mathbf{p}_{3})\psi(\mathbf{p}_{4})\delta(\mathbf{p}_{1}+\mathbf{p}_{2}+\mathbf{p}_{3}+\mathbf{p}_{4})\\ &+\mathcal{O}(\psi^{\dagger}\psi)^{3}\,,\end{split} \tag{4.9}\] where \(\epsilon(\mathbf{p})\) is the free particle dispersion, \(V(\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p}_{4})\) characterizes \(2\to 2\) scattering processes, and so on for higher order terms.

### Semi-classical truncation of the Moyal algebra

While the discussion so far has been exact, in practice, using the Moyal algebra can be extremely tedious since the star product and the Moyal bracket are defined as series expansions.
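To make this concrete, the following small SymPy sketch (our own illustration, in one spatial dimension and with \(\hbar=1\); it is not part of the source derivation) evaluates the Moyal bracket of equation (4.7) as a truncated bidifferential series and compares it with the Poisson bracket. For the monomials \(x^{3}\) and \(p^{3}\) the two already differ at third order in derivatives, which is precisely the kind of correction dropped in the truncation discussed next.

```python
import sympy as sp
from math import comb, factorial

x, p = sp.symbols('x p', real=True)

def d(expr, nx, npp):
    """Apply d^nx/dx^nx and then d^npp/dp^npp to expr."""
    for _ in range(nx):
        expr = sp.diff(expr, x)
    for _ in range(npp):
        expr = sp.diff(expr, p)
    return expr

def poisson(F, G):
    return sp.expand(sp.diff(F, x)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, x))

def moyal(F, G, max_order=5):
    """Truncated 1d Moyal bracket  2 F sin((d_x.d_p - d_p.d_x)/2) G  of eq. (4.7), hbar = 1."""
    out = 0
    for n in range(1, max_order + 1, 2):          # the sine series only contains odd powers
        sign = (-1)**((n - 1)//2)                  # alternating signs of the sine expansion
        term = sum(comb(n, k)*(-1)**k*d(F, n - k, k)*d(G, k, n - k) for k in range(n + 1))
        out += 2*sign*term/(factorial(n)*2**n)
    return sp.expand(out)

F, G = x**3, p**3
print(poisson(F, G))   # 9*p**2*x**2
print(moyal(F, G))     # 9*p**2*x**2 - 3/2  -> Poisson bracket plus a third-derivative correction
```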
A remedy for this is provided by the Poisson truncation discussed in section III.1, \[\begin{split}\{\!\{F,G\}\!\}&=\{F,G\}+\mathcal{O}( \nabla_{\mathbf{x}},\nabla_{\mathbf{p}})^{3}\,,\\ \{F,G\}&\equiv\nabla_{\mathbf{x}}F\cdot\nabla_{ \mathbf{p}}G-\nabla_{\mathbf{p}}F\cdot\nabla_{\mathbf{x}}G\,.\end{split} \tag{4.10}\] The Poisson bracket is, in fact, the only truncation of the Moyal bracket that satisfies the Jacobi identity. This truncation, however, comes at a cost, and limits the validity of the theory to regimes where the Poisson bracket is a good approximation to the Moyal bracket. This is only true when \[\nabla_{\mathbf{x}}\cdot\nabla_{\mathbf{p}}\ll 1\,, \tag{4.11}\] which can be rephrased in three other ways by Fourier transforming \(\mathbf{x}\) and/or \(\mathbf{p}\): \[\nabla_{\mathbf{x}}\cdot\nabla_{\mathbf{p}}\ll 1\,,\quad\Leftrightarrow\quad \mathbf{q}\cdot\mathbf{y}\ll 1\,,\quad\Leftrightarrow\quad\nabla_{\mathbf{x}} \cdot\mathbf{y}\ll 1\,,\quad\Leftrightarrow\quad\mathbf{q}\cdot\nabla_{ \mathbf{p}}\ll 1\,. \tag{4.12}\] Recall that \(\mathbf{x}\) corresponds to the center of mass coordinate of a particle-hole pair, \(\mathbf{y}\) is the separation, \(\mathbf{q}\) measures the net momentum of the particle-hole excitation, and \(\mathbf{p}\) is the average of the momenta of the particle and the hole. With these in mind, equation (4.12) implies that the Poisson truncation of the Moyal algebra of fermion bilinears is applicable in situations where we have a separation of scales, with \((\mathbf{x},\mathbf{q})\) characterizing the long distance or infrared (IR) scale, and \((\mathbf{y},\mathbf{p})\) characterizing the short distance or ultraviolet (UV) scale. In position space, this means that we are restricting ourselves to probing physics at length-scales much larger than the typical separation of a particle-hole pair. In momentum space, a typical particle-hole excitation over a Fermi surface has \(|\mathbf{p}|\sim p_{F}\), and the Poisson truncation is valid for pairs whose net momentum is much smaller than that, i.e., \[|\mathbf{q}|\ll p_{F}\,. \tag{4.13}\] The corrections to the Poisson truncation can then be thought of as a derivative expansion with higher derivatives terms being suppressed owing to the fact that \[\nabla_{\mathbf{x}}\cdot\nabla_{\mathbf{p}}\sim\frac{|\nabla_{\mathbf{x}}|}{p_ {F}}\ll 1\,. \tag{4.14}\] With this analysis in mind, let us try to understand what consequences the Poisson truncation has for interactions between the fermions. 
We will consider the quartic term in the microscopic Hamiltonian, which can be written in the following way: \[\begin{split} H^{\text{int}}_{\text{micro}}&=\int_ {\mathbf{q},\mathbf{p};\mathbf{q}^{\prime},\mathbf{p}^{\prime}}V(\mathbf{q}, \mathbf{p};\mathbf{q}^{\prime},\mathbf{p}^{\prime})\psi^{\dagger}\left( \frac{\mathbf{q}}{2}+\mathbf{p}\right)\psi\left(\frac{\mathbf{q}}{2}-\mathbf{ p}\right)\psi^{\dagger}\left(\frac{\mathbf{q}^{\prime}}{2}+\mathbf{p}^{\prime} \right)\psi\left(\frac{\mathbf{q}^{\prime}}{2}-\mathbf{p}^{\prime}\right) \delta(\mathbf{q}+\mathbf{q}^{\prime})\\ &\simeq\int_{\mathbf{q},\mathbf{p};\mathbf{q}^{\prime},\mathbf{p }^{\prime}}V(\mathbf{q},\mathbf{p};\mathbf{q}^{\prime},\mathbf{p}^{\prime})T( \mathbf{q},\mathbf{p})T(\mathbf{q}^{\prime},\mathbf{p}^{\prime})\delta( \mathbf{q}+\mathbf{q}^{\prime})\,,\end{split} \tag{4.15}\] where the symbol \(\simeq\) means that we have ignored the quadratic terms generated upon replacing \(\psi^{\dagger}(\mathbf{k}_{1})\psi(\mathbf{k}_{2})\) with its antisymmetrized version. The above Hamiltonian characterizes \(2\to 2\) scattering processes. In general, the momenta \((\mathbf{q},\mathbf{p};\mathbf{q}^{\prime},\mathbf{p}^{\prime})\) could take any values allowing for generic scattering configurations on the Fermi surface. However, the semi-classical limit captures those configurations with \(|\mathbf{p}|\sim|\mathbf{p}^{\prime}|\sim p_{F}\), and \(\mathbf{q},\mathbf{q}^{\prime}\ll p_{F}\). This corresponds to particle-hole pairs close to the Fermi surface with small net momentum, such as the configuration in figure (a)a. Higher derivative corrections to the semiclassical limit then systematically account for particle-hole pairs with a larger separation in momentum space. ### Constructing the Hamiltonian formalism To recap the discussion in section III, we find a Lie algebra in the operator algebra, whose generators are fermion bilinears \(T(\mathbf{x},\mathbf{p})\), whose structure constants can be read off from the commutation relations, \[\begin{split}[T(\mathbf{q},\mathbf{y}),T(\mathbf{q}^{\prime}, \mathbf{y}^{\prime})]&=2\sin\left(\frac{\mathbf{q}^{\prime}\cdot \mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime}}{2}\right)T(\mathbf{q}+\mathbf{q }^{\prime},\mathbf{y}+\mathbf{y}^{\prime})\\ &=(\mathbf{q}^{\prime}\cdot\mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{ \prime})\,T(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}+\mathbf{y}^{\prime})+ \mathcal{O}(\mathbf{q},\mathbf{y})^{3}\,.\end{split} \tag{4.16}\] The pair \((\mathbf{q},\mathbf{y})\) or its Fourier conjugate \((\mathbf{x},\mathbf{p})\) can be thought of as a Lie algebra index. Generic elements of the Lie algebra are linear combinations of the generators, \[O_{F}=\int_{\mathbf{x},\mathbf{p}}F(\mathbf{x},\mathbf{p})T(\mathbf{x}, \mathbf{p})\,, \tag{4.17}\] characterized by functions \(F({\bf x},{\bf p})\). 
The commutator of two such functions specifies the Lie bracket, \[[O_{F},O_{G}]=O_{\{\!\{F,G\}\!\}}=O_{\{F,G\}}+{\cal O}(\nabla_{\bf x},\nabla_{\bf p})^{3}\,, \tag{4.18}\] and we can succinctly define the (truncated) Lie algebra as the set of functions of \({\bf x}\) and \({\bf p}\) equipped with the Poisson bracket: \[\begin{split}\mathfrak{g}&\equiv\{F({\bf x},{\bf p})\}\,,\\ \{F,G\}&=\nabla_{\bf x}F\cdot\nabla_{\bf p}G-\nabla_{\bf p}F\cdot\nabla_{\bf x}G\,.\end{split} \tag{4.19}\] The corresponding Lie group consists of the set of exponentials \(e^{O_{F}}\) of the operators \(O_{F}\), and in the semi-classical limit takes on the interpretation of canonical transformations \(U\) of the single-particle phase space \(\mathbb{R}^{2d}\) generated by the function \(F\) viewed as a Hamiltonian. \[{\cal G}\equiv\{U=\exp F\ |\ F\in\mathfrak{g}\}\,. \tag{4.20}\] We also saw in section III.2 that the space of states was given by the dual space \(\mathfrak{g}^{*}\), whose elements are also functions \(f({\bf x},{\bf p})\) which are interpreted as quasiprobability distribution functions, which act on elements of the Lie algebra to give the average value of a single-particle observable \(F({\bf x},{\bf p})\) in the state \(f({\bf x},{\bf p})\). \[\begin{split}\mathfrak{g}^{*}&\equiv\{f({\bf x},{\bf p})\}\,,\\ \langle f,F\rangle&\equiv\int_{{\bf x},{\bf p}}F({\bf x},{\bf p})f({\bf x},{\bf p})\,.\end{split} \tag{4.21}\] \(\mathfrak{g}^{*}\) is the effective phase space for Fermi liquids, and we need to define a Hamiltonian and a Poisson structure on it to get an equation of motion. In order to do so, let us first define the action of the Lie group and Lie algebra on the Lie algebra and its dual space.

#### iv.1.1 Adjoint and coadjoint representations

The Lie bracket furnishes a natural action of the Lie algebra on itself, known as the Lie algebra adjoint action: \[\begin{split}\operatorname{ad}_{F}\ :\ \mathfrak{g}\to\mathfrak{g}\,,\\ \operatorname{ad}_{F}& G\equiv\{F,G\}\,.\end{split} \tag{4.22}\] This can be exponentiated to obtain an action of the Lie group on the Lie algebra, called the Lie group adjoint action: \[\begin{split}\operatorname{Ad}_{U}\ :\ \mathfrak{g}\to\mathfrak{g}\,,\\ \operatorname{Ad}_{U=\exp F}& G\equiv UGU^{-1}\equiv e^{\operatorname{ad}_{F}}G=G+\{F,G\}+\frac{1}{2!}\{F,\{F,G\}\}+\ldots\ .\end{split} \tag{4.23}\] We will often use \(UGU^{-1}\) as alternate notation for the adjoint action to make it clear that intuition from quantum mechanics (and matrix Lie groups) applies more or less straightforwardly to our case as well. The action of the Lie group and Lie algebra on the Lie algebra is called the adjoint representation. From the above, we can also define the action of the Lie algebra and Lie group on the dual space \(\mathfrak{g}^{*}\), known as the coadjoint actions: \[\begin{split}\mathrm{ad}_{F}^{*},\;\mathrm{Ad}_{U}^{*}&:\;\mathfrak{g}^{*}\rightarrow\mathfrak{g}^{*}\,,\\ \mathrm{ad}_{F}^{*}f&\equiv\{F,f\}\,,\\ \mathrm{Ad}_{U=\exp F}^{*}f\equiv UfU^{-1}&\equiv\mathrm{e}^{\mathrm{ad}_{F}^{*}}f&=f+\{F,f\}+\frac{1}{2!}\{F,\{F,f\}\}+\ldots\quad.\end{split} \tag{4.24}\] Together these define the coadjoint representation.

#### iv.2.2 Lie-Poisson structure and Hamiltonian

Next, we need a Poisson structure for functionals of \(\mathfrak{g}^{*}\). This requires a bilinear map that takes in two functionals \(\mathscr{F}[f]\) and \(\mathscr{G}[f]\), and spits out a third functional \(\mathscr{H}[f]\) in a way consistent with the product rule as well as with the Jacobi identity.
Such a structure is provided by the Lie-Poisson bracket, defined as follows: \[\{\mathscr{F},\mathscr{G}\}_{\mathrm{LP}}[f]\equiv\left\langle f,\{\delta \mathscr{F}|_{f},\delta\mathscr{G}|_{f}\}_{\mathrm{Poisson}}\right\rangle\,. \tag{4.25}\] The above formula is dense, so let us unpack it in a few sentences. \(\mathfrak{g}^{*}\) is a vector space. A typical point in this vector space is the function \(f(\mathbf{x},\mathbf{p})\). Being a vector space, the tangent space \(T_{f}\mathfrak{g}^{*}\) to \(\mathfrak{g}^{*}\) at the point \(f\) is isomorphic to \(\mathfrak{g}^{*}\). Therefore any tangent vector at a point in \(\mathfrak{g}^{*}\) can be equivalently thought of as an element of \(\mathfrak{g}^{*}\). Analogously, the cotangent space \(T_{f}^{*}\mathfrak{g}^{*}\) to \(\mathfrak{g}^{*}\) at the point \(f\) is isomorphic to the space \(\mathfrak{g}^{**}\cong\mathfrak{g}\) that is dual to \(\mathfrak{g}^{*}\), which is just the Lie algebra. So cotangent vectors at a point are elements of \(\mathfrak{g}\). The variation \(\delta\equiv\delta/\delta f\) of a functional \(\mathscr{F}\) is an exterior derivative of a function of \(\mathfrak{g}^{*}\). Therefore \(\delta\mathscr{F}\) is a cotangent field on \(\mathfrak{g}^{*}\). Its value \(\delta\mathscr{F}|_{f}\) at the point \(f\), being a cotangent vector, is an element of the Lie algebra. The same holds for \(\delta\mathscr{G}|_{f}\). Since these are both elements of the Lie algebra, i.e., functions of \((\mathbf{x},\mathbf{p})\), we can take their Lie bracket, which in our case is the Poisson bracket. The resulting function, when paired with \(f\) using our inner product, gives us the value of the Lie-Poisson bracket functional \(\{\mathscr{F},\mathscr{G}\}_{\mathrm{LP}}\) evaluated at the point \(f\). That the Lie-Poisson bracket obeys the product rule and Jacobi identity follows from the fact that the Poisson bracket itself obeys both. All that remains is to construct a Hamiltonian functional \(H[f]\). Instead of deriving this from the microscopic Hamiltonian in equation (4.9), we will use effective field theory to write down a Hamiltonian in a systematic expansion. We will assume translation and rotational invariance in the continuum limit, even though the requirement of rotational invariance can be relaxed further to account for materials with more complicated electronic Fermi surfaces. Our Hamiltonian will take the form of a double expansion, one in nonlinearities in \(f(\mathbf{x},\mathbf{p})\), and the other in spatial derivatives. The latter will be justified by the semi-classical limit (4.14), since derivatives must be suppressed by the Fermi momentum. To justify the former, we must organize our Hamiltonian in a polynomial expansion in fluctuations around the ground state, \[f_{0}(\mathbf{p})=\Theta(p_{F}-|\mathbf{p}|)\,. 
\tag{4.26}\] Defining fluctuations around this reference state as \[\delta f(\mathbf{x},\mathbf{p})\equiv f(\mathbf{x},\mathbf{p})-f_{0}(\mathbf{p})\,, \tag{4.27}\] we can write the most general effective Hamiltonian as follows \[H[f] =\int_{\mathbf{x}\mathbf{p}}\epsilon(\mathbf{p})f(\mathbf{x},\mathbf{p}) \tag{4.28}\] \[+\frac{1}{2}\int_{\mathbf{x}\mathbf{p}\mathbf{p}^{\prime}}F^{(2,0)}(\mathbf{p},\mathbf{p}^{\prime})\delta f(\mathbf{x},\mathbf{p})\delta f(\mathbf{x},\mathbf{p}^{\prime})+\mathbf{F}^{(2,1)}(\mathbf{p},\mathbf{p}^{\prime})\cdot\left(\frac{\nabla_{\mathbf{x}}}{p_{F}}\delta f(\mathbf{x},\mathbf{p})\right)\delta f(\mathbf{x},\mathbf{p}^{\prime})+\ldots\] \[+\frac{1}{3}\int_{\mathbf{x}\mathbf{p}\mathbf{p}^{\prime}\mathbf{p}^{\prime\prime}}F^{(3,0)}(\mathbf{p},\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime})\delta f(\mathbf{x},\mathbf{p})\delta f(\mathbf{x},\mathbf{p}^{\prime})\delta f(\mathbf{x},\mathbf{p}^{\prime\prime})+\ldots\] \[+\ \ldots\ \ .\] In the above, \(\epsilon(\mathbf{p})\) is the free fermion dispersion relation and the various coefficient functions \(F^{(m,n)}\) parametrize interactions. In our notation, the \(m\)-index of \(F^{(m,n)}\) labels the nonlinearity of the interaction, while the \(n\)-index labels the number of \(\mathbf{x}\)-derivatives in that coupling. Of course, there can be multiple independent terms of order \((m,n)\), in which case additional indices are required to distinguish their coefficient functions. The various couplings \((\epsilon,F^{(m,n)})\) are functional analogues of Wilson coefficients in an effective field theory, and we will often refer to them as Wilson coefficients by a slight abuse of terminology, or Wilson coefficient functions if we want to be precise.

#### iv.1.3 Equation of motion

Armed with the Lie-Poisson structure (4.25) as well as the Hamiltonian (4.28), we can write down Hamilton's equation of motion for our system on \(\mathfrak{g}^{*}\), \[\partial_{t}f=\{f,H\}_{\mathrm{LP}}[f]\,. \tag{4.29}\] The Lie-Poisson bracket can be evaluated from its definition in terms of the Poisson bracket, by using the fact that \(\delta f({\bf x},{\bf p})/\delta f({\bf x}^{\prime},{\bf p}^{\prime})=\delta({\bf x}-{\bf x}^{\prime})\delta({\bf p}-{\bf p}^{\prime})\) and integrating by parts, to obtain \[\partial_{t}f(t,{\bf x},{\bf p})+\left\{f(t,{\bf x},{\bf p}),\frac{\delta H}{\delta f(t,{\bf x},{\bf p})}\right\}_{\rm Poisson}=0\,. \tag{4.30}\] The variation of the Hamiltonian can be calculated straightforwardly, and defines the quasiparticle dispersion relation, \[\epsilon_{\rm qp}[f]\equiv\frac{\delta H}{\delta f}=\epsilon({\bf p})+\int_{{\bf p}^{\prime}}F^{(2,0)}({\bf p},{\bf p}^{\prime})\delta f(t,{\bf x},{\bf p}^{\prime})+\dots\quad, \tag{4.31}\] in terms of which the equation of motion turns into Landau's kinetic equation (6): \[\partial_{t}f+\nabla_{\bf p}\epsilon_{\rm qp}[f]\cdot\nabla_{\bf x}f-\nabla_{\bf x}\epsilon_{\rm qp}[f]\cdot\nabla_{\bf p}f=0\,. \tag{4.32}\] We see that \(F^{(2,0)}({\bf p},{\bf p}^{\prime})\) is simply Landau's interaction function, but we also find an infinite series of higher order corrections to the quasiparticle energy. The study of the algebra of fermion bilinears, paired with EFT philosophy, hence provides a formalism that captures LFLT as well as higher derivative corrections to LFLT in a systematic expansion.
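As an entirely illustrative companion to the kinetic equation (4.32), the sketch below integrates it numerically in one spatial dimension by operator splitting. The quasiparticle energy used here, \(\epsilon_{\rm qp}=p^{2}/2m+g\,n(x)\) with \(n(x)=\int\frac{dp}{2\pi}f\), is a deliberately crude, momentum-independent stand-in for the Landau interaction term in (4.31), and every parameter below (grid sizes, coupling \(g\), initial profile) is an arbitrary choice made for the demonstration rather than anything taken from the source.

```python
import numpy as np

# Sketch: integrate  d_t f + d_p(eps_qp) d_x f - d_x(eps_qp) d_p f = 0  in 1+1 dimensions
# by Strang splitting, with eps_qp = p^2/(2m) + g*n(x) as a toy quasiparticle energy.

Nx, Np, L, Pmax = 128, 256, 2*np.pi, 4.0
x = np.linspace(0.0, L, Nx, endpoint=False)
p = np.linspace(-Pmax, Pmax, Np, endpoint=False)
X, P = np.meshgrid(x, p, indexing='ij')
m, g, pF, width, dt, steps = 1.0, 0.3, 1.0, 0.05, 2e-3, 500

# initial state: a slightly smoothed Fermi sea whose local Fermi momentum is modulated in x
pF_local = pF + 0.05*np.cos(X)
f = 0.5*(1.0 - np.tanh((np.abs(P) - pF_local)/width))
N0 = f.sum()

kx = 2*np.pi*np.fft.fftfreq(Nx, d=L/Nx)

def advect_x(f, dt):
    """Exact free streaming d_t f + (p/m) d_x f = 0, done row-by-row in Fourier space."""
    fk = np.fft.fft(f, axis=0)
    return np.real(np.fft.ifft(fk*np.exp(-1j*np.outer(kx, p)*dt/m), axis=0))

def advect_p(f, dt):
    """Semi-Lagrangian step for the force term, with dp/dt = -d_x eps_qp = -g dn/dx."""
    n = np.trapz(f, p, axis=1)/(2*np.pi)
    force = -g*np.gradient(n, x)
    out = np.empty_like(f)
    for i in range(Nx):
        out[i] = np.interp(p - force[i]*dt, p, f[i])   # trace characteristics back in p
    return out

for _ in range(steps):
    f = advect_x(f, dt/2)
    f = advect_p(f, dt)
    f = advect_x(f, dt/2)

print("relative particle-number drift:", abs(f.sum() - N0)/N0)
```

The final line is a crude check that the evolution approximately conserves the total particle number, as the collisionless dynamics should.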
Note that the formalism and the equation of motion themselves apply generally to any state \(f({\bf x},{\bf p})\), irrespective of whether it describes the excitations of a Fermi surface at zero temperature. The only place that the Fermi surface has entered in this discussion so far is in justifying the series expansion of the Hamiltonian (4.28). For other systems, a different choice of Hamiltonian should suffice, as long as time evolution in such a system can be described by canonical transformations.

#### iv.2.4 An alternate route to the Hamiltonian formalism

An alternate way to arrive at the Hamiltonian formalism described in this section, without relying on the algebra of fermion bilinears, is the following: Landau's kinetic equation is simply a non-linear modification of the collisionless Boltzmann equation. Time evolution as determined by the collisionless Boltzmann equation not only preserves volume in the single-particle phase space, as shown by Liouville's theorem, but also preserves the symplectic form (or equivalently Poisson brackets) in the single-particle phase space. This implies that any solution \(f(t,{\bf x},{\bf p})\) to the collisionless Boltzmann equation can be described as the action of a one-parameter family of canonical transformations, parametrized by \(t\), acting on the initial state \(f(t=0,{\bf x},{\bf p})\). The dynamical system described by the collisionless Boltzmann equation is hence equivalent to a dynamical system on the Lie group of canonical transformations, since the solutions to the equations of motion are simply curves on the group manifold. The method described in the above section is a well-established method to formulate dynamical systems on Lie groups [54; 58], and hence automatically applies to our case [59]. This formalism requires a prescribed Hamiltonian to describe time evolution, and the most natural one is the double expansion (4.28). As we have already seen, this immediately gives us LFLT as the equation of motion. In [1], this was the perspective that was primarily presented in the main body, with the connection to fermion bilinears being relegated to the appendices. In this section, we have instead surveyed in detail the more microscopic approach to constructing the Hamiltonian, with the aim to clarify the connection to microscopics as well as expound upon what approximations and assumptions are required at the microscopic level in order to obtain this effective description. While we have largely appealed to EFT philosophy in order to construct the effective Hamiltonian (4.28), it remains to be seen whether it is possible to derive the effective Hamiltonian for certain classes of microscopic Hamiltonians, such as the ones in equation (4.9), using the properties of the fermion bilinear algebra.

## Effective action from the coadjoint orbit method

The second step towards obtaining an action description for Fermi liquids is to Legendre transform the Hamiltonian. Let us briefly describe how this is usually achieved for a Hamiltonian system on a general phase space manifold \(\Gamma\), equipped with some choice of Poisson brackets. Defining \(\partial_{I}\) as a derivative on the phase space manifold, the Poisson bracket of two functions \(F\) and \(G\) on \(\Gamma\) can always be locally written in the following way: \[\{F,G\}=\Pi^{IJ}\partial_{I}F\partial_{J}G\,, \tag{5.1}\] where \(\Pi^{IJ}\) is an anti-symmetric rank 2 tensor on \(\Gamma\), known as the Poisson bi-vector.
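For orientation (a standard textbook example, not specific to our construction): for a single canonical pair \(\phi^{I}=(q,p)\) with \(\{q,p\}=1\), the Poisson bi-vector and its matrix inverse are simply
\[\Pi^{IJ}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\qquad(\Pi^{-1})_{IJ}=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\]
so the inverse is the canonical symplectic form \(dp\wedge dq\), and the general construction below reproduces the familiar \(\int dt\,p\dot{q}\) kinetic term up to an overall sign convention.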
To switch from a Hamiltonian formalism to an action formalism, we invert the Poisson bivector to obtain a closed, anti-symmetric, non-degenerate symplectic form: \[\omega=\Pi^{-1}\,,\qquad\omega_{IJ}\Pi^{JK}=\delta^{K}_{I} \tag{5.2}\] The symplectic form allows us to write down a '\(p\dot{q}\)' term in the following way: introduce an extra dimension \(s\in[0,1]\) in addition to time \(t\) so that \(s=1\) corresponds to physical time, and use boundary conditions in \(s\) so that all degrees of freedom vanish at \(s=0\). Let \(\phi^{I}\) be coordinates on phase space, i.e., the phase space degree of freedom. The \(p\dot{q}\) term is then given by \[\int dt\int_{0}^{1}ds\ \omega(\partial_{t}\phi,\partial_{s}\phi)=\int dt\int_{0 }^{1}ds\ \omega_{IJ}\partial_{t}\phi^{I}\partial_{s}\phi^{J}\,, \tag{5.3}\] with an additional spatial integral involved if \(\phi^{I}\) are fields in space10. The Legendre transform of the Hamiltonian \(H[\phi]\) is then Footnote 10: The symplectic form is closed (\(d\omega=0\)) by definition, or as a consequence of the Jacobi identity for the Poisson bracket. This implies that the \(\mathbf{p}\dot{q}\) term is independent of the choice of “bulk” extension. \[S=\int dt\int_{0}^{1}ds\ \omega(\partial_{t}\phi,\partial_{s}\phi)-\int dtH[ \phi]\,. \tag{5.4}\] This entire construction relies on the ability to invert the Poisson bi-vector. However, this invertibility is, in general, not guaranteed by the definition of the Poisson bracket, and when it is not, we cannot find an action that gives the same equation of motion without changing the phase space either by finding a description in terms of different degrees of freedom or by eliminating redundant ones. This is the case for the Hamiltonian formalism described in section IV, so the Legendre transformation is not as straightforward as we could have hoped for. Before describing the remedy for this barrier, let us first revisit the microscopic description of the space of states from section III.2. ### Fermi surface states and their excitations To recap the discussion in section III.2, the space of states \(\mathfrak{g}^{*}\) is given by the vector space dual to the algebra of fermion bilinears. These are equivalence classes of density matrices that cannot be distinguished by the expectation values of fermion bilinears. A typical representative of such an equivalence class is characterized by the distribution function \(f(\mathbf{x},\mathbf{p})\) in the following way: \[\rho_{f}=\int_{\mathbf{x},\mathbf{p}}f(\mathbf{x},\mathbf{p})W(\mathbf{x}, \mathbf{p})\,, \tag{5.5}\] where \(W(\mathbf{x},\mathbf{p})\) is the basis dual to \(T(\mathbf{x},\mathbf{p})\), defined by \[\mathrm{Tr}[W(\mathbf{x},\mathbf{p})T(\mathbf{x}^{\prime},\mathbf{p}^{\prime} )]=\delta(\mathbf{x}-\mathbf{x}^{\prime})(2\pi)^{d}\delta(\mathbf{p}-\mathbf{ p}^{\prime})\,. \tag{5.6}\] The expectation value of a general operator \(O_{F}=\int_{\mathbf{x}\mathbf{p}}F(\mathbf{x},\mathbf{p})T(\mathbf{x}, \mathbf{p})\) in the state \(\rho_{f}\) can be written as \[\langle O_{F}\rangle_{\rho_{f}}=\mathrm{Tr}[\rho_{f}O_{F}]=\int_{\mathbf{x}, \mathbf{p}}F(\mathbf{x},\mathbf{p})f(\mathbf{x},\mathbf{p})=\langle f,F \rangle\,, \tag{5.7}\] and the distribution function \(f_{\rho}(\mathbf{x},\mathbf{p})\) that represents any given state \(\rho\) itself can be obtained from the state as \[f(\mathbf{x},\mathbf{p})=\left\langle T(\mathbf{x},\mathbf{p})\right\rangle_{ \rho}\,. 
\tag{5.8}\] Of course, this is generically true for any (pure or mixed) state, not just states with a Fermi surface. The distribution functions corresponding to these correspond to a subset of \(\mathfrak{g}^{*}\). Consider, for instance, a spherical Fermi surface with Fermi momentum \(p_{F}\). The state that describes is a pure state obtained by filling every momentum within the spherical Fermi surface with a fermion, \[|\mathrm{FS}\rangle=\prod_{|\mathbf{k}|\leq p_{F}}\psi^{\dagger}(\mathbf{k}) \,|0\rangle\,\,, \tag{5.9}\] where \(|0\rangle\) is the vacuum. It is straightforward to show using fermion anticommutation relations that \[f_{0}(\mathbf{p})=\langle\mathrm{FS}|T(\mathbf{x},\mathbf{p})|\mathrm{FS} \rangle=\frac{1}{2}\mathrm{sign}(p_{F}-|\mathbf{p}|)\,. \tag{5.10}\] For later convenience, let us define instead the distribution function of a state as \[f(\mathbf{x},\mathbf{p})=\langle T(\mathbf{x},\mathbf{p})\rangle_{\rho}+\frac {1}{2}\,, \tag{5.11}\] so that \[f_{0}(\mathbf{p})=\Theta(p_{F}-|\mathbf{p}|)\,, \tag{5.12}\] is the occupation number function for a spherical Fermi surface11. This shift also ensures that the integral used to define the pairing \(\langle f,F\rangle\) converges for states with a sharp Fermi surface, since the domain of integration is effectively bounded in momentum space. Excitations on top of the Fermi surface take the form of particle-hole pairs, which are created by the action of fermion bilinears on \(|{\rm FS}\rangle\). A state with a single particle hole excitation is then given by \[|{\bf k}_{1};{\bf k}_{2}\rangle\equiv\psi^{\dagger}({\bf k}_{1})\psi(-{\bf k}_{2 })\,|{\rm FS}\rangle. \tag{5.13}\] Fermion anticommutation relations ensure that this state is different from \(|{\rm FS}\rangle\) only if \({\bf k}_{1}\notin{\rm FS}\) and \({\bf k}_{2}\in{\rm FS}\). Antisymmetrizing over the particle and the hole to regulate the coincidence singularity \({\bf k}_{1}\rightarrow{\bf k}_{2}\), and Wigner transforming allows us to write such states in an alternate basis: \[|{\bf x};{\bf p}\rangle\equiv T({\bf x},{\bf p})\,|{\rm FS}\rangle. \tag{5.14}\] In the semi-classical limit, where \(|\nabla_{\bf x}|\ll{\bf p}\sim p_{F}\), the state \(|{\bf x};{\bf p}\rangle\) is interpreted as a particle hole pair created at the point \({\bf p}\) on the Fermi surface, locally in a mesoscopic region of size \(1/p_{F}\) at the position labelled by the spatial coordinate \({\bf x}\). The momentum \({\bf p}\) has no relation to the net momentum \({\bf q}\) of the particle-hole pair, and only labels on which 'patch' of the Fermi surface the particle-hole pair lives. Another equivalent basis that will be more convenient for us is that of coherent states defined as \[|F({\bf x},{\bf p})\rangle\equiv e^{\int_{{\bf x}{\bf p}}F({\bf x},{\bf p})T({ \bf x},{\bf p})}\,|{\rm FS}\rangle\, \tag{5.15}\] whose distribution function is given by the following: \[f_{F}({\bf x},{\bf p})=f_{0}({\bf p})+\{\!\{F,f_{0}\}\!\}+\frac{1}{2!}\{\!\{F, \{\!\{F,f_{0}\}\!\}\!\}\!\}+\ldots\ . \tag{5.16}\] This is just the coadjoint action of \(F({\bf x},{\bf p})\) on \(f_{0}({\bf p})\) in the Moyal algebra! The set of unitary operators \(U_{F}=e^{\int FT}\) form the corresponding group and we find that particle-hole coherent states of a Fermi surface is obtained by the action of all possible group transformations on the spherical Fermi surface. 
This applies to the parametrization of the states in terms of their distribution functions as well, in that the distribution function for a particle-hole coherent state is obtained by acting on the spherical Fermi surface distribution with a group transformation. In the semi-classical limit, the Moyal brackets are replaced by Poisson brackets and the semi-classical distribution function for a coherent state is given by \[f_{F}={\rm Ad}^{*}_{\exp F}f_{0}=f_{0}+\{F,f_{0}\}+\frac{1}{2!}\{F,\{F,f_{0}\}\}+\ldots\ \, \tag{5.17}\] which is interpreted as the action of the canonical transformation \(U=\exp F\) on the spherical Fermi surface state. An intuitive picture for this is the following: take all the points within the Fermi surface. The canonical transformation \(U\) maps each one of these to a new point. Being a smooth coordinate transformation, this preserves the proximity of points and transforms the initial spherical swarm of points into a new shape that is topologically equivalent to a filled sphere (see figure 8).

Figure 8: Fermi surface states from canonical transformations.

The precise shape of the boundary of this region can be parametrized by a function \(p_{F}(\mathbf{x},\theta)\), where \(\theta\) are angular coordinates in momentum space. We then have \[f_{F}(\mathbf{x},\mathbf{p})=\Theta(p_{F}(\mathbf{x},\theta)-|\mathbf{p}|)\,, \tag{5.18}\] which is entirely characterized by a shape in phase space. The space of states for particle-hole excitations is then just the space of closed surfaces in phase space [16]. This space of states is described mathematically by what is called a coadjoint orbit, which we define below.

#### 5.2.1 Coadjoint orbits and the Kirillov-Kostant-Souriau form

As we saw above, the space of states relevant for zero temperature Fermi surface physics is not all of \(\mathfrak{g}^{*}\), but a subset of it consisting of functions that take values 1 or 0 separated by a closed surface. This restriction is formally achieved by picking a reference state, \(f_{0}(\mathbf{p})\) in our case, and acting on it via all possible canonical transformations. Canonical transformations act on \(\mathfrak{g}^{*}\) via the coadjoint action, so the set generated from this procedure is known as the coadjoint orbit of \(f_{0}\): \[\mathcal{O}_{f_{0}}\equiv\{f=\mathrm{Ad}_{U}^{*}f_{0}\in\mathfrak{g}^{*}\ |\ U\in\mathcal{G}\}\,. \tag{5.19}\] Two different canonical transformations acting on the same reference state can indeed generate the same element of the coadjoint orbit, owing to the fact that there is a nontrivial subgroup that leaves \(f_{0}\) invariant, called the stabilizer subgroup of \(f_{0}\), which we will denote by \(\mathcal{H}\). \[\begin{split}\mathcal{H}&\equiv\{V\in\mathcal{G}\ |\ \mathrm{Ad}_{V}^{*}f_{0}=f_{0}\}\\ &=\{V=\exp\alpha\ |\ \alpha\in\mathfrak{g},\ \mathrm{ad}_{\alpha}^{*}f_{0}=0\}\,.\end{split} \tag{5.20}\] So the canonical transformations \(U\) and \(UV\) create the same state from \(f_{0}\), since \[\mathrm{Ad}_{UV}^{*}f_{0}=UVf_{0}(UV)^{-1}=U(Vf_{0}V^{-1})U^{-1}=\mathrm{Ad}_{U}^{*}f_{0}\,. \tag{5.21}\] Each state \(f\) in the coadjoint orbit is hence represented by a left coset \(U\mathcal{H}\), and the coadjoint orbit is then the left coset space, \[\mathcal{O}_{f_{0}}\cong\mathcal{G}/\mathcal{H}\,. \tag{5.22}\] Since every element of the coadjoint orbit is related to every other by canonical transformations, we find an important result for time evolution under any Hamiltonian \(H[f]\).
Infinitesimal time evolution occurs by the action of the infinitesimal canonical transformation \(\delta H|_{f}\in\mathfrak{g}\), while finite time evolution occurs by exponentiating the sequence of infinitesimal canonical transformations, which itself is a canonical transformation. Therefore, time evolution takes an initial state to another state in the _same coadjoint orbit_ as the initial state. The coadjoint orbit \(\mathcal{O}_{f_{0}}\) is hence preserved by time evolution, and can hence be thought of as a reduced phase space for Fermi liquids. The Hamiltonian and Lie-Poisson structure can both be restricted to the coadjoint orbit with complete consistency, and the entire Hamiltonian formalism can be defined solely for \(\mathcal{O}_{f_{0}}\) instead of all of \(\mathfrak{g}^{*}\). Unlike the Lie-Poisson structure for \(\mathfrak{g}^{*}\), however, the Lie-Poisson structure restricted to \(\mathcal{O}_{f_{0}}\) is invertible, and permits the definition of a closed, non-degenerate symplectic form, known as the Kirillov-Kostant-Souriau (KKS) form. Being a 2-form, it is defined by its action on a pair of vectors tangent to the coadjoint orbit at any given point. Consider the point \(f\in\mathcal{O}_{f_{0}}\). Since the coadjoint orbit is a submanifold of \(\mathfrak{g}^{*}\), the tangent space \(T_{f}\mathcal{O}_{f_{0}}\) to \(\mathcal{O}_{f_{0}}\) at the point \(f\) is a subspace of the tangent space \(T_{f}\mathfrak{g}^{*}\) to \(\mathfrak{g}^{*}\). Tangent vectors of \(\mathfrak{g}^{*}\) can be thought of as elements of \(\mathfrak{g}^{*}\), so defining the KKS form amounts to defining its action \(\omega_{\mathrm{KKS}}(g,k)\) on any two arbitrary functions \(g,k\in\mathfrak{g}^{*}\) which are tangent to \(\mathcal{O}_{f_{0}}\). It can be shown that the tangents \(g\) and \(k\) at the point \(f\) can be obtained from the coadjoint action of two Lie algebra elements \(G,K\in\mathfrak{g}\) on \(f\) (see, for instance, [58]), i.e., \[\mathrm{ad}_{G}^{*}f=g,\qquad\mathrm{ad}_{K}^{*}f=k\,. \tag{5.23}\] \(G\) and \(K\) are not uniquely determined by \(g\) and \(k\) respectively, but rather representatives of equivalence classes of Lie algebra elements. The KKS form is then defined in terms of \(G\) and \(K\) as follows: \[\omega_{\mathrm{KKS}}(g,k)\equiv\langle f,\{G,K\}_{\mathrm{Poisson}}\rangle. \tag{5.24}\] The pairing of the Poisson bracket with \(f\) makes it clear that any other choice of representative of the equivalence classes of \(G\) and \(K\) respectively gives the same answer, using the fact that if \(G\) and \(G^{\prime}\) are two elements of the same equivalence class, then \(\mathrm{ad}^{*}_{G-G^{\prime}}f=0\). To show that the KKS form is closed, note that the differential \(d\omega_{\mathrm{KKS}}\) acts on three instead of two tangents, and it is not difficult to show that \[d\omega_{\mathrm{KKS}}(g,k,l)=\langle f,\{\{G,K\},L\}\rangle+\text{cyclic permutations}\,, \tag{5.25}\] where \(L\in\mathfrak{g}\) is such that \(\mathrm{ad}^{*}_{L}f=l\in\mathfrak{g}^{*}\). The right hand side then vanishes due to the Jacobi identity. 
Armed with the Kirillov form, we can formally write down an action for Fermi liquids in terms of the field \(f\in\mathcal{O}_{f_{0}}\), which looks like \[\begin{split} S_{\mathrm{FL}}[f]&=S_{\mathrm{WZW}}[ f]-\int dt\ H[f]\,,\\ S_{\mathrm{WZW}}[f]&=\int dt\int_{0}^{1}ds\ \omega_{\mathrm{KKS}}\left( \partial_{t}f,\partial_{s}f\right)\,,\end{split} \tag{5.26}\] where \(S_{\mathrm{WZW}}\) is the Wess-Zumino-Witten (WZW) term, \(H[f]\) is the Hamiltonian in equation (4.28), and \(f\) obeys the following boundary conditions on the \((t,s)\)-strip: \[f(t,s=1)=f(t)\,,\qquad f(t,s=0)=0\,. \tag{5.27}\] ### The Wess-Zumino-Witten term and the effective action The action (5.26), while exact (in the semi-classical limit corrected by the derivative expansion) is written in a rather formal way that cannot really be used for calculations. In order to make it more useful, we need to find a convenient parametrization of the coadjoint orbit. The simplest one is obtained directly from the definition of the orbit, i.e., by acting on the reference state \(f_{0}\) by all possible canonical transformations, generated by the field \(-\phi(\mathbf{x},\mathbf{p})\in\mathfrak{g}\). In this parametrization, the field \(\phi(\mathbf{x},\mathbf{p})\) is our degree of freedom. The minus sign is conventional and chosen for later convenience. Elements \(f(\mathbf{x},\mathbf{p})\) of the coadjoint orbit can be parametrized as follows: \[\begin{split} f_{\phi}(\mathbf{x},\mathbf{p})=\mathrm{Ad}^{*}_{ \mathrm{exp}(-\phi)}f_{0}&=f_{0}+\{\phi,f_{0}\}+\frac{1}{2!}\{ \phi,\{\phi,f_{0}\}\}+\ldots\\ &=\Theta(p_{F}-|\mathbf{p}|)+(\mathbf{n}_{\theta}\cdot\nabla_{ \mathbf{x}}\phi)\delta(|\mathbf{p}|-p_{F})+\ldots\quad,\end{split} \tag{5.28}\] where \(\mathbf{n}_{\theta}\) is the unit normal to the spherical Fermi surface at the angular coordinates \(\theta\) in momentum space. The stabilizer \(\mathcal{H}\) of \(f_{0}\) can be described by its Lie subalgebra \(\mathfrak{h}\) which corresponds to functions \(\alpha(\mathbf{x},\mathbf{p})\in\mathfrak{g}\) that obey the following condition: \[\begin{split}\mathrm{ad}^{*}_{\alpha}f_{0}&=\{\alpha,f_{0}\}=0\,,\\ \implies(\mathbf{n}_{\theta}\cdot\nabla_{\mathbf{x}}\alpha)|_{| \mathbf{p}|=p_{F}}&=0\,.\end{split} \tag{5.29}\] Consequently, the canonical transformation \(\exp\alpha\) leaves \(f_{0}\) invariant, \[\text{Ad}^{*}_{\exp\alpha}f_{0}=e^{\text{ad}^{*}_{\alpha}}f_{0}=f_{0}\,. \tag{5.30}\] The equivalence \(U\simeq UV\) then leads to an equivalence relation for \(\phi\), \[\begin{split}\exp(-\phi)&\simeq\exp(-\phi)\exp( \alpha)\,,\\ \implies\phi&\simeq\phi-\alpha+\frac{1}{2}\{\phi, \alpha\}+\dots\quad,\end{split} \tag{5.31}\] which allows us to "gauge fix" \(\phi\) to be independent of the radial momentum coordinate, \[\phi=\phi(\mathbf{x},\theta)\,, \tag{5.32}\] where \(\theta\) are angular coordinates in momentum space. A suitable choice of \(\alpha\) that achieves this, for example, at leading order in the transformation (5.31), is \[\alpha_{\phi}(\mathbf{x},\mathbf{p})=\phi(\mathbf{x},\mathbf{p})-\phi(\mathbf{ x},\theta)|_{|\mathbf{p}|=p_{F}}\,. \tag{5.33}\] It is easy to check that \(\{\alpha_{\phi},f_{0}\}=0\). While we use a \(|\mathbf{p}|\)-independent parametrization of our degree of freedom for convenience, any other choice is equally valid and will result in the same physical quantities, with the various choices being related by field redefinitions. What remains is to write down the WZW term in terms of this field to obtain an action description for Fermi liquids. 
The definition of the KKS form requires that we find functions \(G\) and \(K\) such that \[\text{ad}^{*}_{G}f_{\phi}=\partial_{t}f\,,\qquad\text{ad}^{*}_{K}f_{\phi}= \partial_{s}f\,. \tag{5.34}\] Using the fact that \(f_{\phi}=\text{Ad}^{*}_{U}f_{0}=Uf_{0}U^{-1}\) where \(U=\exp(-\phi)\), we can show that the required functions are12 Footnote 12: The simplest way to do this is to pretend that \(U,f_{0},f\) are all matrices, replace all Poisson brackets with matrix commutators, simplify the expressions and finally replace all commutators back with Poisson brackets. \[G=\partial_{t}UU^{-1}\,,\qquad K=\partial_{s}UU^{-1}\,, \tag{5.35}\] so that the KKS form evaluates to \[\begin{split}\omega_{\text{KKS}}(\partial_{t}f,\partial_{s}f)& =\left\langle f,\{\partial_{t}UU^{-1},\partial_{s}UU^{-1}\} \right\rangle\\ &=\left\langle f_{0},\{U^{-1}\partial_{t}U,U^{-1}\partial_{s}U \}\right\rangle\,,\end{split} \tag{5.36}\] with boundary conditions \(\phi(t,s=1)=\phi(t)\) and \(\phi(t,s=0)=0\). The above expression can be simplified to a sum of total \(s\)- and \(t\)-derivatives, which allows us the write the WZW term as \[S_{\text{WZW}}=\int dt\ \langle f_{0},U^{-1}\partial_{t}U\rangle\,. \tag{5.37}\] This is a subtle point, since it suggests that the KKS form is necessarily exact, which is not true generally for a Lie group, especially for a coadjoint orbit with non-trivial topology. Since the group of canonical transformations is a diffeomorphism group, the topology of its coadjoint orbits is unknown, and it is unclear whether the KKS form on the coadjoint orbit \(\mathcal{O}_{f_{0}}\) is exact or not. The expression (5.36), on the other hand, is exact, owing to the fact that we are describing a generic point \(f\) on the coadjoint orbit as a canonical transformation \(U\) acting on the reference state \(f_{0}\). Furthermore, we are restricting ourselves to canonical transformations that are connected to the identity by expressing \(U\) as the exponent of a Lie algebra element \(-\phi\). This parametrization of the coadjoint orbit is hence incomplete, and only captures the largest possible patch of the coadjoint orbit around \(f_{0}\), missing out on information about disconnected components of the orbit as well as the global topology of the component containing \(f_{0}\). This choice of parametrization suffices, however, to describe a perturbative expansion around the reference state \(f_{0}\), since all states accessible to such a perturbative expansion necessarily live in a patch around \(f_{0}\), making the choice of the reference state somewhat crucial for this method to work. To account for nonperturbative properties of Fermi liquids, a different parametrization of the coadjoint orbit is required, which we leave to future work. Finally, we obtain a perturbative action that describes Fermi liquids, \[S_{\text{FL}}=\int dt\ \langle f_{0},U^{-1}\partial_{t}U\rangle-\int dt\ H[f_{ \phi}=Uf_{0}U^{-1}]\,, \tag{5.38}\] with \(U=\exp(-\phi)\). The action can be expanded order by order in \(\phi\), and we will find that higher order terms are suppressed by powers of \(p_{F}\), which takes on the role of the UV cutoff of the theory. Of course, since this action is just the Legendre transformation of the Hamiltonian (4.28), the equation of motion is guaranteed to be equation (4.32). But this can also be verified directly by varying the action under \[U\to U^{\prime}=\exp\delta\phi\cdot U\,, \tag{5.39}\] with \(\delta\phi(t,\mathbf{x},\mathbf{p})\in\mathfrak{g}\). 
To linear order in \(\delta\phi\), we have \[\delta[U^{-1}\partial_{t}U]=U^{-1}(\partial_{t}\delta\phi)U\,,\qquad\delta H[Uf_{0}U^{-1}]=\langle f_{\phi},\{\epsilon_{\text{qp}}[f_{\phi}],\delta\phi\}_{\text{Poisson}}\rangle\, \tag{5.40}\] where \(\epsilon_{\text{qp}}[f]=\delta H/\delta f\) is the quasiparticle energy. This gives us the following result for the variation of the action: \[\delta S=-\int dt\left\langle\partial_{t}f_{\phi}+\{f_{\phi},\epsilon_{\text{qp}}[f_{\phi}]\}_{\text{Poisson}}\,,\delta\phi\right\rangle\,, \tag{5.41}\] from which we can read off the equation of motion, \[\partial_{t}f_{\phi}+\{f_{\phi},\epsilon_{\text{qp}}[f_{\phi}]\}=0\,, \tag{5.42}\] which is, as expected, identical to equation (4.32).

### Symmetries in the postmodern formalism

This geometric perspective for Fermi liquids is, in part, powerful because of how it encodes symmetries through the algebra of canonical transformations. We will categorize the symmetries we want to introduce into the formalism into three different groups: spacetime, gauge and internal symmetries. The last of these three requires an extension of the algebra of canonical transformations and will hence be dealt with later in section VII.1. Let us first discuss some key aspects of how symmetries act in the postmodern formalism, and focus in particular on the unintuitive consequences of the fact that the algebra of canonical transformations is in fact a diffeomorphism algebra as opposed to a global symmetry algebra. Recall that the coadjoint orbit \(\mathcal{O}_{f_{0}}\cong\mathcal{G}/\mathcal{H}\) is the left coset space of the group of canonical transformations. Therefore every state \(f\in\mathcal{O}_{f_{0}}\) corresponds to an equivalence class of canonical transformations under the equivalence relation, \[U\simeq UV\,,\quad V\in\mathcal{H}\,. \tag{5.43}\] The explicit map from \(\mathcal{G}/\mathcal{H}\) to \(\mathcal{O}_{f_{0}}\) is given by \[f_{U}\equiv Uf_{0}U^{-1}\,. \tag{5.44}\] (The discussion below equation (5.36) of the subtlety of not being able to capture every state in the coadjoint orbit does not apply here, since we are not requiring \(U\in\mathcal{G}\) to be the exponent of any Lie algebra element.) Now the group of canonical transformations \(\mathcal{G}\) can itself act on the coset in one of two different ways, called the left and right actions, respectively given by the transformations \[U\stackrel{{\text{left}}}{{\longrightarrow}}WU\,,\qquad U\stackrel{{\text{right}}}{{\longrightarrow}}UW\,,\qquad W\in\mathcal{G}\,. \tag{5.45}\] Both of these induce transformations on the coadjoint orbit as follows: \[f_{U}\stackrel{{\text{left}}}{{\longrightarrow}}Wf_{U}W^{-1}\,,\qquad f_{U}\stackrel{{\text{right}}}{{\longrightarrow}}UWf_{0}W^{-1}U^{-1}\,, \tag{5.46}\] but only the left action can be naturally and directly written as a transformation of \(\mathcal{G}\) on the coadjoint orbit, independent of the choice of reference state \(f_{0}\). Therefore symmetries must act on the coset space via the left action. The right action instead is reserved for transformations by elements \(V\) of the stabilizer \(\mathcal{H}\), resulting in a _coset redundancy_ that is a gauge symmetry of our theory (not to be confused with the gauge symmetry when we couple to background \(U(1)\) gauge fields later).
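The left/right distinction and the coset redundancy are easy to visualize in a finite-dimensional toy model, sketched below (our own analogy, not part of the source): with \(G=SO(3)\) acting on a reference vector \(f_{0}=\hat{e}_{z}\), rotations about \(\hat{e}_{z}\) play the role of the stabilizer \(\mathcal{H}\); right multiplication by such a rotation leaves the "state" \(Uf_{0}\) untouched, while a generic left multiplication moves it.

```python
import numpy as np

# Toy model of the coset redundancy: G = SO(3) acting on a reference vector f0 = e_z.
# Rotations about e_z stabilize f0, so U and UV label the same state; a generic left
# transformation W genuinely changes the state.

def rot(axis, angle):
    """Rotation matrix about `axis` by `angle` (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle)*K + (1.0 - np.cos(angle))*(K @ K)

f0 = np.array([0.0, 0.0, 1.0])     # reference state
U = rot([1.0, 2.0, 0.5], 0.7)      # analogue of a generic canonical transformation
V = rot([0.0, 0.0, 1.0], 1.3)      # stabilizer element: rotation about f0
W = rot([1.0, 0.0, 0.0], 0.4)      # a generic "symmetry" transformation

state    = U @ f0
state_UV = (U @ V) @ f0            # right action by the stabilizer
state_WU = (W @ U) @ f0            # left action by a generic element

print(np.allclose(state, state_UV))   # True:  U and UV describe the same state (coset redundancy)
print(np.allclose(state, state_WU))   # False: symmetries acting from the left move the state
```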
The WZW term is invariant under a larger gauge symmetry of all canonical transformations under the right action, since these simply pick out a different reference state to parametrize the coadjoint orbit, but the Hamiltonian breaks this \(\mathcal{G}\) gauge symmetry down to a \(\mathcal{H}\) gauge symmetry by uniquely picking \(f_{0}\) as the ground state. Note also that the WZW term is invariant under the left action of every canonical transformation that does not depend on time, since \[(WU)^{-1}\partial_{t}(WU)=U^{-1}\partial_{t}U\,, \tag{5.47}\] but the Hamiltonian is not. The rule of thumb for imposing symmetries on this theory will be the following:

* Identify the subalgebra of canonical transformations that generates the symmetry.
* If the symmetry being considered is a spacetime symmetry, impose invariance of the action under the transformation \(U\to WU\).
* If the symmetry under consideration is a gauge symmetry, turn on background fields that make the state \(f\) invariant under the transformation \(WfW^{-1}\).

The last point is unusual and not how we typically gauge a theory, and will be discussed in more detail later. But before imposing any symmetry on our theory, let us describe a global symmetry that does not act on the state \(f\), but is instead a consequence of our choice of parametrization of the coadjoint orbit. Recall that we chose to define the canonical transformation \(U\) that generates \(f\) as the exponent of a Lie algebra element, \[U=\exp(-\phi)\,,\qquad\phi({\bf x},{\bf p})\in\mathfrak{g}\,. \tag{5.48}\] Elements of the Lie algebra have a symmetry built into them, which corresponds to constant shifts: \[\phi({\bf x},{\bf p})\to\phi({\bf x},{\bf p})+c\,. \tag{5.49}\] (The more mathematically inclined reader might worry that in order for the pairing \(\langle f,F\rangle\) between \(\mathfrak{g}^{*}\) and \(\mathfrak{g}\) to be well-defined, suitable boundary conditions need to be imposed on these functions, which a constant shift would violate. However, this shift symmetry can be interpreted as a transformation of the boundary conditions that keeps the pairing well-defined.) These shifts preserve the action of the canonical transformation on any state, since \(f_{\phi}\) only depends on \(\phi\) through its derivatives. While such shifts leave \(f\) invariant, they will not leave the WZW term invariant if \(c\) is promoted to a function of time, and it is not difficult to show that \[\delta S_{\rm WZW}=\int dt\ \langle f,\partial_{t}c(t)\rangle=-\int dt\ \langle\partial_{t}f,c(t)\rangle\,. \tag{5.50}\] Noether's theorem then tells us that we must have \[\partial_{t}\int_{{\bf x},{\bf p}}f({\bf x},{\bf p})=0\,, \tag{5.51}\] i.e., the total particle number, \[N=\int_{{\bf x},{\bf p}}f({\bf x},{\bf p})\,, \tag{5.52}\] is conserved.

#### v.4.1 Galilean invariance

As an example of a spacetime symmetry, let us demonstrate how invariance under Galilean boosts constrains our action. The first step is to identify the subalgebra of canonical transformations that generates Galilean boosts. A typical element of this algebra is given by the time-dependent function, \[B_{v}={\bf v}\cdot\left({\bf p}t-m{\bf x}\right), \tag{5.53}\] with \(W=\exp B_{v}\) being the corresponding canonical transformation.
Under this transformation, we have \[f({\bf x},{\bf p})\rightarrow({\rm Ad}_{W}^{*}f)({\bf x},{\bf p})=f({\bf x}-{\bf v}t,{\bf p}-m{\bf v})\,, \tag{5.54}\] as can be obtained by observing that the expansion of the coadjoint action takes the form of a Taylor series and then resumming the Taylor series. Let us first evaluate the constraint on the free fermion action obtained from Galilean invariance. The action can be written as follows: \[S_{\rm free\ fermion}=\int dt\left\langle f_{0},U^{-1}\partial_{t}U\right\rangle-\int dt\left\langle f,\epsilon\right\rangle\,. \tag{5.55}\] The WZW term transforms to \[\left\langle f_{0},U^{-1}W^{-1}\partial_{t}(WU)\right\rangle=\left\langle f_{0},U^{-1}\partial_{t}U\right\rangle+\left\langle f,W^{-1}\partial_{t}W\right\rangle\,, \tag{5.56}\] while the Hamiltonian term becomes \[\left\langle WfW^{-1},\epsilon\right\rangle=\left\langle f,W^{-1}\epsilon W\right\rangle\,, \tag{5.57}\] so the change in the action is given by the following \[\delta S=\int dt\left\langle f,W^{-1}(\partial_{t}-\epsilon)W-\epsilon\right\rangle\,. \tag{5.58}\] Invariance under boosts then requires that \[W^{-1}\partial_{t}W=W^{-1}\epsilon W-\epsilon\,, \tag{5.59}\] where \(W^{-1}\epsilon W={\rm Ad}_{W^{-1}}^{*}\epsilon=\epsilon({\bf p}+m{\bf v})\) owing to the fact that \(W^{-1}=\exp(-B_{v})=\exp B_{-v}\). The left hand side can now be expanded using the following formula, \[W^{-1}\partial_{t}W=\partial_{t}B_{v}+\frac{1}{2!}\{\partial_{t}B_{v},B_{v}\}+\ldots\quad, \tag{5.60}\] and compared order by order in \({\bf v}\) with the Taylor expansion of the right hand side to obtain the following: \[{\bf p}=m\nabla_{\bf p}\epsilon\,, \tag{5.61}\] which tells us that the dispersion relation must be quadratic: \[\epsilon(\mathbf{p})=\frac{p^{2}}{2m}+\text{constant}\,. \tag{5.62}\] This is exactly what is expected for a free fermion with Galilean invariance. Next, we derive the effective mass of Landau quasiparticles by imposing Galilean invariance on the interacting theory truncated to quadratic order in the fluctuation \(\delta f=f-f_{0}\): \[H[f]=\int_{\mathbf{x}\mathbf{p}}\epsilon(\mathbf{p})f(\mathbf{x},\mathbf{p})+\frac{1}{2}\int_{\mathbf{x}\mathbf{p}\mathbf{p}^{\prime}}F^{(2,0)}(\mathbf{p},\mathbf{p}^{\prime})\delta f(\mathbf{x},\mathbf{p})\delta f(\mathbf{x},\mathbf{p}^{\prime})+\mathcal{O}(\delta f^{3},\nabla_{\mathbf{x}})\,. \tag{5.63}\] We have already seen that the transformation of the WZW term under a Galilean boost is cancelled by the transformation of a linear-in-\(f\) Hamiltonian term with the dispersion \(\epsilon=p^{2}/2m\). Therefore, invariance of the interacting theory can be achieved by demanding invariance of the shifted Hamiltonian: \[\tilde{H}[f]=\int_{\mathbf{x}\mathbf{p}}\left(\epsilon(\mathbf{p})-\frac{p^{2}}{2m}\right)f(\mathbf{x},\mathbf{p})+\frac{1}{2}\int_{\mathbf{x}\mathbf{p}\mathbf{p}^{\prime}}F^{(2,0)}(\mathbf{p},\mathbf{p}^{\prime})\delta f(\mathbf{x},\mathbf{p})\delta f(\mathbf{x},\mathbf{p}^{\prime})+\mathcal{O}(\delta f^{3},\nabla_{\mathbf{x}})\,. \tag{5.64}\] To obtain constraints from boost invariance, it suffices to consider infinitesimal transformations, \[f\to f+\{B_{v},f\}+\mathcal{O}(v^{2})=f-\mathbf{v}\cdot(t\nabla_{\mathbf{x}}+m\nabla_{\mathbf{p}})f+\mathcal{O}(v^{2})\,, \tag{5.65}\] under which the fluctuation transforms as \[\delta f\rightarrow-\,m\mathbf{v}\cdot\nabla_{\mathbf{p}}f_{0}+\delta f-\mathbf{v}\cdot(t\nabla_{\mathbf{x}}+m\nabla_{\mathbf{p}})\delta f\,.
\tag{5.66}\] Note that the transformation of the fluctuation \(\delta f\) is inhomogeneous in \(\delta f\). In particular, it can reduce the degree of a monomial by up to \(1\). This results in constraints that mix the various Wilson coefficient functions, so that \(F^{(m,n)}\) will be constrained by \(F^{(m-1,n)}\). The transformation of the shifted Hamiltonian under a boost is given by \[\begin{split}\tilde{H}\rightarrow\tilde{H}&-m \mathbf{v}\cdot\int_{\mathbf{x}\mathbf{p}}\left(\epsilon-\frac{p^{2}}{2m} \right)\nabla_{\mathbf{p}}f_{0}\\ &+m\mathbf{v}\cdot\int_{\mathbf{x}\mathbf{p}}\left(\nabla_{ \mathbf{p}}\epsilon-\frac{\mathbf{p}}{m}-\int_{\mathbf{p}^{\prime}}F^{(2,0)}( \mathbf{p},\mathbf{p}^{\prime})\nabla_{\mathbf{p}^{\prime}}f_{0}(\mathbf{p}^ {\prime})\right)\delta f(\mathbf{x},\mathbf{p})\\ &+\mathcal{O}(\delta f^{2},\nabla_{\mathbf{x}})\,.\end{split} \tag{5.67}\] Rotational invariance kills the term in the first line, while the second line gives us a non-trivial constraint, \[\nabla_{\mathbf{p}}\epsilon-\frac{\mathbf{p}}{m}=\int_{\mathbf{p}^{\prime}}F ^{(2,0)}(\mathbf{p},\mathbf{p}^{\prime})\nabla_{\mathbf{p}^{\prime}}f_{0}( \mathbf{p}^{\prime})\,,\qquad||\mathbf{p}|-p_{F}|\ll p_{F}\,. \tag{5.68}\] The requirement of \(\mathbf{p}\) being sufficiently close to \(p_{F}\) comes from the fact that \(\delta f\) must be localized near the Fermi surface for a perturbative expansion in \(\delta f\) to be valid. It suffices to set \(\mathbf{p}\) to a point \(p_{F}\mathbf{n}_{\theta}\) on the Fermi surface and write \(\nabla_{\mathbf{p}}\epsilon|_{p_{F}}=p_{F}\mathbf{n}_{\theta}/m^{*}\), where is the effective mass of the quasiparticle. Furthermore, the \(\nabla_{{\bf p}^{\prime}}f_{0}\) term in the integral sets \({\bf p}^{\prime}\) to be on the Fermi surface as well, and we can expand the Landau interaction function in angular channels using rotational covariance. For example, in \(d=2\), we write \[F^{(2,0)}(p_{F}{\bf n}_{\theta},p_{F}{\bf n}^{\prime}_{\theta})=\frac{8\pi^{2}v_ {F}}{p_{F}^{2}}\sum_{l\geq 0}F_{l}\cos l(\theta-\theta^{\prime})\,, \tag{100}\] to simplify the boost invariance constraint to \[p_{F}\left(\frac{1}{m^{*}}-\frac{1}{m}\right){\bf n}_{\theta}=-v_{F}F_{1}{\bf n }_{\theta}\,. \tag{101}\] Solving for the effective mass in terms of the Galilean boost parameter \(m\) and the first Landau parameter \(F_{1}\), we find the known result: \[m^{*}=m(1+F_{1})\,. \tag{102}\] #### v.4.2 Coupling to \(U(1)\) gauge fields As mentioned briefly before, the procedure for coupling our theory to background gauge fields is very different from the usual procedure of gauging a global symmetry. A systematic procedure for coupling Fermi liquids to a gauge field has been difficult to achieve in the past owing to the fact that effective theories live in momentum space, and here we present a new approach that provides a solution. The key observation is that the set of gauge transformations, characterized by functions \(\lambda(t,{\bf x})\), forms a subalgebra of infinitesimal canonical transformations. All such functions Poisson-commute with each other, since they do not depend on \({\bf p}\), so this subalgebra is abelian. It is not difficult to show that under the canonical transformation \(W=\exp\lambda\), we have \[({\rm Ad}_{W}^{*}f)({\bf x},{\bf p})=f({\bf x},{\bf p}+\nabla_{\bf x}\lambda)\,. \tag{103}\] These then act on the coset representative \(U=\exp(-\phi)\) as \[U\to WU\,,\qquad\phi\to\phi-\lambda+\frac{1}{2}\{\lambda,\phi\}+\dots\quad. 
\tag{104}\] The above transformation makes it clear why the usual procedure of gauging the global \(U(1)\) symmetry (102) by promoting the transformation to depend on space and time is ambiguous when applied to the current theory, since simply promoting the transformation parameter to a function misses out on the nonlinear corrections in the Baker-Campbell-Haussdorff formula. The minimal coupling procedure then is blind to nonlinear couplings to the gauge field as well as contact terms required to ensure gauge invariance. Naturally, the Fermi liquid action is not invariant under these transformations, so we need to turn on background gauge fields \(A_{\mu}(t,{\bf x})\) that transform under the gauge transformation as \[A_{\mu}(t,{\bf x})\to W^{-1}(A_{\mu}-\partial_{\mu})W=A_{\mu}(t,{\bf x})- \partial_{\mu}\lambda(t,{\bf x})\,, \tag{5.74}\] where \(\mu=(t,{\bf x})\) is a spacetime index. The WZW term and the Hamiltonian can be made invariant separately under gauge transformations. Let us start with the WZW term, whose transformation is given by \[\begin{split} U^{-1}\partial_{t}U&\to U^{-1} \partial_{t}U+U^{-1}(W^{-1}\partial_{t}W)U\,,\\ \implies\delta_{\lambda}S_{\text{WZW}}&=\int dt \left\langle f_{0},U^{-1}(\partial_{t}\lambda)U\right\rangle\,.\end{split} \tag{5.75}\] Evidently, making this invariant amounts to modifying it to the following: \[S_{\text{WZW}}[\phi,A_{0}]=\int dt\left\langle f_{0},U^{-1}(\partial_{t}-A_{0} )U\right\rangle\,, \tag{5.76}\] which is now invariant under the simultaneous transformation \[U\to WU\,,\qquad A_{0}\to W^{-1}(A_{0}-\partial_{t})W\,. \tag{5.77}\] Next, to make the Hamiltonian invariant, it suffices to ensure the invariance of \(f\) under gauge transformations by coupling it to the background gauge fields. One can see that the appropriate modification is \[f_{A}(t,{\bf x},{\bf p})\equiv f(t,{\bf x},{\bf p}+{\bf A}(t,{\bf x}))\,, \tag{5.78}\] where \({\bf A}\) is the spatial part of the gauge field. Since \({\bf x}\) does not transform at all under the gauge transformation, the transformation of \({\bf p}\) is cancelled by the gauge transformation of \({\bf x}\). While \(f_{A}\) is now gauge invariant, its spatial derivatives are not, since \[(\nabla_{{\bf x}}f)({\bf x},{\bf p})\to(\nabla_{{\bf x}}f)({\bf x},{\bf p}+ \nabla_{{\bf x}}\lambda)+\{\nabla_{{\bf x}}\lambda,f\}({\bf x},{\bf p}+\nabla_ {{\bf x}}\lambda)\,. \tag{5.79}\] But this is straightforwardly remedied by replacing partial derivatives by covariant derivatives: \[D_{{\bf x}}f\equiv\nabla_{{\bf x}}f-\{{\bf A},f\}\,. \tag{5.80}\] While \(f\) transforms covariantly under canonical transformations, the fluctuation \(\delta f=f-f_{0}\) does not, so it is convenient to re-expand the Hamiltonian in \(f\) instead of \(\delta f\), with modified Wilson coefficient functions \(\bar{F}^{(m,n)}\) that can be related straightforwardly to the original ones in equation (4.28). 
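As a sanity check of the last two statements (a minimal sketch of our own, again in a one-dimensional phase space with the same bracket convention and with arbitrarily chosen test functions), one can verify symbolically that \(f_{A}\) is gauge invariant while the plain spatial derivative of \(f\) picks up exactly the \(\{\nabla_{\bf x}\lambda,f\}\) mismatch of (5.79):

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)

def pb(A, B):
    """Poisson bracket {A, B} = dA/dx dB/dp - dA/dp dB/dx."""
    return sp.diff(A, x) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, x)

# concrete test choices: distribution, gauge parameter, spatial gauge field
f     = sp.exp(-p**2) * sp.cos(x)     # f(x, p)
lam   = sp.sin(2 * x)                 # lambda(t, x) at fixed t
A     = sp.Function('A')(x)           # A(x)
shift = sp.diff(lam, x)               # grad_x lambda

# (5.72): the gauge transformation shifts the momentum argument of f
f_tr = f.subs(p, p + shift)

# (5.78): f_A = f(x, p + A) is invariant once A -> A - grad_x lambda
f_A        = f.subs(p, p + A)
f_A_transf = f_tr.subs(p, p + A - shift)
print(sp.simplify(f_A_transf - f_A))   # -> 0

# (5.79): the plain x-derivative is not invariant; the mismatch is {grad lambda, f}
lhs = sp.diff(f_tr, x)
rhs = sp.diff(f, x).subs(p, p + shift) + pb(shift, f).subs(p, p + shift)
print(sp.simplify(lhs - rhs))          # -> 0
```

With the invariant combination \(f_{A}\) and the covariant derivative \(D_{\bf x}f\) in hand, the gauged Hamiltonian can now be assembled.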
The modified gauge-invariant Hamiltonian is then \[\begin{split} H_{\text{gauged}}[f,{\bf A}]=H[f_{A}]& =\int_{{\bf x}{\bf p}}\epsilon({\bf p})f({\bf x},{\bf p}+{\bf A}) \\ &+\frac{1}{2}\int_{{\bf x}{\bf p}{\bf p}^{\prime}}\bar{F}^{(2,0)} ({\bf p},{\bf p}^{\prime})f({\bf x},{\bf p}+{\bf A})f({\bf x},{\bf p}^{\prime} +{\bf A})\\ &+\frac{1}{2}\int_{{\bf x}{\bf p}{\bf p}^{\prime}}\bar{\bf F}^{(2, 1)}({\bf p},{\bf p}^{\prime})\cdot(D_{{\bf x}}f)({\bf x},{\bf p}+{\bf A})f({ \bf x},{\bf p}^{\prime}+{\bf A})\\ &+\dots\quad,\end{split} \tag{5.81}\] and the gauge invariant action can be written as \[S[\phi;A_{0},\mathbf{A}]=S_{\rm WZW}[\phi,A_{0}]-\int dt\ H_{\rm gauged}[f_{\phi}, \mathbf{A}]\,. \tag{5.82}\] As a test of the validity of this procedure, let us work out the equation of motion for the gauged action for free fermions and show that it is just the gauged Boltzmann equation. The free fermion action can be written as \[S_{\rm free}[\phi;A_{0},\mathbf{A}]=\int dt\left\langle f_{0},U^{-1}\left[ \partial_{t}-A_{0}-\epsilon(\mathbf{p}-\mathbf{A})\right]U\right\rangle\,. \tag{5.83}\] Under the variation \(U\rightarrow\exp\delta\phi\cdot U\), we find \[\delta S_{\rm free}=-\int dt\left\langle\partial_{t}f+\{f,\epsilon(\mathbf{p} -\mathbf{A})+A_{0}\},\delta\phi\right\rangle+\mathcal{O}(\delta\phi^{2})\,, \tag{5.84}\] which tells us that the equation of motion must take the form, \[\partial_{t}f+\{f,\epsilon(\mathbf{p}-\mathbf{A})+A_{0}\}=0\,, \tag{5.85}\] which, upon expanding the Poisson bracket and defining the group velocity \(\mathbf{v}_{\mathbf{p}}[\mathbf{A}]=\nabla_{\mathbf{p}}\epsilon(\mathbf{p}+ \mathbf{A})\) reduces to \[\partial_{t}f+\mathbf{v}_{\mathbf{p}}\cdot\nabla_{\mathbf{x}}f+v_{\mathbf{p} }^{i}\partial_{j}A_{i}\partial_{\mathbf{p}}^{j}f+\nabla_{\mathbf{x}}A_{0} \cdot\nabla_{\mathbf{p}}f=0\,. \tag{5.86}\] This does not look like the gauged Boltzmann equation, since it is an equation for a distribution function \(f(\mathbf{x},\mathbf{p})\) that is not gauge invariant, i.e., is evaluated at the canonical momentum \(\mathbf{p}\) instead of the gauge invariant momentum \(\mathbf{k}=\mathbf{p}+\mathbf{A}\). To bring it to a more familiar form, we make a field redefinition, \[f_{A}(t,\mathbf{x},\mathbf{k})=f(t,\mathbf{x},\mathbf{k}+\mathbf{A})\,, \tag{5.87}\] which turns the equation of motion into the familiar form of the gauged Boltzmann equation with the Lorentz force term: \[\partial_{t}f_{A}+\mathbf{v}_{\mathbf{k}}\cdot\nabla_{\mathbf{x}}f_{A}+\left( \mathbf{E}\cdot\nabla_{\mathbf{k}}+F_{ij}v_{\mathbf{k}}^{i}\partial_{\mathbf{ k}}^{j}\right)f=0\,, \tag{5.88}\] where \(v_{\mathbf{k}}=\nabla_{\mathbf{k}}\epsilon(\mathbf{k})\) is the gauge invariant group velocity. #### v.2.3 Emergent symmetries Fermi liquids are known to have a tremendously large number of emergent symmetries [60], corresponding to the conservation of not only the total particle number, but also the particle number at every point on the Fermi surface. This is a consequence of the limited amount of phase space available for quasiparticles to scatter to at low energies. Free fermions have an even larger symmetry group, since the lack of interactions as well as conservation of momentum imply that the occupation number at every momentum is conserved. These symmetries can be described in the coadjoint orbit formalism as well, by coupling to background gauge fields that make the action invariant under _all_ canonical transformations. 
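Before constructing the invariant action, it is worth making explicit why these phase-space gauge fields will be non-abelian: for \({\bf p}\)-dependent parameters the infinitesimal transformations \(\delta_{\lambda}f=\{\lambda,f\}\) no longer commute, and their commutator is again such a transformation with parameter \(\{\lambda_{1},\lambda_{2}\}\) (the Jacobi identity), which vanishes for the \({\bf p}\)-independent \(U(1)\) parameters of the previous subsection but not in general. A minimal symbolic sketch of our own, on polynomial test data:

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)

def pb(A, B):
    """Poisson bracket {A, B} = dA/dx dB/dp - dA/dp dB/dx."""
    return sp.diff(A, x) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, x)

# two phase-space gauge parameters and a test distribution (arbitrary polynomials)
lam1 = x**2 * p + 3 * x
lam2 = p**3 - 2 * x * p
f    = x * p**2 + 7 * p - x**3

# commutator of the infinitesimal transformations delta_lam f = {lam, f}
commutator = pb(lam1, pb(lam2, f)) - pb(lam2, pb(lam1, f))

# it is again a canonical transformation, with parameter {lam1, lam2}
print(sp.expand(commutator - pb(pb(lam1, lam2), f)))   # -> 0
```

If either parameter is taken independent of \(p\), the bracket \(\{\lambda_{1},\lambda_{2}\}\) collapses and the abelian \(U(1)\) case is recovered.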
We begin with the observation that the adjoint and coadjoint action of a general, time-dependent canonical transformation \(W=\exp\lambda(t,\mathbf{x},\mathbf{p})\) can be written as a coordinate transformation, \[\begin{split}(\mathrm{Ad}_{W}F)(\mathbf{x},\mathbf{p})& =F(\mathbf{x}^{W},\mathbf{p}^{W})\,,\\ (\mathrm{Ad}_{W}^{*}f)(\mathbf{x},\mathbf{p})&=f( \mathbf{x}^{W},\mathbf{p}^{W})\,,\end{split} \tag{5.89}\] where the transformed coordinates \(\mathbf{x}^{W}\) and \(\mathbf{p}^{W}\) are given by \[\begin{split}\mathbf{x}^{W}&=\mathbf{x}+W\nabla_{ \mathbf{p}}W^{-1}\,,\\ \mathbf{p}^{W}&=\mathbf{p}-W\nabla_{\mathbf{x}}W^{-1 }\,.\end{split} \tag{5.90}\] In order to make the action invariant under these, we will turn on background gauge fields in phase space \(A_{0}(t,\mathbf{x},\mathbf{p})\), \(\mathbf{A_{x}}(t,\mathbf{x},\mathbf{p})\) and \(\mathbf{A_{p}}(t,\mathbf{x},\mathbf{p})\). \(\mathbf{A_{x}}\) and \(\mathbf{A_{p}}\) are the respectively the position and momentum components of the phase space gauge fields. Using \(I=(\mathbf{x},\mathbf{p})\) to denote a phase space index, we require that the gauge fields transform in the following way: \[A_{0}\to W^{-1}(A_{0}-\partial_{t})W\,,\qquad A_{I}\to W^{-1}(A_{I}- \partial_{I})W\,. \tag{5.91}\] Unlike \(U(1)\) gauge fields, these gauge fields are non-abelian. Making the action invariant under all canonical transformations, however, follows the same steps as for \(U(1)\) gauge transformations. The WZW term gets modified to \[S_{\mathrm{WZW}}[\phi;A_{0}]=\int dt\left\langle f_{0},U^{-1}[\partial_{t}-A_{ 0}]U\right\rangle\,, \tag{5.92}\] which is invariant under the transformation \(U\to WU\) simultaneously with the gauge transformation of \(A_{0}\). To make the Hamiltonian invariant, we look for a gauge invariant modification of the distribution \(f\). It is not difficult to see that distribution function evaluated on shifted coordinates, \[f_{A}(\mathbf{x},\mathbf{p})=f(\mathbf{x}-\mathbf{A_{p}},\mathbf{p}+\mathbf{A _{x}})\,, \tag{5.93}\] does the trick. That this new distribution is gauge invariant can be seen as follows. Define \[\tilde{A}_{I}=W^{-1}(A_{I}-\partial_{I})W=A_{I}(\mathbf{x}^{W^{-1}},\mathbf{p }^{W^{-1}})-W^{-1}\partial_{I}W\,. \tag{5.94}\] The transformation of the modified distribution is given by \[f_{A}(\mathbf{x},\mathbf{p})\to f_{\tilde{A}}(\mathbf{x}^{W},\mathbf{p}^{W})= f\left(\mathbf{x}^{W}-\tilde{\mathbf{A}}_{\mathbf{p}}(\mathbf{x}^{W},\mathbf{p}^{W}), \mathbf{p}^{W}+\tilde{\mathbf{A}}_{\mathbf{x}}(\mathbf{x}^{W},\mathbf{p}^{W}) \right)\,. 
\tag{5.95}\] Now, the gauged transformed \(A_{I}\) evaluated at the transformed coordinates \((\mathbf{x}^{W},\mathbf{p}^{W})\) can be simplified in the following way: \[\tilde{A}_{I}(\mathbf{x}^{W},\mathbf{p}^{W})=W\tilde{A}_{I}(\mathbf{x}, \mathbf{p})W^{-1}=W[W^{-1}(A_{I}-\partial_{I})W]W^{-1}=A_{I}(\mathbf{x}, \mathbf{p})+W\partial_{I}W^{-1}\,, \tag{5.96}\] so that the arguments of \(f\) after the transformation reduce to \[\begin{split}\mathbf{x}^{W}-\tilde{\mathbf{A}}_{\mathbf{p}}(\mathbf{x} ^{W},\mathbf{p}^{W})&=\mathbf{x}+W\nabla_{\mathbf{p}}W^{-1}- \mathbf{A}_{\mathbf{p}}(\mathbf{x},\mathbf{p})-W\nabla_{\mathbf{p}}W^{-1}= \mathbf{x}-\mathbf{A}_{\mathbf{p}}(\mathbf{x},\mathbf{p})\,,\\ \mathbf{p}^{W}+\tilde{\mathbf{A}}_{\mathbf{x}}(\mathbf{x}^{W}, \mathbf{p}^{W})&=\mathbf{p}-W\nabla_{\mathbf{x}}W^{-1}+\mathbf{A }_{\mathbf{x}}(\mathbf{x},\mathbf{p})+W\nabla_{\mathbf{x}}W^{-1}=\mathbf{p}+ \mathbf{A}_{\mathbf{x}}(\mathbf{x},\mathbf{p})\,.\end{split} \tag{5.97}\] As a result, we find that the modified distribution is indeed gauge invariant: \[f_{A}(\mathbf{x},\mathbf{p})\to f(\mathbf{x}-\mathbf{A}_{\mathbf{p}}, \mathbf{p}+\mathbf{A}_{\mathbf{x}})=f_{A}(\mathbf{x},\mathbf{p})\,. \tag{5.98}\] Phase space gradients of \(f_{A}\), however, do not transform covariantly under canonical transformations, but covariant derivatives do, \[\begin{split} D_{I}f&\equiv\partial_{I}f-\left\{A_ {I},f\right\},\\ (D_{I}f)&\to W(D_{I}f)W^{-1}\,,\end{split} \tag{5.99}\] which we can then make invariant by evaluating it on shifted coordinates: \[(D_{I}f)_{A}(\mathbf{x},\mathbf{p})\equiv(D_{I}f)(\mathbf{x}-\mathbf{A}_{ \mathbf{p}},\mathbf{p}+\mathbf{A}_{\mathbf{x}})\rightarrow(D_{I}f)(\mathbf{x} -\mathbf{A}_{\mathbf{p}},\mathbf{p}+\mathbf{A}_{\mathbf{x}})\,. \tag{5.100}\] The Hamiltonian can then be made invariant be re-arranging it in an expansion in \(f\) instead of the fluctuation \(\delta f=f-f_{0}\), and replacing the distribution and its derivatives by their invariant counterparts, \[\begin{split} H_{\text{gauged}}[f;A_{I}]\equiv H[f_{A}]& =\int_{\mathbf{x}\mathbf{p}}\epsilon(\mathbf{p})f_{A}(\mathbf{x},\mathbf{p})\\ &+\frac{1}{2}\int_{\mathbf{x}\mathbf{p}\mathbf{p}^{\prime}}\tilde {F}^{(2,0)}(\mathbf{p},\mathbf{p}^{\prime})f_{A}(\mathbf{x},\mathbf{p})f_{A^{ \prime}}(\mathbf{x},\mathbf{p}^{\prime})\\ &+\frac{1}{2}\int_{\mathbf{x}\mathbf{p}\mathbf{p}^{\prime}}\tilde {\mathbf{F}}^{(2,1)}(\mathbf{p},\mathbf{p}^{\prime})(D_{\mathbf{x}}f)_{A}( \mathbf{x},\mathbf{p})f_{A^{\prime}}(\mathbf{x},\mathbf{p}^{\prime})\\ &+\ldots\quad,\end{split} \tag{5.101}\] where \(f_{A^{\prime}}(\mathbf{x},\mathbf{p}^{\prime})=f(\mathbf{x}-\mathbf{A}_{ \mathbf{p}}(\mathbf{x},\mathbf{p}^{\prime}),\mathbf{p}+\mathbf{A}_{\mathbf{x} }(\mathbf{x},\mathbf{p}^{\prime}))\). The gauge invariant action is given by \[S_{\text{gauged}}[\phi;A_{0},A_{I}]=S_{\text{WZW}}[\phi;A_{0}]-\int dt\ H_{ \text{gauged}}[f_{\phi};A_{I}]\,. \tag{5.102}\] A couple of comments are in order. First, for the case of free fermions, the action can be made independent of \(\mathbf{A}_{\mathbf{p}}\) by a change of integration variables \(\mathbf{x}\rightarrow\mathbf{x}+\mathbf{A}_{\mathbf{p}},\mathbf{p}\to \mathbf{p}-\mathbf{A}_{\mathbf{x}}\) in the Hamiltonian: \[\int_{\mathbf{x}\mathbf{p}}\epsilon(\mathbf{p})f(\mathbf{x}-\mathbf{A}_{ \mathbf{p}},\mathbf{p}+\mathbf{A}_{\mathbf{x}})=\int_{\mathbf{x}\mathbf{p}} \epsilon(\mathbf{p}-\mathbf{A}_{\mathbf{x}})f(\mathbf{x},\mathbf{p})\,. 
\tag{5.103}\] But this does not work for the interacting theory since the various factors of the invariant distribution \(f_{A}\) are evaluated at the same \(\mathbf{x}\) but at different momenta \(\mathbf{p},\mathbf{p}^{\prime}\), etc. Second, it is tempting to identify \(\mathbf{A}_{\mathbf{x}}\) with a \(U(1)\) gauge field and \(\mathbf{A}_{\mathbf{p}}\) with a Berry connection, but this is incorrect due to the fact that they depend on both \(\mathbf{x}\) as well as \(\mathbf{p}\) and their gauge transformations are non-abelian. The precise encoding of the electromagnetic potentials and the Berry connection in the phase space gauge fields is an interesting question that we leave for future work. One way to think about these phase space gauge fields is the following. Our theory lives not just in spacetime, but in phase space. Phase space is naturally a noncommutative space owing to the canonical commutation relation, \[\{x^{i},p_{j}\}=\delta^{i}_{j}\,. \tag{5.104}\] Therefore, gauge fields that live in this space are more akin to those in noncommutative field theory (see, e.g. [61] for a review) than to gauge fields in commutative spacetime. In fact, gauging a global \(U(1)\) on a noncommutative space results a nonabelian group of gauge transformations, where the commutator of two gauge transformations is given by the Moyal bracket. Our phase space gauge fields are precisely noncommutative \(U(1)\) gauge fields in the Poisson limit. How does the'maximally gauged' action (5.102) encode emergent symmetries? The answer to this question lies in the Ward identity for canonical transformations. The infinitesimal transformation of the phase space gauge fields can be written as \[\delta_{\lambda}A_{M}=-\ \partial_{M}\lambda-\{\lambda,A_{M}\}+\mathcal{O}( \lambda^{2})\,, \tag{5.105}\] where \(M\) is an index that collectively represents time and phase-space components. The variation of the action under this transformation necessarily takes the form \[\delta_{\lambda}S_{\text{gauged}}=-\int dt\left\langle\mathcal{J}^{M},\delta_{ \lambda}A_{M}\right\rangle\,, \tag{5.106}\] thus defining the phase space current \(\mathcal{J}^{M}\). The components of this current are given by \[\mathcal{J}^{0}=f\,,\qquad\mathcal{J}^{x^{i}}=f\partial_{p_{i}}\epsilon({\bf p }-{\bf A_{x}})+\dots\quad,\qquad\mathcal{J}^{p_{j}}=0+\dots\quad, \tag{5.107}\] where the ellipses denote the contribution of the interacting terms in the Hamiltonian. The Ward identity then becomes \[\partial_{M}\mathcal{J}^{M}+\{\mathcal{J}^{M},A_{M}\}=0\,. \tag{5.108}\] This takes the form of a (non-)conservation law \[\partial_{\mu}\mathcal{J}^{\mu}+\{\mathcal{J}^{\mu},A_{\mu}\}=-\partial_{p_{i} }\mathcal{J}^{p_{i}}-\{\mathcal{J}^{p_{i}},A_{p_{i}}\}\,. \tag{5.109}\] Let us momentarily turn off the background fields, so that the Ward identity turns into \[\partial_{\mu}\mathcal{J}^{\mu}=-\partial_{p_{i}}\mathcal{J}^{p_{i}}\,. \tag{5.110}\] The source term on the right-hand-side is, in general, non-zero. It can also not typically be written as a total spacetime divergence which prevents us from absorbing it into the spacetime components of the current. This means that even though the Ward identity signifies the conservation of a current in phase space, it does not always reduce to the conservation of a current in space. So the'symmetry' of canonical transformations is not really a global symmetry in that it does not lead to a conservation law. 
This is just a roundabout way of saying that the action without phase space gauge fields is not invariant under the group of canonical transformations. Rather, the group / algebra of canonical transformations is to be thought of as an organizing principle for the set of operators in Fermi liquid theory.15 Footnote 15: This is similar to the Virasoro algebra in \(1+1\)d conformal field theories, which also does not generally commute with the Hamiltonian of the theory. This analogy between the Virasoro algebra and the algebra of canonical transformations goes even further since minimal models in \(1+1\)d can be obtained using the coadjoint orbit method to quantize the Virasoro group [62]. However, despite not being a conservation law, the Ward identity can still be useful for discovering emergent or hidden symmetries. The trivial example is that of free fermions which do not couple to \(\mathbf{A_{p}}\) at all. The Ward identity for free fermions then looks like a conservation law at every point \(\mathbf{p}\) in momentum space, \[\partial_{\mu}\mathcal{J}^{\mu}_{\text{free}}(t,\mathbf{x},\mathbf{p})=0\,, \tag{5.111}\] from which we can identify the \(U(1)^{\infty}\) symmetry of the free Fermi gas. Next, we consider interacting Fermi liquids. Let us look at the leading interaction: \[H_{\text{int}}[f;A]=\frac{1}{2}\int_{\mathbf{x}\mathbf{p}\mathbf{p}^{\prime}} \tilde{F}^{(2,0)}(\mathbf{p},\mathbf{p}^{\prime})f(\mathbf{x}-\mathbf{A_{p}},\mathbf{p}+\mathbf{A_{x}})f(\mathbf{x}-\mathbf{A_{p}^{\prime}},\mathbf{p}+ \mathbf{A_{x}^{\prime}})\,. \tag{5.112}\] Its contribution to the momentum space current is given by \[\mathcal{J}^{p_{i}}(\mathbf{x},\mathbf{p})|_{A=0}=-\frac{\delta H_{\text{int }}}{\delta A_{p_{i}}}=\int_{\mathbf{p}^{\prime}}\tilde{F}^{(2,0)}(\mathbf{p}, \mathbf{p}^{\prime})(\partial_{x^{i}}f)(\mathbf{x},\mathbf{p})f(\mathbf{x}, \mathbf{p}^{\prime})\,, \tag{5.113}\] making the source term in the Ward identity reduce to \[\partial_{\mu}\mathcal{J}^{\mu}=-\nabla_{\mathbf{p}}\cdot\int_{\mathbf{p}^{ \prime}}\tilde{F}^{(2,0)}(\nabla_{\mathbf{x}}f)f^{\prime}\,, \tag{5.114}\] where we have used \(f\) and \(f^{\prime}\) as shorthand for \(f(\mathbf{x},\mathbf{p})\) and \(f(\mathbf{x},\mathbf{p}^{\prime})\) to make the expression compact. The source term is evidently neither vanishing nor a total spacetime derivative, so Landau interactions necessarily break the \(U(1)^{\infty}\) symmetry, which should be expected. However, is we now linearize the Ward identity in fluctuations \(\delta f=f-f_{0}\) around the spherical Fermi surface, the Ward identity simplifies to \[\partial_{\mu}\mathcal{J}^{\mu}=-\nabla_{\mathbf{x}}\cdot\nabla_{\mathbf{p}} \left(\delta f\int_{\mathbf{p}^{\prime}}\tilde{F}^{(2,0)}f_{0}(\mathbf{p}^{ \prime})\right)+\mathcal{O}(\delta f^{2})\,, \tag{5.115}\] and the source term does indeed become a total derivative and can be absorbed into a redefinition of the spatial current, \[\mathcal{J}^{x^{i}}\to\mathcal{J}^{x^{i}}+\partial_{p_{i}}\left(\delta f\int_{ \mathbf{p}^{\prime}}\tilde{F}^{(2,0)}f_{0}(\mathbf{p}^{\prime})\right)+ \mathcal{O}(\delta f^{2})\,. \tag{5.116}\] Of course, linearization is only justified when the fluctuation \(\delta f\) is supported in a small region around the Fermi surface, so the linearized Ward identity can be treated as a conservation law only at points on the Fermi surface. 
This gives us the well known emergent symmetry of Fermi liquids that corresponds to the conservation of particle number at every point on the Fermi surface, from the linearization of the Ward identity for canonical transformations. Else, Thorngren and Senthil [60] formalized the study of this symmetry by identifying the symmetry group in \(2+1\)d as the loop group \(LU(1)\) of maps from a circle to \(U(1)\) with point-wise multiplication, with a 't Hooft anomaly when coupled to background gauge fields. The current four dimensional \(j^{M}(t,\mathbf{x},\theta)\) lives in spacetime as well as on the Fermi surface, with \(M=t,\mathbf{x},\theta\). The background gauge field \(A_{M}(t,\mathbf{x},\theta)\) also lives in the same space and the anomalous conservation law is given by \[\partial_{M}j^{M}=\frac{\kappa}{8\pi^{2}}\epsilon^{ABCD}\partial_{A}A_{B} \partial_{C}A_{D}\,, \tag{5.117}\] with \(\kappa\) being an integer that evaluates to \(\pm 1\) for Fermi liquids. Since this is an emergent symmetry, the background gauge field can be activated against our will, which does in fact happen for Fermi liquids \[A_{M}(t,\mathbf{x},\theta)=\delta^{i}_{M}p_{Fi}(\theta)\,. \tag{5.118}\] \(A_{\theta}\) is the Berry connection, which we will set to zero. We have seen that the \(LU(1)\) symmetry in the absence of background fields emerges as a consequence of linearizing the Ward identity for canonical transformations. Now, let us demonstrate how linearizing the Ward identity also gives the anomaly. For simplicity, we restrict ourselves to free fermions and set \(\mathbf{A}_{\mathbf{x}}=0\). The free fermion Ward identity reduces to \[\begin{split}\partial_{\mu}\mathcal{J}^{\mu}+\{\mathcal{J}^{\mu},A_{\mu}\}&=0\,,\\ \partial_{t}\mathcal{J}^{0}+\partial_{i}\mathcal{J}^{i}+\nabla_{ \mathbf{x}}\mathcal{J}^{0}\cdot\nabla_{\mathbf{p}}A_{0}&=\nabla_ {\mathbf{p}}\mathcal{J}^{0}\cdot\nabla_{\mathbf{x}}A_{0}\,,\end{split} \tag{5.119}\] with \(A_{0}(t,\mathbf{x},\mathbf{p})\) being the time component of our phase space gauge field. We now expand the current around the spherical Fermi surface, \[\mathcal{J}^{0}=f_{0}(\mathbf{p})+\delta\mathcal{J}^{0}\,,\qquad\mathcal{J}^ {i}=\delta\mathcal{J}^{i}\,, \tag{5.120}\] and linearize the Ward identity in \(\delta\mathcal{J}^{\mu}\) and \(A_{0}\) to find that it takes the form, \[\partial_{t}\delta\mathcal{J}^{0}+\partial_{i}\delta\mathcal{J}^{i}=-\delta( |\mathbf{p}|-p_{F})(\mathbf{n}_{\theta}\cdot\nabla_{\mathbf{x}}A_{0})\,. \tag{5.121}\] Integrating over the radial momentum \(|{\bf p}|\) allows us to identify the Ward identity with the \(LU(1)\) anomalous conservation equation by equating \[j^{0}=\int\frac{pdp}{(2\pi)^{2}}\delta{\cal J}^{0}(t,{\bf x},{\bf p})\,,\qquad j ^{i}=\int\frac{pdp}{(2\pi)^{2}}\delta{\cal J}^{i}(t,{\bf x},{\bf p})\,,\qquad A_ {0}^{LU(1)}=A_{0}|_{|{\bf p}|=p_{F}}\,, \tag{112}\] which turns the linearized Ward identity into the \(LU(1)\) anomaly, \[\partial_{\mu}j^{\mu}=-\frac{1}{4\pi^{2}}p_{F}({\bf n}_{\theta}\cdot\nabla_{ \bf x}A_{0}^{LU(1)})\,. \tag{113}\] The same holds for interacting Fermi liquids as well, since as we saw before the linearized source term can be absorbed into a redefinition of the spatial current so that the anomalous conservation law retains the same form as that for free fermions. However, nonlinear corrections to the Ward identity violate both the conservation law in the absence of background fields as well as the anomaly16. 
Footnote 16: This can be seen from the fact that the current has a diamagnetic contribution even for free fermions with \({\cal J}^{x^{i}}=f\partial_{p_{i}}\epsilon({\bf p}+{\bf A_{x}})\). The algebra of canonical transformations allows us to systematically characterize the violation of the anomalous \(LU(1)\) conservation law due to nonlinearities and interactions, the structure of which is somewhat rigidly constrained by the fact that it must descend from a conservation law in phase space. ### Perturbative expansion and scaling So far we have been able to extract a lot of 'kinematic' information from the formal action (106) and the algebra of canonical transformations that underlies it, without needing to expand it in the bosonic field \(\phi\). In order to calculate correlation functions and understand the renormalization group flow of Fermi liquids, however, we will need to perform the expansion. We start with the WZW term, \[\begin{split} S_{\rm WZW}&=\int dt\left\langle f_{ 0},U^{-1}\partial_{t}U\right\rangle\\ &=\int dt\left\langle f_{0},-\dot{\phi}+\frac{1}{2!}\{\dot{\phi},\phi\}-\frac{1}{3!}\{\{\dot{\phi},\phi\},\phi\}+\dots\right\rangle\,,\end{split} \tag{114}\] where \(\dot{\phi}\) stands for the time derivative of \(\phi\). The first term is a total time derivative and hence vanishes. The second term is quadratic and contributes to the Gaussian part of the action, \[S_{\rm WZW}^{(2)}=-\frac{p_{F}^{d-1}}{2}\int_{t{\bf x}\theta}({\bf n}_{\theta }\cdot\nabla_{\bf x}\phi)\ \dot{\phi}\,, \tag{115}\] where \(\int_{t{\bf x}\theta}=\int dtd^{d}xd^{d-1}\theta/(2\pi)^{d}\), while the third term is cubic and gives rise to a 3 point vertex for \(\phi\), \[S_{\rm WZW}^{(3)}=-\frac{p_{F}^{d-2}}{3!}\int_{t{\bf x}\theta}({\bf n}_{ \theta}\cdot\nabla_{\bf x}\phi)\left[({\bf s}_{\theta}^{i}\cdot\nabla_{\bf x }\phi)\partial_{\theta^{i}}\dot{\phi}-({\bf s}_{\theta}^{i}\cdot\nabla_{\bf x }\dot{\phi})\partial_{\theta^{i}}\phi\right]\,, \tag{116}\] where \({\bf s}^{i}_{\theta}=\partial_{\theta^{\prime}}{\bf n}_{\theta}\) are tangent vectors on the spherical Fermi surface. We now focus on the Hamiltonian part, \[\begin{split} S_{H}[\phi]&=-\int dt\ H[f_{\phi}]\,,\\ f_{\phi}&=f_{0}-\{\phi,f_{0}\}+\frac{1}{2!}\{\phi,\{ \phi,f_{0}\}\}-\frac{1}{3!}\{\phi,\{\phi,\{\phi,f_{0}\}\}\}+\ldots\quad,\end{split} \tag{5.127}\] with the interacting Hamiltonian from equation (4.28), \[\begin{split} H[f]&=\int_{\bf xp}\epsilon({\bf p})f( {\bf x},{\bf p})\\ &+\frac{1}{2}\int_{\bf xpp^{\prime}}F^{(2,0)}({\bf p},{\bf p}^{ \prime})\delta f({\bf x},{\bf p})\delta f({\bf x},{\bf p}^{\prime})+{\bf F}^{ (2,1)}({\bf p},{\bf p}^{\prime})\cdot\left(\frac{\nabla_{\bf x}}{p_{F}}\delta f ({\bf x},{\bf p})\right)\delta f({\bf x},{\bf p}^{\prime})+\ldots\\ &+\frac{1}{3}\int_{\bf xpp^{\prime}p^{\prime\prime}}F^{(3,0)}({ \bf p},{\bf p}^{\prime},{\bf p}^{\prime\prime})\delta f({\bf x},{\bf p})\delta f ({\bf x},{\bf p}^{\prime})\delta f({\bf x},{\bf p}^{\prime\prime})+\ldots\\ &+\ \ldots\quad.\end{split} \tag{5.128}\] The higher derivative interaction \({\bf F}^{(2,1)}\) is evidently suppressed compared to \(F^{(2,0)}\) so we will ignore it for simplicity. 
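To make the origin of the quadratic WZW term (5.125) more explicit, here is the intermediate step in sketch form, using the pairing as the phase-space integral, the bracket convention \(\{A,B\}=\nabla_{\bf x}A\cdot\nabla_{\bf p}B-\nabla_{\bf p}A\cdot\nabla_{\bf x}B\), and the ground state \(f_{0}=\Theta(p_{F}-|{\bf p}|)\), so that \(\nabla_{\bf p}f_{0}=-{\bf n}_{\theta}\,\delta(|{\bf p}|-p_{F})\). Using \(\int_{{\bf x}{\bf p}}f\{g,h\}=\int_{{\bf x}{\bf p}}\{f,g\}\,h\), \[\left\langle f_{0},\tfrac{1}{2}\{\dot{\phi},\phi\}\right\rangle=\tfrac{1}{2}\int_{{\bf x}{\bf p}}\{f_{0},\dot{\phi}\}\,\phi=\tfrac{1}{2}\int_{{\bf x}{\bf p}}\delta(|{\bf p}|-p_{F})\,({\bf n}_{\theta}\cdot\nabla_{\bf x}\dot{\phi})\,\phi=-\tfrac{1}{2}\int_{{\bf x}{\bf p}}\delta(|{\bf p}|-p_{F})\,({\bf n}_{\theta}\cdot\nabla_{\bf x}\phi)\,\dot{\phi}\,,\] where the last equality is one integration by parts in \({\bf x}\). The radial delta function then converts \(\int d^{d}p/(2\pi)^{d}\) into \(p_{F}^{d-1}\int d^{d-1}\theta/(2\pi)^{d}\) with \(\phi\) evaluated on the Fermi surface, which is precisely \(S_{\rm WZW}^{(2)}\) above; the cubic term (5.126) follows from the same manipulations applied to the nested bracket.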
The fluctuation \(\delta f\) is at least linear in \(\phi\), so only the first two lines of the Hamiltonian contribute to the quadratic action, \[\begin{split} S^{(2)}_{H}=&-\frac{p_{F}^{d-1}}{2} \int_{\tt tx\theta^{\prime}}v_{F}(\nabla_{n}\phi)^{2}\\ &-\frac{p_{F}^{d-1}}{2}\int_{t\tt tx\theta^{\prime}}v_{F}F^{(2,0) }(\theta,\theta^{\prime})(\nabla_{n}\phi)(\nabla_{n}\phi)^{\prime}\,,\end{split} \tag{5.129}\] where \(v_{F}=\epsilon^{\prime}(p_{F})\), \(F^{(2,0)}(\theta,\theta^{\prime})=p_{F}^{d-1}F^{(2,0)}(p_{F}{\bf n}_{\theta}, p_{F}{\bf n}_{\theta}^{\prime})/v_{F}\) is defined to be dimensionless, \(\phi^{\prime}=\phi(t,{\bf x},\theta^{\prime})\) and \(\int_{t\tt x\theta\theta^{\prime}}\) is defined with a factor of \((2\pi)^{2d}\) in the denominator. We have also introduced the notation \(\nabla_{n}\phi={\bf n}_{\theta}\cdot\nabla_{\bf x}\phi\) for compactness and \((\nabla_{n}\phi)^{\prime}\) is the same quantity evaluated at \(\theta^{\prime}\). For the cubic part of the action, we get contributions from all three lines of the Hamiltonian and we find \[\begin{split} S^{(3)}_{H}=&-\frac{p_{F}^{d-2}}{3!} \int_{t\tt tx\theta}\left(\frac{d-1}{2}v_{F}+p_{F}\epsilon^{\prime\prime} \right)(\nabla_{n}\phi)^{3}\\ &-\frac{p_{F}^{d-2}}{2}\int_{t\tt x\theta\theta^{\prime}}F^{(2,0) }_{1}(\theta,\theta^{\prime})\left[(\nabla_{n}\phi)^{2}(\nabla_{n}\phi)^{\prime }+(\theta\leftrightarrow\theta^{\prime})\right]\\ &-\frac{p_{F}^{d-2}}{2}\int_{t\tt x\theta\theta^{\prime}}v_{F}F^{(2, 0)}(\theta,\theta^{\prime})\Big{[}\big{[}(\nabla^{i}_{s}\nabla_{n}\phi)( \partial_{\theta^{i}}\phi)-(\partial_{\theta^{i}}\nabla_{n}\phi)(\nabla^{i}_{ s}\phi)\big{]}(\nabla_{n}\phi)^{\prime}+(\theta\leftrightarrow\theta^{\prime}) \Big{]}\\ &-\frac{p_{F}^{d-2}}{3}\int_{t\tt x\theta\theta^{\prime}\theta^{ \prime\prime}}F^{(3,0)}(\theta,\theta^{\prime},\theta^{\prime\prime})(\nabla_{n} \phi)(\nabla_{n}\phi)^{\prime}(\nabla_{n}\phi)^{\prime\prime}\,,\end{split} \tag{5.130}\] where \(\nabla_{s}^{i}\phi=\mathbf{s}_{\theta}^{i}\cdot\nabla_{\mathbf{x}}\phi\), and \(F_{1}^{(2,0)}\) is the derivative \(\mathbf{n}_{\theta}\cdot\nabla_{\mathbf{p}}F^{(2,0)}\) evaluated at the Fermi surface and appropriately rescaled to make it dimensionless. \(F^{(3,0)}\) has also similarly been evaluated at the Fermi surface and rescaled, and \(\epsilon^{\prime\prime}\) is the second derivative of the dispersion with respect to the radial momentum evaluated at the Fermi surface. 
Collecting everything, we can write down the interacting action up to cubic order: \[\begin{split} S=&-\frac{p_{F}^{d-1}}{2}\int_{ \text{rc}\theta}\nabla_{n}\phi\left(\dot{\phi}+v_{F}\nabla_{n}\phi+v_{F}\int_ {\theta^{\prime}}F^{(2,0)}(\theta,\theta^{\prime})(\nabla_{n}\phi)^{\prime} \right)\\ &-\frac{p_{F}^{d-2}}{3!}\int_{\text{rc}\theta}\nabla_{n}\phi \left[(\nabla_{s}^{i}\phi)(\partial_{\theta^{\prime}}\dot{\phi})-(\nabla_{s}^ {i}\dot{\phi})(\partial_{\theta^{\prime}}\phi)\right]+\left(\frac{d-1}{2}v_{F} +p_{F}\epsilon^{\prime\prime}\right)(\nabla_{n}\phi)^{3}\\ &-\frac{p_{F}^{d-2}}{2}\int_{\text{rc}\theta\theta^{\prime}}v_{F} F^{(2,0)}(\theta,\theta^{\prime})\Big{[}\big{[}(\nabla_{s}^{i}\nabla_{n}\phi)( \partial_{\theta^{\prime}}\phi)-(\partial_{\theta^{i}}\nabla_{n}\phi)(\nabla_{ s}^{i}\phi)\big{]}(\nabla_{n}\phi)^{\prime}+(\theta\leftrightarrow\theta^{ \prime})\Big{]}\\ &-\frac{p_{F}^{d-2}}{2}\int_{\text{rc}\theta\theta^{\prime}}F_{1} ^{(2,0)}(\theta,\theta^{\prime})\left[(\nabla_{n}\phi)^{2}(\nabla_{n}\phi)^{ \prime}+(\theta\leftrightarrow\theta^{\prime})\right]\\ &-\frac{p_{F}^{d-2}}{3}\int_{\text{rc}\theta\theta^{\prime}\theta ^{\prime\prime}}F^{(3,0)}(\theta,\theta^{\prime},\theta^{\prime\prime})( \nabla_{n}\phi)(\nabla_{n}\phi)^{\prime}(\nabla_{n}\phi)^{\prime\prime}\\ &+\mathcal{O}(\phi^{4})\,.\end{split} \tag{5.131}\] The first line is the Gaussian part of the action, which includes the Landau parameters \(F^{(2,0)}(\theta,\theta^{\prime})\). The second line is the free fermion contribution to the cubic part of the action, and the remaining three lines are cubic contributions with three independent Wilson coefficient functions. The quadratic part of the action is almost identical to the action obtained from multidimensional bosonization (2.18)[15; 16; 17], with one crucial difference: the angular coordinates \(\theta\) in our case are genuinely continuous variables as opposed to discrete labels for patches on the Fermi surface. Furthermore, the nonlinear and higher derivative corrections that the coadjoint orbit formalism provides can be interpreted as corrections coming from the curvature of the Fermi surface, nonlinearities in the dispersion relation, as well as intra-patch and inter-patch scattering. Since the coadjoint orbit method does not require a discretization of the Fermi surface to begin with, the corrections in our action do not distinguish between intra-patch and inter-patch effects and treat them collectively in an expansion in \(\mathbf{x}\) and \(\theta\) derivatives. Note that the cubic terms are suppressed compared to the quadratic ones by a factor of \(\nabla_{\mathbf{x}}/p_{F}\), owing to the scaling properties of the Poisson bracket described in the discussion before equation (4.14). The expansion in nonlinearities in \(\phi\) hence makes our action an effective field theory with a derivative expansion suppressed by the UV cutoff \(p_{F}\). With the expanded action in hand, we can study its properties under scaling of space \(\mathbf{x}\to s^{-1}\mathbf{x}\) with \(s\lesssim 1\). In principle we have a choice to make for how \(\theta\) scales, e.g., compared to the angle on some external observable. Different choice of this scaling result in different scaling of our theory under RG. The choice that we will make to to leave \(\theta\) invariant under scaling. This is consistent with the RG scheme of Shankar and Polchinski [10; 11] where they scale all momenta toward the Fermi surface without changing the angle between them. 
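The dimension assignments implied by this scheme can be tabulated mechanically; the short sketch below (our own bookkeeping, assuming time and space scale the same way, as read off next from the quadratic action) reproduces the values quoted below.

```python
import sympy as sp

d, dphi = sp.symbols('d Delta_phi')

# momentum dimensions with z = 1 and the angles theta not scaling:
# [dt] = -1, [d^d x] = -d, [d^{d-1}theta] = 0, each derivative = +1
measure   = -(1 + d)                    # dt d^d x d^{d-1}theta
quadratic = measure + 2 + 2 * dphi      # (n.grad phi)(d_t phi + v_F n.grad phi)

# marginality of the Gaussian action fixes the dimension of phi
dphi_val = sp.solve(sp.Eq(quadratic, 0), dphi)[0]
print(dphi_val)                         # (d - 1)/2, cf. (5.132)

# every extra factor of (grad phi) relative to the quadratic term then costs
print(sp.simplify(1 + dphi_val))        # (d + 1)/2 > 0: cubic and higher terms are irrelevant
```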
The quadratic part of the action then tells us that time scales the same way as space, so the dynamical scaling exponent of our theory is \(z=1\). The scaling dimension of \(\phi\) can be obtained by requiring the quadratic part of the action to be marginal: \[[\phi]=\frac{d-1}{2}\,. \tag{5.132}\] The Landau parameters \(F^{(2,0)}(\theta,\theta^{\prime})\) are marginal as expected. The cubic terms all have an additional factor of \[[\nabla\phi]=\frac{d+1}{2}\,, \tag{5.133}\] compared to the quadratic terms, which makes them all strictly irrelevant in any number of dimensions, as is necessary for any effective field theory. The same holds for higher order terms in the expansion in \(\phi\), as well as interactions. Note that the interaction terms with Wilson coefficient functions \(\epsilon(\mathbf{p}),F^{(m,n)}(\mathbf{p}_{1},\ldots,\mathbf{p}_{m})\) in the Hamiltonian (4.28) do not have fixed scaling dimensions. Instead, they characterize a tower of coefficient functions that do have a fixed scaling dimension, given by the various derivatives of \(F^{(m,n)}(\mathbf{p}_{i})\) with respect to \(|\mathbf{p}_{i}|\) evaluated at the Fermi surface. The first derivative of the dispersion \(\epsilon(\mathbf{p})\) at the Fermi surface is the Fermi velocity and shows up at the quadratic level, whereas the \(n\)th derivative \(\partial_{p}^{n}\epsilon|_{p_{F}}\) shows up as a Wilson coefficient at order \(\phi^{n+1}\). Each \(|\mathbf{p}|\) derivative increases the scaling dimension of the corresponding operator by 1, so that \(\partial_{p}^{k}F^{(m,n)}\) scales in the same way as \(F^{(m+l,n+k-l)}\) for all non-negative integer values of \(l\leq k\). General observables can be constructed in the EFT from the operator \(\phi\) by constructing all possible terms with the required quantum numbers and taking a linear combination of them with arbitrary coefficients that are determined by matching correlation functions with experiments or microscopics, as is common in EFT. Any bosonic operator that is charge neutral can be constructed in this form, while fermionic operators are absent in this EFT. A special operator is the particle number current, which can be obtained from the gauged action (5.82). The current depends on the Wilson coefficient functions of the theory, but the density is universal since \(A_{0}\) only couples to the WZW term, \[\begin{split}\rho[\phi](t,\mathbf{x})&=\frac{ \delta S_{\text{WZW}}}{\delta A_{0}(t,\mathbf{x})}=\int_{\mathbf{p}}f_{\phi}( t,\mathbf{x},\mathbf{p})=\int d^{d-1}\theta\ \rho[\phi](t,\mathbf{x},\theta)\,.\\ \rho[\phi](t,\mathbf{x},\theta)&=\frac{\delta S_{ \text{WZW}}}{\delta A_{0}(t,\mathbf{x},\theta,|\mathbf{p}|=p_{F})}=\int\frac {p^{d-1}dp}{(2\pi)^{d}}f_{\phi}(t,\mathbf{x},\theta,p)\,,\end{split} \tag{5.134}\] where \(\rho[\phi](t,\mathbf{x},\theta)\) is the _angle-resolved density_, which is the density for the emergent LU(1) symmetry. The explicit expression for the total density in terms of expansions in \(\phi\) is \[\rho-\frac{p_{F}^{d}}{d(2\pi)^{d}}=\frac{p_{F}^{d-1}}{(2\pi)^{d}}\int_{\theta} \left(\nabla_{n}\phi+\frac{1}{2p_{F}}\nabla_{s}^{i}\left(\partial_{\theta^{i} }\phi\nabla_{n}\phi\right)+\mathcal{O}(\phi^{3})\right)\,, \tag{5.135}\] where we have subtracted the background density coming from the spherical Fermi surface and \(\int_{\theta}=\int d^{d-1}\theta\). We will refer to the fluctuation in the density as \(\rho\) from here onward and drop the constant background density. 
Note that the density can be written as a spatial divergence to all orders in phi, since \[\int_{\mathbf{x}}\rho-\rho_{0}=\int_{\mathbf{x}\mathbf{p}}f-f_{0}=0\,, \tag{5.136}\] where the last equality follows from the fact that corrections to \(f_{0}\) that determine \(f\) are all total phase space derivatives. We explicitly calculate the two point and three point density correlation function in the next section and a demonstration of the technical advantages of the postmodern formalism. ### Linear and nonlinear response Before calculating correlation functions, we derive a scaling form for \(n\)-point density correlators from the scaling analysis above as well as the \(\phi\)-propagator (with Landau parameters set to zero for simplicity), \[\langle\phi_{\theta}\phi_{\theta^{\prime}}\rangle(\omega,\mathbf{q})=i\frac{( 2\pi)^{d}}{p_{F}^{d-1}}\frac{\delta^{d-1}(\theta-\theta^{\prime})}{q_{n}( \omega-v_{F}q_{n})}\,, \tag{5.137}\] where \(q_{n}=\mathbf{n}_{\theta}\cdot\mathbf{q}\). The action takes a schematic expansion of the form \[\begin{split} S\sim p_{F}^{d-1}&\int_{\mathbf{x} \mathbf{\theta}}\dot{\phi}\left(\nabla\phi+\frac{1}{p_{F}}(\nabla\phi)^{2}+ \frac{1}{p_{F}^{2}}(\nabla\phi)^{3}+\ldots\right)\\ &+v_{F}\nabla\phi\left(\nabla\phi+\frac{1}{p_{F}}(\nabla\phi)^{2} +\frac{1}{p_{F}^{2}}(\nabla\phi)^{3}+\ldots\right)\,,\end{split} \tag{5.138}\] where we only highlight the dependence of the action on factors that scale. The density has a similar expansion: \[\rho\sim p_{F}^{d-1}\int_{\theta}\nabla\phi+\frac{1}{p_{F}}(\nabla\phi)^{2}+ \frac{1}{p_{F}^{3}}(\nabla\phi)^{3}+\ldots\quad. \tag{5.139}\] The nonlinear terms in the action as well as the density activate higher-point correlation functions for the density17 and we can check from the scaling properties of the vertices, the propagator, as well as the nonlinear corrections to the density that all tree level diagrams (e.g. the ones in figure 9) that contribute to the density \(n\)-point function scale like \(q^{0}\). Their scaling with respect to \(p_{F}\) and \(v_{F}\) can also similarly be determined, and we find the following scaling form for the correlators: Footnote 17: This was anticipated in the context of traditional bosonization in [63]. \[\langle\rho(\omega,\mathbf{q}_{1})\ldots\rho(\omega,\mathbf{q}_{n})\rangle= \frac{p_{F}^{d+1-n}}{v_{F}^{n-1}}g_{n}\left(\frac{\omega_{i}}{\omega_{j}}, \frac{v_{F}\mathbf{q}_{i}}{\omega_{j}}\right)\delta(\Sigma_{i}\omega_{i}) \delta(\Sigma_{i}\mathbf{q}_{i})+\mathcal{O}\left(\frac{\omega_{i}}{v_{F}p_{F} },\frac{\mathbf{q}_{i}}{p_{F}}\right)\,, \tag{5.140}\] where the subleading corrections come from loops as well as higher-derivative interactions in the Hamiltonian. This scaling form is also apparent from kinetic theory (see appendix F of [1]), but highly counter-intuitive in the fermionic approach where the leading behaviour is given by a single fermion loop with \(n\) external legs. Scaling arguments in the Shankar-Polchinski scheme tell us then that a given 1-loop diagram must scale like \(p_{F}^{d-1}/q_{\perp}^{n-2}\), where \(q_{\perp}\) is the component of the momentum orthogonal to the Fermi surface. This is incorrect and what rescues the calculation is a subtle cancellation that occurs upon symmetrizing the external legs of the diagram [42; 43]. Not only can we derive the scaling form of the density correlators from the postmodern formalism, but also determine which Wilson coefficients will contribute to scaling function \(g_{n}\). 
These are interactions that give vertices of order up to \((\nabla\phi)^{n}\). For free fermions, these coefficients are \[\epsilon^{(m\leq n)}=\partial_{p}^{m}\epsilon|_{p_{F}}\,, \tag{108}\] while the interaction functions that contribute to the \(n\)-point function are given by \[F_{(l)}^{(m,k)}|_{p_{F}}=(\partial_{p}^{l}F^{(m,k)})|_{p_{F}}\,,\qquad(m+k+l) \leq n\,, \tag{109}\] where \(\partial_{p}\) is a derivative with respect to the one of the radial momenta the \(F^{(m,k)}\) depends on. #### v.2.1 Landau damping We can now move on to explicit calculations of the density two and three point correlators. For the two point function, the Gaussian action suffices and we only need the linear-in-\(\phi\) term in the density operator, \[\rho=\frac{p_{F}^{d-1}}{(2\pi)^{d}}\int_{\theta}\nabla_{n}\phi+\dots\quad. \tag{110}\] Using the propagator (107) we find that the two-point function evaluates to the following expressions in terms of the hypergeometric function \({}_{2}F_{1}\), \[\langle\rho\rho\rangle\left(s=\frac{\omega}{v_{F}|\mathbf{q}|} \right)=i\frac{p_{F}^{d-1}}{(2\pi)^{d}}\frac{1}{v_{F}}\int d^{d-1}\theta\frac{ \cos\theta_{1}}{\cos\theta_{1}-s}\] \[=i\frac{p_{F}^{d-1}}{(2\pi)^{d}}\frac{1}{v_{F}}\frac{\pi^{d/2}}{ \Gamma(d/2)}\frac{2-\delta_{d,1}}{1+|s|}\left[{}_{2}F_{1}\left(1,\frac{d+1}{2} ;d,\frac{2}{1+|s|}\right)-{}_{2}F_{1}\left(1,\frac{d-1}{2};d-1,\frac{2}{1+|s|} \right)\right]\,, \tag{111}\] where \(\theta_{1}\) is one of the angles that parametrize the spherical Fermi surface - the polar angle from the direction of the external momentum \(\mathbf{q}\). The \(d+1\) loop integrals in the fermionic picture have been replaced by \(d-1\) angular integrals. In \(d=1\) the answer reduces to the well-known result for a Luttinger liquid: \[\langle\rho\rho\rangle(\omega,\mathbf{q})=-\frac{i}{\pi}\frac{v_{F}q^{2}}{ \omega^{2}-v_{F}^{2}q^{2}}\,. \tag{112}\] In \(d=2\) we recover the expression \[\langle\rho\rho\rangle(\omega,\mathbf{q})=\frac{i}{2\pi}\frac{p_{F}}{v_{F}}\left(1 -\frac{|\omega|}{\sqrt{\omega^{2}-v_{F}^{2}q^{2}}}\right)\,, \tag{5.146}\] with the branch-cut for \(|\omega|<v_{F}q\) coming from the particle-hole continuum, while in \(d=3\) we find [64] \[\langle\rho\rho\rangle(\omega,\mathbf{q})=\frac{i}{2\pi^{2}}\frac{p_{F}^{2}}{ v_{F}}\left(1+\frac{1}{2}\frac{|\omega|}{v_{F}q}\log\frac{|\omega|-v_{F}q}{| \omega|+v_{F}q}\right)\,. \tag{5.147}\] It is easy to see that the expressions are in agreement with the scaling form (5.140). #### v.2.2 Cubic response Next, we calculate the three point function. Note that even though the nonlinear-in-\(\phi\) terms in the Hamiltonian can be set to zero by choosing the interactions appropriately, the nonlinearities in the WZW term cannot be avoided and are a rigid part of the structure of our theory. These have no counterpart in \(d=1\) and encode the geometry of the Fermi surface, i.e., its curvature. The density three point function is hence necessarily non-vanishing for \(d>1\) for a circular Fermi surface, irrespective of what interactions are turned on. There are two diagrams that contribute to the density three point function, as shown in figure 9. We will refer to the first of the two as the'star' diagram, and the second as the 'triangle' or 'wedge' diagram. The latter comes from the quadratic part of the density: \[\rho^{(2)}=\frac{p_{F}^{d-2}}{2(2\pi)^{d}}\int_{\theta}\nabla_{s}^{i}(\partial _{\theta^{i}}\phi\nabla_{n}\phi)\,. 
\tag{5.148}\] The former is the consequence of the cubic vertices in the action, which can be separated into two distinct terms: the \(S_{H}^{(3)}\) piece obtained from the Hamiltonian and the \(S_{\mathrm{WZW}}^{(3)}\) piece from the WZW term. The \(S_{H}^{(3)}\) piece is the only one that picks up a contribution from \(\epsilon^{\prime\prime}\), and is given by \[\langle\rho\rho\rho\rangle_{H}=-\frac{p_{F}^{d-2}}{(2\pi)^{d}}\left(\frac{d-1 }{2}v_{F}+p_{F}\epsilon^{\prime\prime}\right)\int_{\theta}\frac{q_{n}}{\omega -v_{F}q_{n}}\frac{q_{n}^{\prime}}{\omega^{\prime}-v_{F}q_{n}^{\prime}}\frac{( q+q^{\prime})_{n}}{(\omega+\omega^{\prime})-v_{F}(q+q^{\prime})_{n}}\,, \tag{5.149}\] Figure 9: The density three-point function in fermionic and bosonic descriptions. with \(q_{n}={\bf n}_{\theta}\cdot{\bf q}\). The \(S^{(3)}_{\rm WZW}\) piece takes the form \[\langle\rho\rho\rho\rangle_{\rm WZW}=\frac{p_{F}^{d-2}}{3!(2\pi)^{d}}\int_{\theta }\frac{q_{n}}{\omega-v_{F}q_{n}}\frac{q_{s^{i}}^{\prime}}{\omega^{\prime}-v_{F}q _{n}^{\prime}}\partial_{\theta^{i}}\frac{\omega+2\omega^{\prime}}{(\omega+ \omega^{\prime})-v_{F}(q+q^{\prime})_{n}}+5\ {\rm perm.}\,, \tag{150}\] where the permutations are those of the set \(\{(\omega,{\bf q}),(\omega^{\prime},{\bf q}^{\prime}),(\omega^{\prime\prime}, {\bf q}^{\prime\prime})\}\) with \(\omega^{\prime\prime}=-\omega-\omega^{\prime}\) and \({\bf q}^{\prime\prime}=-{\bf q}-{\bf q}^{\prime}\) due to conservation of energy and momentum. Finally, the triangle/wedge diagram evaluates to \[\langle\rho\rho\rho\rangle_{\rho^{(2)}}=-\frac{p_{F}^{d-2}}{2(2\pi)^{d}}\int_ {\theta}\frac{q_{n}(q+q^{\prime})_{s^{i}}}{\omega-v_{F}q_{n}}\partial_{\theta ^{i}}\frac{1}{\omega^{\prime}-v_{F}q_{n}^{\prime}}+5\ {\rm perm.}\,, \tag{151}\] and the density three point function is given by the sum of the three expressions \[\langle\rho\rho\rho\rangle(\omega,{\bf q};\omega^{\prime},{\bf q}^{\prime})= \langle\rho\rho\rho\rangle_{\rho^{(2)}}+\langle\rho\rho\rho\rangle_{\rm WZW}+ \langle\rho\rho\rho\rangle_{H}\,. \tag{152}\] Each of the terms do indeed have the scaling form (140), as we expected. While directly matching this expression with the fermion loop in the scaling limit is a highly nontrivial task due to the complexity of the expression evaluated in [65; 66; 67] (for a Galilean invariant dispersion relation), we can instead calculate the density 3 point function using kinetic theory for an arbitrary dispersion and show that it matches with the above expressions. This was done in [1] and we refer the reader to appendix F in the paper for details. Of course this matching should not be unexpected, since the equation of motion for our theory is exactly the kinetic equation, and tree level diagrams reproduce classical physics that is captured by the equation of motion. ### UV/IR mixing and why it is not all _that_ bad Since the cubic and higher order terms in our action (131) are strictly irrelevant, in the deep IR we can focus only on the quadratic part of the action which, for free fermions, is given by \[S\sim\int_{\rm tx\theta}\nabla_{n}\phi(\dot{\phi}+v_{F}\nabla_{n}\phi)\,, \tag{153}\] The theory has a zero mode which propagates tangent to the Fermi surface. In momentum space, this corresponds to modes \[\phi(\omega=0,q_{n}=0,q_{s^{i}},\theta)\,, \tag{154}\] for all values of the tangential components \(q_{s^{i}}\) of the momentum. 
In particular, this means that we have low energy modes with indefinitely large momenta (of the order of the cutoff \(p_{F}\)) in our EFT, which is the hallmark of UV/IR mixing18. This results in UV divergences in loop contributions to correlation functions as well as thermodynamic properties. Footnote 18: This is similar to fractonic models where exotic symmetries disallow kinetic terms that would suppress large momentum modes at low energies, also resulting in UV/IR mixing. For instance, we can calculate the thermal partition function by rotating to imaginary time \(t=-i\tau\) and compactifying it on a thermal circle \(\tau\in[0,\beta]\), \[Z_{\rm FL}(\beta)=\det\left[q_{n}(-i\omega_{k}+v_{F}q_{n})\right]^{-1/2}\,, \tag{5.155}\] where \(\omega_{k}=2\pi Tk\) are bosonic Matsubara frequencies with \(k\in\mathbb{Z}\). The pressure is given by the logarithm of the partition function, \[P=\frac{T}{V}\log Z_{\rm FL}=-\frac{T}{2}\sum_{k}\int_{\mathbf{q},\theta}\log\left[q_{n}(-i\omega_{k}+v_{F}q_{n})\right]\,. \tag{5.156}\] Since the integrand has a zero mode, the pressure diverges. Nevertheless, we can still extract the scaling form of the pressure with respect to temperature in a hand-wavy manner by writing \(\int_{\mathbf{q}}=\int d^{d-1}q_{s}\int dq_{n}\). Since the integrand does not depend on \(q_{s^{i}}\), the integral over these components needs to be regulated by some cutoff. However, the momentum \(\mathbf{q}\) is bounded above by a physical cutoff \(p_{F}\), owing to the semiclassical truncation of the Moyal algebra to the Poisson algebra (4.14). This cutoff is not an arbitrary scale that is introduced by hand into low energy physics, but a measurable property of the IR. The integral \(\int d^{d-1}q_{s}\) hence must scale like \(p_{F}^{d-1}\), leaving only \[\sum_{k}\int dq_{n}\log[q_{n}(-i\omega_{k}+v_{F}q_{n})]\sim T\,. \tag{5.157}\] From this heuristic analysis we surprisingly find the correct scaling form for the pressure: \[P\sim p_{F}^{d-1}T^{2}\,. \tag{5.158}\] A similar problem occurs in loop corrections to correlation functions as well, for instance in the 1-loop correction to \(\langle\rho\rho\rangle\), and the result is not indifferent to how the integral is cut off. This suggests that there is a preferred way of introducing the cutoff \(p_{F}\) in loop integrals as well, which needs to be studied more carefully. One possible resolution would be a potential resummation of the Moyal expansion of the theory, which we leave for future work.

## VI A road to perturbative non-Fermi liquids

One of our main motivations for developing the coadjoint orbit formalism for Fermi liquids was to resolve the drawbacks of Fermi liquid theory that manifest themselves as serious bottlenecks when coupling to a gapless mode and studying the RG flow to a non-Fermi liquid. This approach to describing non-Fermi liquids as Fermi liquids coupled to a gapless mode is often known as the 'Hertz-Millis-Moriya' description [68; 69; 70] (see [71] for a review). The upper critical dimension for the coupling to the gapless mode is \(d=3\), which makes \(d=2\) the most interesting case to study, since there is no extended Fermi surface in \(d=1\) and bosonization allows for either exact or perturbative solutions to the \(d=1\) problem. The original approach developed by Hertz in the 1970's was to integrate out the Fermi surface and write down a non-local effective action for the gapless mode. Naturally, this approach is extremely uncontrolled and unreliable.
Progress was made after the development of the Shankar-Polchinski RG scheme using both fermionic EFT [12; 13; 14; 72; 73] as well as traditional bosonization [74; 50; 75], but these approaches were also found to be limited owing either to a lack of a systematic expansion [76; 77; 73] for fermionic EFTs or to the incompleteness of the traditional bosonized description. A controlled, systematic expansion is yet to be found and our hope is that the postmodern formalism for Fermi liquids can provide one. One advantage of a bosonized theory is that some important physical properties of non-Fermi liquids can already be captured from a Gaussian theory. The Gaussian truncation of the EFT (5.131) in \(d=2\) can be coupled to a bosonic field \(\Phi(t,\mathbf{x})\) through the linearized density, \[S_{\rm NFL}^{(2)}=-\frac{p_{F}^{2}}{8\pi^{2}}\int_{t\mathbf{x}\theta}\nabla_{ n}\phi\left(\dot{\phi}+v_{F}\nabla_{n}\phi\right)-\frac{1}{2}\int_{t\mathbf{x}} \left[(\nabla\Phi)^{2}+k_{0}^{2}\Phi^{2}\right]+\lambda\frac{p_{F}}{4\pi^{2}} \int_{t\mathbf{x}}\Phi\int_{\theta}\nabla_{n}\phi\,, \tag{6.1}\] with the bare mass \(k_{0}^{2}\) tuned to criticality. The coupling can be generalized to a spin-\(l\) harmonic of the Fermi surface by inserting an additional factor of \(\cos(l\theta)\). This action is Gaussian and can hence be exactly solved. The \(\Phi\) propagator is Landau damped, \[\langle\Phi\Phi\rangle(\omega,\mathbf{q})=\frac{i}{q^{2}+k_{0}^{2}-\langle\rho \rho\rangle(\omega,\mathbf{q})}\,, \tag{6.2}\] with \(\langle\rho\rho\rangle\) being the tree level density two point function (5.146). Taking the limit \(\omega\ll q\) and tuning the boson mass to criticality by setting \(k_{0}^{2}=-p_{F}\lambda^{2}/2\pi v_{F}\), we find \[\langle\Phi\Phi\rangle(\omega,\mathbf{q})\simeq\frac{1}{q^{2}-i\frac{p_{F} \lambda^{2}}{2\pi v_{F}^{2}}\frac{|\omega|}{v_{F}q}}\,,\qquad\omega\ll v_{F}q\,, \tag{6.3}\] from which we can read off the dynamical critical exponent: \[z=3\,. \tag{6.4}\] The temperature scaling of the specific heat can also be calculated from this Gaussian theory from the thermal partition function, \[Z_{\rm NFL}(\beta)=\int D\phi D\Phi\ e^{-S_{E}}\,, \tag{6.5}\] where \(S_{E}\) is the Euclidean action obtained by Wick rotating \(t=-i\tau\) and putting imaginary time on a circle \(\tau\in[0,\beta]\). The partition function can be calculated by first integrating over \(\phi\) followed by \(\Phi\) and we find that it factorizes into a product of a Fermi liquid contribution and a Landau-damped critical boson contribution, \[Z_{\rm NFL}=\det\left[q_{n}(-i\omega_{k}+v_{F}q_{n})\right]^{-1/2}\det\left(q^ {2}+\frac{p_{F}\lambda^{2}}{2\pi v_{F}}\frac{|\omega_{k}|}{\sqrt{\omega_{k}^{2 }+v_{F}^{2}q^{2}}}\right)^{-1/2}\,, \tag{6.6}\] where \(\omega_{k}=2\pi Tk\) are bosonic Matsubara frequencies with \(k\in\mathbb{Z}\). The free energy or pressure then also splits up into a sum of a Fermi liquid contribution and a Landau-damped critical boson contribution. \[\begin{split} P&=\frac{T}{V}\log Z_{\rm NFL}\\ &=-\frac{T}{2}\sum_{k}\int_{\mathbf{q},\theta}\log\left[q_{n}(-i \omega_{k}+v_{F}q_{n})\right]-\frac{T}{2}\sum_{k}\int_{\mathbf{q}}\log\left(q ^{2}+\frac{p_{F}\lambda^{2}}{2\pi v_{F}}\frac{|\omega_{k}|}{\sqrt{\omega_{k}^ {2}+v_{F}^{2}q^{2}}}\right)\,.\end{split} \tag{6.7}\] As discussed in section V.6, the Fermi liquid contribution in the EFT suffers from UV/IR mixing and needs to be regulated appropriately. 
Nevertheless we can deduce its scaling form to be \(P_{\rm FL}\sim p_{F}T^{2}\) from a heuristic scaling analysis. Restricting our attention to low temperatures, the Matsubara sum for the critical boson contribution is dominated in the IR by frequencies of order \(\omega_{k}\sim q^{3}\ll v_{F}q\), allowing us to simplify the integral to \[\int_{q}\log\left(q^{2}+\tilde{\lambda}^{2}\frac{|\omega_{k}|}{q}\right)= \tilde{\lambda}^{4/3}\frac{|\omega_{k}|^{2/3}}{2\sqrt{3}}\,, \tag{6.8}\] after dropping a temperature-independent UV divergence, and defining \(\tilde{\lambda}^{2}=p_{F}\lambda^{2}/2\pi v_{F}^{2}\). The Matsubara sum is also divergent but can be regulated by introducing an exponential \(e^{-\varepsilon k}\) in the sum with \(\varepsilon>0\) to suppress the large \(k\) contribution, expanding for small \(\varepsilon\) and then subtracting off divergent pieces to find \[\sum_{k}k^{2/3}\simeq\zeta(-2/3)\,. \tag{6.9}\] We ultimately find that the critical boson contribution to the pressure evaluates to \[P=-\frac{\zeta(-2/3)}{4\sqrt{3}}\tilde{\lambda}^{4/3}T^{5/3}\,. \tag{6.10}\] At low temperatures, \(T^{5/3}\) dominates over \(T^{2}\) and the Fermi liquid contribution to the specific heat can be dropped. Any concerns about UV/IR mixing also vanish with it, since the critical boson contribution does not suffer from UV/IR mixing. The low temperature specific heat of the Gaussian NFL is hence given by \[c_{V}=T\frac{ds}{dT}=T\frac{d^{2}P}{dT^{2}}=-\frac{5\zeta(-2/3)}{18\sqrt{3}}\tilde {\lambda}^{4/3}T^{2/3}\,, \tag{6.11}\] in perfect agreement with the \(T^{2/3}\) scaling of the specific heat found from other approaches [22]. ### Scaling in non-Fermi liquids19 Footnote 19: The results presented in this section are based on ongoing work, soon to appear. The Gaussian truncation of the Fermi liquid EFT is evidently insufficient for a full description of the NFL (see e.g., [78]). But now that we know how systematically add corrections to the Gaussian action, we can hope to analyze the theory with the corrections and perform an RG analysis for the coupling to the gapless boson. The bosonized NFL action up to cubic order in arbitrary dimensions \(d\), for instance, looks like \[\begin{split} S_{\text{NFL}}[\phi,\Phi]=&-\frac{p _{F}^{d-1}}{2(2\pi)^{d}}\int_{\text{tx}\theta}\nabla_{n}\phi\left(\dot{\phi}+v _{F}\nabla_{n}\phi\right)\\ &-\frac{p_{F}^{d-2}}{3!(2\pi)^{d}}\int_{\text{tx}\theta}\nabla_{ n}\phi\left[\nabla_{s}^{i}\phi\partial_{\theta^{i}}\dot{\phi}-\nabla_{s}^{i} \dot{\phi}\partial_{\theta^{i}}\phi\right]+\left[\epsilon^{\prime\prime}+ \frac{d-1}{2}\frac{v_{F}}{p_{F}}\right](\nabla_{n}\phi)^{3}\\ &-\lambda\frac{p_{F}^{d-1}}{(2\pi)^{d}}\int_{\text{tx}}\Phi\int_{ \theta}\nabla_{n}\phi+\frac{1}{2p_{F}}\nabla_{s}^{i}(\partial_{\theta^{i}} \phi\nabla_{n}\phi)\\ &-\frac{1}{2}\int_{\text{tx}}\Phi\left(-|\nabla|^{1+\epsilon} \right)\Phi\\ &+\mathcal{O}(\phi,\Phi)^{4}\,,\end{split} \tag{6.12}\] where we have replaced the kinetic term for the critical boson by a non-local term, a la Nayak-Wilczek [13; 14]. The bare mass term for the critical boson has also been suppressed for brevity, since it is tuned to criticality anyway. We can now attempt to understand the scaling properties of this theory. From the tree level propagator (6.2) of the critical boson, it is clear that time must scale with a non-trivial power \(z\neq 1\) of space. However, requiring every term in the Gaussian part of the action then necessitates that the angles \(\theta\) scale with \(q\) as well. 
This can be understood in the following way: ultimately the scaling properties of the actions are to be applied to correlation functions with external momenta. Pick one such external momentum, \(\mathbf{Q}\), and decompose the momentum \(\mathbf{q}\) of the fields parallel and perpendicular to the external momentum: \[\mathbf{q}=q_{\parallel}\frac{\mathbf{Q}}{|\mathbf{Q}|}+\mathbf{q}_{\perp}\,. \tag{6.13}\] Parametrize the Fermi surface with angles \(\theta_{i}\) such that \(\theta_{d-1}\) is the polar angle subtended from the direction of \(\mathbf{Q}\) and the rest \(\theta_{1},\ldots,\theta_{d-2}\) are azimuthal angles for the \((d-2)\)-spherical slices of the Fermi surface for a fixed \(\theta_{d-1}\). The external momentum \(\mathbf{Q}\) couples most strongly to the parts of the Fermi surface that are tangent to it, i.e., at the equator when \(\theta_{d-1}\approx\pi/2\). Define \(\delta\theta=\theta_{d-1}-\pi/2\). In this parametrization we have \[\nabla_{n}\sim|\mathbf{q}_{\perp}|+q_{\parallel}\delta\theta\,. \tag{6.14}\] Marginality of the quadratic part of the Fermi liquid action (the first line of equation (6.12)) then requires \[\omega\sim|\mathbf{q}_{\perp}|\sim q_{\parallel}\delta\theta\,. \tag{6.15}\] If we let frequency scale with an arbitrary power (greater than 1) of the parallel momentum, \[\omega\sim q_{\parallel}^{z}\,, \tag{6.16}\] we find that the polar angle must scale toward the equator and the field momentum must scale towards a direction tangential to the Fermi surface (and collinear with the external momentum): \[\delta\theta\sim\frac{\omega}{q_{\parallel}}\sim q_{\parallel}^{z-1},\qquad|\mathbf{q}_{\perp}|\sim\omega\sim q_{\parallel}^{z}\,. \tag{6.17}\] Since the transverse components \(\mathbf{q}_{\perp}\) scale to zero much faster than the parallel component, the parallel component in the IR scales like the magnitude of the field momentum \(q_{\parallel}\sim q\) and the following scaling relations hold: \[\omega\sim|\mathbf{q}_{\perp}|\sim q^{z}\,,\qquad\delta\theta\sim q^{z-1}\,,\qquad\nabla_{n}\sim q^{z}\,,\qquad\nabla_{s}^{i}\sim q\,. \tag{6.18}\] From this we can calculate the scaling dimension of \(\phi\), \[\phi\sim q^{1+z(d-3)/2}\,, \tag{6.19}\] and that of the density, \[\rho\sim\int_{\theta}\nabla_{n}\phi\sim q^{z(d+1)/2}\,. \tag{6.20}\] The scaling dimension of the critical boson can be calculated from its kinetic term, \[\Phi\sim q^{(zd-\epsilon)/2}\,, \tag{6.21}\] and requiring the Gaussian part of the interaction to be marginal sets the dynamical critical exponent: \[z=2+\epsilon\,, \tag{6.22}\] which is consistent with \(z=3\) for \(\epsilon=1\). Now let us look at the cubic terms. The cubic part of the Hamiltonian term scales like \[\frac{S_{H}^{(3)}}{S^{(2)}}\sim\nabla_{n}\phi\sim q^{1+z(d-1)/2}\,, \tag{6.23}\] which is irrelevant for all values of \(z>0,d\geq 1\), and hence does not contribute to the RG flow in any dimension. The cubic parts of the WZW term as well as the coupling, on the other hand, scale differently: \[\frac{S_{\rm WZW}^{(3)}}{S^{(2)}}\sim\frac{S_{\rm int}^{(3)}}{S^{(2)}}\sim\nabla_{s}^{i}\partial_{\theta^{i}}\phi\sim q^{3-z(5-d)/2}\,. \tag{6.24}\] This gives us a set of marginal cubic corrections in the \((d,z)\)-plane, \[z=\frac{6}{5-d}\quad\Leftrightarrow\quad d=5-\frac{6}{z}\,,\qquad\begin{array}{c|cccc}d&2&3&4&5\\\hline z&2&3&6&\infty\end{array}\] with the corrections being relevant if \(z\) is larger at fixed \(d\) or \(d\) is smaller at fixed \(z\).
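The marginality line is easy to check mechanically. The short symbolic sketch below is not part of the original text; it simply encodes the scaling dimensions quoted above (with \(\nabla_{s}\sim q\), \(\partial_{\theta}\sim q^{1-z}\) and \([\phi]=1+z(d-3)/2\)) and recovers both the exponent in (6.24) and the line \(z=6/(5-d)\) with the tabulated values.

```python
import sympy as sp

d, z = sp.symbols('d z')

dim_phi    = 1 + z*(d - 3)/2     # scaling dimension of phi, Eq. (6.19)
dim_grad_s = 1                   # nabla_s scales as q, Eq. (6.18)
dim_dtheta = 1 - z               # d/dtheta scales as q**(1-z) since delta-theta ~ q**(z-1)

# exponent of S_WZW^(3)/S^(2) ~ nabla_s d_theta phi, cf. Eq. (6.24)
exponent = sp.expand(dim_grad_s + dim_dtheta + dim_phi)
print(sp.simplify(exponent - (3 - z*(5 - d)/2)))     # 0, i.e. the exponent is 3 - z(5-d)/2

z_marginal = sp.solve(sp.Eq(exponent, 0), z)[0]
print(sp.simplify(z_marginal - 6/(5 - d)))           # 0, i.e. marginality at z = 6/(5-d)
print([(dv, z_marginal.subs(d, dv)) for dv in (2, 3, 4)])   # [(2, 2), (3, 3), (4, 6)]
```

The last line reproduces the values on the marginality line quoted in the table above; moving \(z\) above the line (or \(d\) below it) makes the exponent negative, i.e. the cubic corrections relevant.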
This suggests two possible methods to obtain a perturbative NFL fixed point: * \(d=2,z=2-\epsilon\) for small \(\epsilon\) (Nayak-Wilczek). * \(d=3-\epsilon,z=3\) for small \(\epsilon\) (dimensional regularization). The former has the advantage of being technically simpler by virtue of having fewer angles to integrate over, while the latter has the advantage of having a local order parameter and a more traditional and familiar expansion, similar to the perturbative fixed point for the \(O(N)\) model. We leave an explicit analysis of both these expansions to future work. ## VII Spin and BCS Extensions So far we have been exclusively working with spinless fermions and the charge 0 bosonic operators that can be constructed from them. Only a small class of fermion systems fall into this category so a natural extension would be to understand how to include internal symmetries as well as charged operators. The way this is achieved in traditional multidimensional bosonization is by writing a non-abelian patch fermion in terms of a bosonic vertex operator (see e.g., [17]), \[\psi_{i}(\eta)\sim e^{i\phi_{i}(\eta)}\,, \tag{109}\] where \(i\) is an internal index, e.g., spin, and \(\eta\) is a discrete label for the patches into which the Fermi surface is decomposed. There are various issues with this construction. The most immediate objection one could have is the mismatch of operator statistics on both sides. A fermion operator cannot possibly be written as a bosonic operator. In 1+1d this works in a subtle way since the Bose-Fermi duality is not strictly between the bosonic and fermionic theories, but rather the bosonic theory is dual to the fermionic one with a gauged \((-1)^{F}\) fermion parity symmetry. An intuitive way of thinking about this is that exchanging operators in 1+1d forces us to pass a coincidence singularity which allows for non-trivial transition functions to enter the exchange statistics of operators, unlike in higher dimensions. The usual workaround for this is an 'engineering' solution which multiplies the bosonic vertex operators by a 'Klein factor' \(O_{\eta}\) that obeys anticommutation relations and fixes the mismatch of exchange statistics on both sides. But this solution is unsatisfactory and unsystematic since its not clear whether these factors are supposed to be treated as dynamical quantities (to be integrated over in a path integral) or effectively as transition functions between different patches on the Fermi surface and if the physics of the bosonized theory is independent of the choice of Klein factors. Secondly, the bosonization prescription ignores the non-abelian nature of the fermion, since the bosonic field \(\phi_{i}\) transforms in the same representation of the internal symmetry as the patch fermion. This is evidently incorrect since nonabelian bosonization requires the addition of WZW terms in one higher dimension [79], and the bosonized field lives in the square of the representation of the patch fermion. We take an alternate approach to bosonizing Fermi surfaces of non-abelian fermions - one that relies on the algebra of fermion bilinears that can be constructed from the microscopic fermion bilinears. ### Spinful Fermi surfaces Recall that our starting point for the postmodern formalism was the algebra of fermion bilinears. For spin-1/2 fermions, the same holds, but the generators of our algebra have additional indices. 
\[\begin{split} T_{\sigma\sigma^{\prime}}(\mathbf{x},\mathbf{y})& \equiv\frac{i}{2}\left[\psi_{\sigma}^{\dagger}\left(\mathbf{x}+ \frac{\mathbf{y}}{2}\right)\psi_{\sigma^{\prime}}\left(\mathbf{x}-\frac{ \mathbf{y}}{2}\right)-\psi_{\sigma^{\prime}}\left(\mathbf{x}-\frac{\mathbf{y} }{2}\right)\psi_{\sigma}^{\dagger}\left(\mathbf{x}+\frac{\mathbf{y}}{2}\right) \right]\,,\\ T_{\sigma\sigma^{\prime}}(\mathbf{q},\mathbf{p})& \equiv\frac{i}{2}\left[\psi_{\sigma}^{\dagger}\left(\frac{ \mathbf{q}}{2}+\mathbf{p}\right)\psi_{\sigma^{\prime}}\left(\frac{\mathbf{q} }{2}-\mathbf{p}\right)-\psi_{\sigma^{\prime}}\left(\frac{\mathbf{q}}{2}- \mathbf{p}\right)\psi_{\sigma}^{\dagger}\left(\frac{\mathbf{q}}{2}+\mathbf{ p}\right)\right]\,,\\ T_{\sigma\sigma^{\prime}}(\mathbf{x},\mathbf{p})& \equiv\int_{\mathbf{y}}T_{\sigma\sigma^{\prime}}(\mathbf{x}, \mathbf{y})e^{i\mathbf{p}\cdot\mathbf{y}}=\int_{\mathbf{q}}T_{\sigma\sigma^{ \prime}}(\mathbf{q},\mathbf{p})e^{-i\mathbf{q}\cdot\mathbf{x}}\,,\\ T_{\sigma\sigma^{\prime}}(\mathbf{q},\mathbf{y})& \equiv\int_{\mathbf{x},\mathbf{p}}T_{\sigma\sigma^{\prime}}(\mathbf{x}, \mathbf{p})e^{i\mathbf{q}\cdot\mathbf{x}}e^{-i\mathbf{p}\cdot\mathbf{y}}=\int_ {\mathbf{x}}T_{\sigma\sigma^{\prime}}(\mathbf{x},\mathbf{y})e^{i\mathbf{q} \cdot\mathbf{x}}=\int_{\mathbf{p}}T_{\sigma\sigma^{\prime}}(\mathbf{q}, \mathbf{p})e^{-i\mathbf{p}\cdot\mathbf{y}}\,.\end{split} \tag{7.2}\] Ignoring the dependence on phase space coordinates, the generators live in the tensor product representation, \[\frac{1}{2}\otimes\frac{1}{2}=0\oplus 1\,, \tag{7.3}\] of the fundamental (spin-1/2) representation of \(SU(2)\), which decomposes into a direct sum of the scalar (singlet) and the adjoint (triplet). Therefore an alternate choice of basis for these generators is given by \[T^{a}(\mathbf{x},\mathbf{p})=\frac{i}{2}\int_{\mathbf{y}}\left[\psi^{\dagger }\left(\mathbf{x}+\frac{\mathbf{y}}{2}\right)\cdot S^{a}\cdot\psi\left( \mathbf{x}-\frac{\mathbf{y}}{2}\right)-\text{h.c.}\right]e^{i\mathbf{p}\cdot \mathbf{y}}\,,\qquad a=0,1,2,3\,, \tag{7.4}\] where h.c. stands for hermitian conjugate and \(S^{0}=1\) is the identity matrix and \(S^{i}=\sigma^{i}/2\) are the generators of the Lie algebra \(\mathfrak{su}(2)\). The generators close under commutation and we have \[\begin{split}[T^{a}(\mathbf{q},\mathbf{y}),T^{b}(\mathbf{q}^{ \prime},\mathbf{y}^{\prime})]=2\left(i\cos\frac{\mathbf{q}^{\prime}\cdot \mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime}}{2}[S^{a},S^{b}]^{c}+\sin\frac{ \mathbf{q}^{\prime}\cdot\mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime}}{2}[S^{ a},S^{b}]^{c}_{+}\right)\\ \times\,T^{c}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}+ \mathbf{y}^{\prime})\,,\end{split} \tag{7.5}\] where \([S^{a},S^{b}]^{c}\) and \([S^{a},S^{b}]^{c}_{+}\) are respectively the components of the commutator and anticommutator of the spin generators expanded in the \(S^{c}\) basis. This Lie algebra, which we refer to as the \(\mathfrak{su}(2)\)_-extended Moyal algebra_ or the _spin-Moyal algebra_ is isomorphic, as a vector space, to the tensor product \[\mathfrak{g}_{\text{spin-Moyal}}\cong(\mathbb{C}\oplus\mathfrak{su}(2))\otimes \mathfrak{g}_{\text{Moyal}}\,, \tag{7.6}\] where \(\mathbb{C}\) is a one-dimensional complex vector space. 
The semi-classical / Poisson limit is the same as before (4.12), and we find that this truncates to the following algebra: \[\begin{split}[T^{0}(\mathbf{q},\mathbf{y}),T^{0}(\mathbf{q}^{\prime},\mathbf{y}^{\prime})]&=(\mathbf{q}^{\prime}\cdot\mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime})T^{0}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}+\mathbf{y}^{\prime})\,,\\ [T^{0}(\mathbf{q},\mathbf{y}),T^{i}(\mathbf{q}^{\prime},\mathbf{y}^{\prime})]&=(\mathbf{q}^{\prime}\cdot\mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime})T^{i}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}+\mathbf{y}^{\prime})\,,\\ [T^{i}(\mathbf{q},\mathbf{y}),T^{j}(\mathbf{q}^{\prime},\mathbf{y}^{\prime})]&=-f^{ijk}T^{k}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}+\mathbf{y}^{\prime})\,,\end{split} \tag{7.7}\] where \(f^{ijk}\) are the structure constants of \(\mathfrak{su}(2)\). One can check by explicit calculation that these truncated commutators do indeed obey the Jacobi identity. Convolving with a \(0\oplus 1\)-valued phase space function to define a general linear combination of these generators, \[O_{F}\equiv\int_{\mathbf{x},\mathbf{p}}F^{a}(\mathbf{x},\mathbf{p})T^{a}(\mathbf{x},\mathbf{p})\,, \tag{7.8}\] we find that the commutator \([O_{F},O_{G}]\) for two arbitrary \(0\oplus 1\)-valued functions \(F^{a}(\mathbf{x},\mathbf{p})\) and \(G^{a}(\mathbf{x},\mathbf{p})\) is given by another operator \(O_{[F,G]_{\text{spin}}}\) corresponding to the components \[\begin{split}[F,G]^{0}_{\text{spin}}&=\{F^{0},G^{0}\}\,,\\ [F,G]^{k}_{\text{spin}}&=\{F^{0},G^{k}\}+\{F^{k},G^{0}\}-f^{ijk}F^{i}G^{j}\,.\end{split} \tag{7.9}\] We will refer to this Lie algebra as the _\(\mathfrak{su}(2)\)-extended Poisson algebra_ or the _spin-Poisson algebra_: \[\mathfrak{g}_{\text{spin-Poisson}}\cong(\mathbb{C}\oplus\mathfrak{su}(2))\otimes\mathfrak{g}\,. \tag{7.10}\] This algebra also has an interpretation in terms of canonical transformations in a single particle phase-space, except that we now allow the action of (infinitesimal) canonical transformations on functions with spin indices to mix with transformations of the spin indices. For a function \(O(\mathbf{x},\mathbf{p})\), with suppressed internal indices, that transforms under some representation \(\rho\) of \(\mathfrak{su}(2)\), the infinitesimal transformation is given by \[\begin{split}\mathbf{x}\to\mathbf{x}^{\prime}&=\mathbf{x}-\nabla_{\mathbf{p}}F^{0}\,,\\ \mathbf{p}\to\mathbf{p}^{\prime}&=\mathbf{p}+\nabla_{\mathbf{x}}F^{0}\,,\\ O(\mathbf{x},\mathbf{p})\to O^{\prime}(\mathbf{x}^{\prime},\mathbf{p}^{\prime})&=\left(1_{\rho}+F^{i}\rho(S^{i})\right)\cdot O(\mathbf{x},\mathbf{p})\,,\end{split} \tag{7.11}\] where \(1_{\rho}\) is the identity operator in the representation \(\rho\). One can see that these transformations are generated by the phase space vector field valued in the representation \(\rho\): \[X_{F}=\left(\nabla_{\mathbf{x}}F^{0}\cdot\nabla_{\mathbf{p}}-\nabla_{\mathbf{p}}F^{0}\cdot\nabla_{\mathbf{x}}\right)\cdot 1_{\rho}+F^{i}\rho(S^{i})\,. \tag{7.12}\] Evaluating the commutator of two such vector fields acting on a spinful test function, we find the Lie bracket (7.9) of the spin-Poisson algebra. This perspective also makes it evident that the Lie bracket must obey the Jacobi identity, without having to explicitly demonstrate it, since vector fields by definition obey it. The corresponding Lie group of canonical transformations augmented with spin, or _spin-canonical transformations_, will be labelled \(\mathcal{G}_{\text{spin}}\).
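Since the Jacobi identity for (7.9) is easy to get wrong by hand, here is a minimal symbolic check; it is not from the original text, it uses a single pair of phase-space variables \((x,p)\) (the multidimensional case works component by component), and it takes \(f^{ijk}=\epsilon^{ijk}\) as a concrete normalization of the \(\mathfrak{su}(2)\) structure constants.

```python
import sympy as sp
from sympy import LeviCivita

x, p = sp.symbols('x p')

def pb(f, g):
    """Canonical Poisson bracket {f, g} on the phase space (x, p)."""
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

def spin_bracket(F, G):
    """[F, G]_spin of Eq. (7.9) for (C + su(2))-valued functions F = [F0, F1, F2, F3]."""
    out = [pb(F[0], G[0])]
    for k in range(1, 4):
        term = pb(F[0], G[k]) + pb(F[k], G[0])
        term -= sum(LeviCivita(i, j, k) * F[i] * G[j]
                    for i in range(1, 4) for j in range(1, 4))
        out.append(sp.expand(term))
    return out

# three arbitrary polynomial test elements of the algebra
F = [x**2 * p, x + p, x * p, p**2]
G = [p**3 + x, x * p**2, x**2, x - p]
H = [x * p + p, p, x**3, x * p]

jacobi = [sp.simplify(a + b + c)
          for a, b, c in zip(spin_bracket(spin_bracket(F, G), H),
                             spin_bracket(spin_bracket(G, H), F),
                             spin_bracket(spin_bracket(H, F), G))]
print(jacobi)   # [0, 0, 0, 0]: the cyclic sum vanishes component by component
```

The vanishing of all four components reflects the semidirect structure in (7.10): Hamiltonian vector fields act as derivations of the pointwise \(\mathfrak{su}(2)\) bracket, so the combined bracket inherits the Jacobi identity.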
The dual space \(\mathfrak{g}^{*}_{\text{spin-Poisson}}\) then consists of \((0\oplus 1)\)-valued distributions corresponding to the expectations values of the fermion bilinears in a given state \(\sigma\), \[f^{a}(\mathbf{x},\mathbf{p})=\langle T^{a}(\mathbf{x},\mathbf{p})\rangle_{ \sigma}\,, \tag{7.13}\] with the ground state distribution for a spherical Fermi surface given by \[f^{0}_{\text{gs}}(\mathbf{p})=\Theta(p_{F}-|\mathbf{p}|)\,,\qquad f^{i}_{ \text{gs}}=0\,. \tag{7.14}\] The \(0\oplus 1\) decomposition of the fermion bilinears allows for a useful physical interpretation of the various components of the distribution, with \(f^{0}\) being the charge fluctuation and \(f^{i}\) being the spin fluctuation. At first glance, this formalism hence seems to be amenable to a description of spin-charge separation without the need for a parton construction, and perhaps might be able to answer questions about the energetic favourability of spin-charge separation states. We leave a study of this to future work. The spin-Poisson bracket (7.9) points out an important scaling relation between the charge and spin fluctuations. For every term in the commutator \([F,G]^{k}_{\text{spin}}\) to scale homogeneously, we need \[f^{i}\sim\nabla_{\mathbf{x}}\nabla_{\mathbf{p}}f^{0}\sim\frac{q}{p_{F}}f^{0}\,, \tag{7.15}\] so the spin fluctuations are suppressed compared to the total charge in the Poisson limit. The coadjoint orbit action for spinful Fermi surfaces can be computed in the usual way, by first finding the stabilizer \(\mathcal{H}_{\text{spin}}\), which consists of functions \(\alpha^{a}(\mathbf{x},\mathbf{p})\) such that \[[\alpha,f_{0}]_{\text{spin}}=0\,,\qquad\Longrightarrow\qquad(\mathbf{n}_{ \theta}\cdot\nabla_{\mathbf{x}}\alpha^{a})_{|\mathbf{p}|=p_{F}}=0\,. \tag{7.16}\] This allows us to parametrize the coadjoint orbit \(\mathcal{G}_{\text{spin}}/\mathcal{H}_{\text{spin}}\) by the degree of freedom \[\phi^{a}(\mathbf{x},\theta)\,, \tag{7.17}\] with a typical state given by \[f_{\phi}=Uf_{0}U^{-1}=f_{\text{gs}}-[\phi,f_{\text{gs}}]_{\text{spin}}+\frac{ 1}{2!}[\phi,[\phi,f_{\text{gs}}]_{\text{spin}}]_{\text{spin}}+\dots\quad, \qquad U=\exp(-\phi) \tag{7.18}\] We find that the fluctuations of a spinful Fermi surfaces are characterized by twice as many degrees of freedom compared to multidimensional bosonization, which is a fact that is well known in the conventional Fermi liquid approach (see, e.g., [80]). The Gaussian part of the EFT can be evaluated in the usual way to find an expression very similar to the spinless case, \[S=-\frac{p_{F}^{d-1}}{2}\int_{\mathfrak{tr}\theta}(\nabla_{n}\phi^{a})\left( \dot{\phi}^{a}+v_{F}\nabla_{n}\phi^{a}+\int_{\theta^{\prime}}F^{(2,0)}_{ab}( \theta,\theta^{\prime})(\nabla_{n}\phi^{b})^{\prime}\right)\,, \tag{7.19}\] where we allow interactions \(F^{(2,0)}_{ab}\) that can break the \(\mathfrak{su}(2)\) symmetry. 
The cubic WZW term, however, has an important difference in the last term: \[\begin{split} S^{(3)}_{\text{WZW}}=-\frac{p_{F}^{d-2}}{3!}\int_{ \mathfrak{tr}\theta}&\nabla_{n}\phi^{0}\left(\nabla_{s}\dot{ \phi}^{0}\partial_{\theta}\phi^{0}-\nabla_{s}\phi^{0}\partial_{\theta}\dot{ \phi}^{0}\right)\\ &+\nabla_{n}\phi^{i}\left(\nabla_{s}\dot{\phi}^{0}\partial_{ \theta}\phi^{i}-\nabla_{s}\phi^{i}\partial_{\theta}\dot{\phi}^{0}+\nabla_{s} \dot{\phi}^{i}\partial_{\theta}\phi^{0}-\nabla_{s}\phi^{0}\partial_{\theta} \dot{\phi}^{i}\right)\\ &-f^{ijk}(\nabla_{n}\phi^{i})\dot{\phi}^{j}\phi^{k}\,.\end{split} \tag{7.20}\] The expansion of the charge density \(\rho^{0}\) in terms of \(\phi^{0}\) remains identical to the spinless case, but the spin density picks up new types of terms: \[\rho^{i}=\frac{p_{F}^{d-1}}{(2\pi)^{d}}\int_{\theta}\nabla_{n}\phi^{i}+\frac{1}{ 2p_{F}}\nabla_{s}\left(\partial_{\theta}\phi^{0}\nabla_{n}\phi^{i}+\partial_{ \theta}\phi^{i}\nabla_{n}\phi^{0}\right)+f^{ijk}\phi^{j}\nabla_{n}\phi^{k}\,. \tag{7.21}\] The scaling scheme is determined by requiring the quadratic part of the action to be exactly marginal, implying that we need to scale \(\phi^{i}\sim\phi^{0}\). This causes the \(f^{ijk}\) terms in the WZW piece as well as the spin density to scale with an additional factor of \(q^{-1}\) compared to the others, making the spin density correlators scale differently compared to their charge density analogues (see section VI B of [1]). ### Charged fermion bilinears While we do not necessarily have access to the patch fermion operator in the postmodern formalism, we can consider charged bilinears of the form, \[T^{(2)}(\mathbf{x},\mathbf{y}) =-T^{(2)}(\mathbf{x},-\mathbf{y})\equiv i\psi^{\dagger}\left( \mathbf{x}+\frac{\mathbf{y}}{2}\right)\psi^{\dagger}\left(\mathbf{x}-\frac{ \mathbf{y}}{2}\right)\,, \tag{7.22}\] \[T^{(-2)}(\mathbf{x},\mathbf{y}) =-T^{(-2)}(\mathbf{x},-\mathbf{y})\equiv i\psi\left(\mathbf{x}+ \frac{\mathbf{y}}{2}\right)\psi\left(\mathbf{x}-\frac{\mathbf{y}}{2}\right)\,,\] \[T^{(0)}(\mathbf{x},\mathbf{y}) \equiv\frac{i}{2}\left[\psi^{\dagger}\left(\mathbf{x}+\frac{ \mathbf{y}}{2}\right)\psi\left(\mathbf{x}-\frac{\mathbf{y}}{2}\right)-\psi \left(\mathbf{x}-\frac{\mathbf{y}}{2}\right)\psi^{\dagger}\left(\mathbf{x}+ \frac{\mathbf{y}}{2}\right)\right]\,,\] where the number in the parenthesis in the superscript denotes their charge under the particle number conserving \(U(1)\) symmetry. We have defined these operators so that \[T^{(q)}(\mathbf{x},\mathbf{y})^{\dagger}=-T^{(-q)}(\mathbf{x},-\mathbf{y})\,. 
\tag{7.23}\] The various Fourier transforms of these are defined in the usual way, but we write down the momentum space versions here for later use: \[T^{(2)}(\mathbf{q},\mathbf{p})\equiv i\psi^{\dagger}\left(\frac{\mathbf{q}}{2}+\mathbf{p}\right)\psi^{\dagger}\left(\frac{\mathbf{q}}{2}-\mathbf{p}\right)=-T^{(2)}(\mathbf{q},-\mathbf{p})\,, \tag{7.24}\] \[T^{(-2)}(\mathbf{q},\mathbf{p})\equiv i\psi\left(\frac{\mathbf{q}}{2}+\mathbf{p}\right)\psi\left(\frac{\mathbf{q}}{2}-\mathbf{p}\right)=-T^{(-2)}(\mathbf{q},-\mathbf{p})\,,\] \[T^{(0)}(\mathbf{q},\mathbf{p})\equiv\frac{i}{2}\left[\psi^{\dagger}\left(\frac{\mathbf{q}}{2}+\mathbf{p}\right)\psi\left(\frac{\mathbf{q}}{2}-\mathbf{p}\right)-\psi\left(\frac{\mathbf{q}}{2}-\mathbf{p}\right)\psi^{\dagger}\left(\frac{\mathbf{q}}{2}+\mathbf{p}\right)\right]\,.\] It turns out that these also close under commutation, and we find the following Lie algebra: \[\begin{split}[T^{(0)}(\mathbf{q},\mathbf{y}),T^{(0)}(\mathbf{q}^{\prime},\mathbf{y}^{\prime})]&=2\sin\left(\frac{\mathbf{q}^{\prime}\cdot\mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime}}{2}\right)T^{(0)}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}+\mathbf{y}^{\prime})\,,\\ [T^{(0)}(\mathbf{q},\mathbf{y}),T^{(\pm 2)}(\mathbf{q}^{\prime},\mathbf{y}^{\prime})]&=ie^{\frac{i}{2}(\mathbf{q}^{\prime}\cdot\mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime})}T^{(\pm 2)}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}+\mathbf{y}^{\prime})\pm ie^{-\frac{i}{2}(\mathbf{q}^{\prime}\cdot\mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime})}T^{(\pm 2)}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}-\mathbf{y}^{\prime})\,,\\ [T^{(2)}(\mathbf{q},\mathbf{y}),T^{(-2)}(\mathbf{q}^{\prime},\mathbf{y}^{\prime})]&=2\sin\left(\frac{\mathbf{q}^{\prime}\cdot\mathbf{y}-\mathbf{q}\cdot\mathbf{y}^{\prime}}{2}\right)T^{(0)}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}+\mathbf{y}^{\prime})-2\sin\left(\frac{\mathbf{q}^{\prime}\cdot\mathbf{y}+\mathbf{q}\cdot\mathbf{y}^{\prime}}{2}\right)T^{(0)}(\mathbf{q}+\mathbf{q}^{\prime},\mathbf{y}-\mathbf{y}^{\prime})\,.\end{split} \tag{7.25}\] What remains is to find an appropriate semi-classical truncation of this algebra in order to apply the coadjoint orbit method and obtain an action that would involve three distributions \(f^{(q)}(\mathbf{x},\mathbf{p})\) as the degrees of freedom, corresponding to the usual occupation number distribution for \(q=0\), as well as charged distributions for \(q=\pm 2\) whose values encode the BCS gap function. Taking a semiclassical limit in this case is a bit more involved, since the right hand side of the commutators includes operators not only evaluated at \(\mathbf{y}+\mathbf{y}^{\prime}\), but also at \(\mathbf{y}-\mathbf{y}^{\prime}\). From an intuitive perspective, the Poisson limit for the charge-\(0\) sector (4.12) \[|\mathbf{q}|\ll|\mathbf{p}|\sim p_{F} \tag{7.26}\] still seems to give the most relevant configurations of both particle-hole pairs and Cooper pairs, since in this limit the particle and hole nearly coincide at the Fermi surface, while the particles in, say, \(T^{(2)}\) become nearly antipodal20 (see figure 3b). However, the appropriate truncation of the algebra (7.25) in this limit that also obeys the Jacobi identity needs to be found to ensure that this intuition works quantitatively. Footnote 20: Recall that in our convention \(\psi(\mathbf{k})\) creates a hole at \(-\mathbf{k}\), while \(\psi^{\dagger}(\mathbf{k})\) creates a particle at \(+\mathbf{k}\).
## VIII Conclusion and Outlook To summarize, we presented in this dissertation a formalism for the study of Fermi surface physics that is built out of a robust geometric structure underlying a large subalgebra of operators that governs low energy physics in the presence of a Fermi surface. This formalism provides an algorithm to obtain an effective field theory description for Fermi liquids given the internal symmetries of the microscopic fermions. The effective field theory has a rigid structure determined by the geometry of the Fermi surface and a collection of Wilson coefficient functions that parametrize fermion interactions. Unlike previous approaches, this postmodern formalism systematizes the expansion of low energy properties in a way that makes the scaling behaviour of these properties transparent. The amenability of this EFT to simple power counting arguments exemplifies its usefulness, on top of making diagrammatic calculations simpler compared to earlier approaches. Not only that, the geometric nature of this formalism allows us to identify emergent symmetries in a straightforward manner as well, entirely by analyzing the Ward identity for canonical transformations. The postmodern formalism opens up many different avenues of exploration, the primary one being a systematic study of non-Fermi liquids. The fact that power counting arguments work even for the non-Fermi liquid theory presented above in section VI is promising, but the theory needs to be studied more carefully to obtain quantitative results. The ability to include charged fermion bilinears in the algebra opens up the possibility of combining Fermi liquids and conventional superconductors into a parent effective field theory, which could serve as a useful theoretical platform to analyze the competition between non-Fermi liquid and superconducting instabilities of a Fermi liquid, as well as provide a path towards understanding the mechanisms underlying high temperature superconductivity, e.g., in cuprates. Other, slightly less ambitious, directions include a study of the non-perturbative properties of the postmodern formalism, for instance through an analysis of the topological properties of the coadjoint orbit \(\mathcal{O}_{f_{0}}\) which were largely ignored in the present construction since we were looking for a perturbative expansion around the ground state \(f_{0}\) in the coadjoint orbit. The nonlinear Ward identity might also serve as a powerful non-perturbative constraint, especially if it holds in more general systems beyond Fermi liquids. One rather curious aspect of the postmodern formalism is that despite the presence of UV/IR mixing, it seems possible to analyze the scaling behaviour of various physical quantities such as the specific heat, since our EFT seemingly comes with a preferred choice of UV cutoff. A more careful exploration of such UV divergences needs to be undertaken to see whether we can obtain quantitative results through a prescription for the cutoff or a resummation of the Moyal expansion. We hope that such an analysis will also shed light on the question of how to deal with UV/IR mixing in other effective theories. One lesson to take away from this formalism and its ubiquity across other phases of matter is that diffeomorphism groups have an untapped potential to constrain the emergent physics of many-body systems. We hope that this work will serve as a stepping stone towards exploiting this potential further. 
## Appendix A Coadjoint orbit method: mathematical details The coadjoint orbit method [54] is, in principle, a method used to quantize a Lie group, i.e., find irreducible (linear and/or projective) representations of the group. The notion of quantizing a classical dynamical theory is closely tied to finding irreducible representations of its symmetries, as is made evident by the example of a single spin. Classically, a single spin is just some vector of arbitrary length in 3 dimensions whose dynamics are governed by the rotation group \(SO(3)\). It is only when we quantize the classical dynamics that we find that the magnitude of the spin must be \(\sqrt{l(l+1)}\hbar\) where \(l\) is a half-integer or an integer. When \(l\) is an integer, we find linear representations of \(SO(3)\), while for \(l\) a half-integer, the representation is a projective representation of \(SO(3)\) which is equivalently a linear representation of \(SU(2)\). Of course this distinction between the two groups only occurs once we take into consideration the global topological structure of the Lie groups, since both have identical Lie algebras. One approach to the coadjoint orbit method is hence to set up a dynamical system that evolves under the action of the Lie group, and then quantize it [55; 56]. Quantizing the dynamical system will result in some consistency constraints which will label the irreducible representations of the Lie group. For the purpose of this draft, we are only interested in the first step: setting up a dynamical system that evolves under the action of canonical transformations. While established methods of quantizing this dynamical system describing semi-classical Fermi liquids should in principle apply, in practice they are rather difficult to implement due to the fact that the Lie group of canonical transformations is an infinite dimensional diffeomorphism group. Therefore, we resort to a more 'lowbrow' approach, quantizing the theory like one would any other quantum field theory. For most of this section, we will keep the discussion rather general, and provide intuition for the results we obtain using the example of a single spin (or equivalently a rigid body in the center of mass frame). Consider a Lie group \(\mathcal{G}\), whose typical element will be represented by the letter \(g\). The identity element of the Lie group will be represented as \(e\). Its Lie algebra \(\mathfrak{g}\) consists of left-invariant vector fields \(X\) on the Lie group, and the commutator of these vector fields (viewed as differential operators acting on test functions on the Lie group) determines the Lie bracket, which will be denoted by \([\cdot\,,\cdot]\). Alternately, one can think of the Lie algebra as the tangent space to the Lie group at the identity, with the Lie bracket prescribed externally. For any Lie group we can define an exponential map and its inverse, the logarithm, \[\exp:\mathfrak{g}\to\mathcal{G}_{e},\qquad\log:\mathcal{G}_{e}\to\mathfrak{g}\,, \tag{101}\] which map the Lie algebra to and from the largest possible simply connected patch \(\mathcal{G}_{e}\) of the Lie group that includes the identity. In general, the exponential map is not globally defined, i.e., it is not always possible to take the logarithm of a general Lie group element. \(SO(3)\) provides an example of this, since the logarithm of a \(\pi\)-rotation around any axis does not exist within the Lie algebra (unless the Lie algebra is complexified).
Therefore, if we insist upon parametrizing elements of a Lie group as exponents of the elements of its Lie algebra, like we do for the case of canonical transformations, we necessarily lose information about the topological structure of the Lie group. For \(SO(3)\), a Lie group element is a \(3\times 3\) orthogonal matrix \(O^{T}O=1_{3}=OO^{T}\). A Lie algebra element is an antisymmetric \(3\times 3\) matrix with real components \(M^{T}=-M\), and the exponent map is the literal exponent of the matrix. The group and algebra are both 3 dimensional and a general Lie algebra element can be written as a 3 dimensional vector \(\vec{\Omega}\) with real components. Given the usual generators \(L_{1},L_{2},L_{3}\) of \(\mathfrak{so}(3)\), the matrix that the vector \(\vec{\Omega}\) corresponds to is simply \[M_{\vec{\Omega}}=\sum_{i}\Omega^{i}L_{i}\,. \tag{100}\] If we are using \(SO(3)\) to describe the configuration space of a single spin or a rigid body, an element of the Lie algebra \(\vec{\Omega}\) can be interpreted as an angular velocity. The Lie bracket of two antisymmetric matrices is just the matrix commutator and takes the following form: \[[M_{\vec{\Omega}},M_{\vec{\Omega}^{\prime}}]=M_{\vec{\Omega}\times\vec{\Omega} ^{\prime}}\,, \tag{101}\] where \(\vec{\Omega}\times\vec{\Omega}^{\prime}\) is the cross product of the two angular velocities. Next, for the Lie algebra, we can define its dual space \(\mathfrak{g}^{*}\), i.e., the space of linear functions acting on the Lie algebra. A typical element of the dual space will be labelled by lowercase Greek letters: \[\begin{split}\eta:\mathfrak{g}\rightarrow\mathbb{R}\,,\\ \eta[X]\equiv\langle\eta,X\rangle\,.\end{split} \tag{102}\] The angular brackets are standard notation for the action of a dual space element on the Lie algebra element. The dual to any finite dimensional vector space is isomorphic to the vector space itself, but we will maintain the distinction between the two. For \(SO(3)\), the dual space also consists of 3 dimensional real vectors \(\vec{l}\), which act on Lie algebra elements \(\vec{\Omega}\) via the dot product: \[\vec{l}[\vec{\Omega}]\equiv\langle\vec{l}^{\prime},\vec{\Omega}\rangle\equiv \vec{l}\cdot\vec{\Omega}\,. \tag{103}\] Elements of the dual to \(\mathfrak{so}(3)\) are interpreted as angular momenta of the rigid body, or the orientation of the spin itself. These characterize the state of the spin or the rigid body. The adjoint action of the Lie algebra on itself is given by the Lie bracket: \[\mathrm{ad}_{X}Y=[X,Y]\,. \tag{104}\] This induces a coadjoint action of the Lie algebra on its dual space, determined uniquely by the requirement \[\langle\mathrm{ad}_{X}^{*}\eta,Y\rangle=\langle\eta,-\mathrm{ad}_{X}Y\rangle\,. \tag{104}\] We will avoid rigorous definitions of the Lie group adjoint and coadjoint actions \(\mathrm{Ad}_{g}\) and \(\mathrm{Ad}_{g}^{*}\) on \(\mathfrak{g}\) and \(\mathfrak{g}^{*}\) respectively since these definitions are somewhat involved, but it suffices to know that these action exist and generalize the definitions via the exponentials of \(\mathrm{ad}\) and \(\mathrm{ad}^{*}\) to group elements that cannot be written as exponentials of Lie algebra elements. If the Lie group and its Lie algebra consist of matrices, then the group adjoint and coadjoint actions are just matrix conjugation \(gXg^{-1}\) and \(g\eta g^{-1}\). 
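For a concrete matrix example, the following minimal numpy sketch (not part of the original text) takes \(SO(3)\) and checks that conjugating a Lie algebra element \(M_{\vec{\Omega}}\) by a rotation \(O\) is the same as rotating the vector \(\vec{\Omega}\) itself, which is exactly the statement spelled out next.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Map a 3-vector w to the antisymmetric matrix M_w = sum_i w_i L_i."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

rng = np.random.default_rng(0)
v, w = rng.normal(size=3), rng.normal(size=3)
O = expm(hat(v))                       # a rotation matrix: exp of an so(3) element

lhs = O @ hat(w) @ O.T                 # group adjoint action by matrix conjugation
rhs = hat(O @ w)                       # ... equals rotating the axis vector itself
print(np.allclose(lhs, rhs))           # True
print(np.allclose(O.T @ O, np.eye(3))) # sanity check: O is orthogonal
```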
Returning to \(SO(3)\), the Lie algebra adjoint and coadjoint actions are given by cross products of vectors, \[\mathrm{ad}_{\vec{\Omega}}\vec{\Omega}^{\prime}=\vec{\Omega}\times\vec{\Omega} ^{\prime}\,,\qquad\mathrm{ad}_{\vec{\Omega}}^{*}\vec{l}=\vec{\Omega}\times \vec{l}\,, \tag{105}\] while the group adjoint and coadjoint actions reduce to rotation of 3d vectors: \[\mathrm{Ad}_{O}\vec{\Omega}=O\cdot\vec{\Omega}\,,\qquad\mathrm{Ad}_{O}^{*} \vec{l}=O\cdot\vec{l}\,. \tag{106}\] Now, since \(\mathcal{G}\) is the configuration space of our system, the phase space is given by the cotangent bundle (which can be shown to be a trivial direct product for any Lie group), \[T^{*}\mathcal{G}\cong\mathcal{G}\times\mathfrak{g}^{*}\,. \tag{107}\] Roughly speaking, \(\mathcal{G}\) itself is also a symmetry of our dynamical system, so we can quotient it out to obtain a reduced phase space, \[T^{*}\mathcal{G}/\mathcal{G}\cong\mathfrak{g}^{*}\,, \tag{108}\] that is isomorphic to the dual space21. This is just to say that the configuration of a rigid body in the center of mass frame is effectively determined by its total angular momentum, for the purpose of time evolution. Footnote 21: More precisely, this is achieved by defining a momentum map \(\mu:T^{*}\mathcal{G}\to\mathfrak{g}^{*}\) such that the pre-image \(\mu^{-1}(0\in\mathfrak{g}^{*})\) of this map gives us the reduced phase space. With \(\mathfrak{g}^{*}\) as the reduced phase space for the dynamics of our system, we need only a Poisson structure and a choice of Hamiltonian to obtain Hamilton's equations of motion. The Poisson structure is given by the Lie-Poisson bracket of two functionals \(\mathscr{F}[\eta]\) and \(\mathscr{G}[\eta]\) of \(\mathfrak{g}^{*}\): \[\{\mathscr{F},\mathscr{G}\}_{\mathrm{LP}}[\eta]\equiv\langle\eta,[d_{\eta} \mathscr{F},d_{\eta}\mathscr{G}]\rangle. \tag{109}\] This definition can be understood as follows: the differentials \(d_{\eta}\mathscr{F}\) and \(d_{\eta}\mathscr{G}\) at the point \(\eta\in\mathfrak{g}^{*}\) live in the cotangent space to \(\mathfrak{g}^{*}\) at the point \(\eta\). Since \(\mathfrak{g}^{*}\) is a vector space, its cotangent spaces are isomorphic to its dual \(\mathfrak{g}^{**}\), which is just the Lie algebra \(\mathfrak{g}\). Since the differentials can be treated as Lie algebra elements, we can take their Lie bracket to obtain a new Lie algebra element. The pairing of \(\eta\) with this Lie algebra element defines the value of the functional \(\{\mathscr{F},\mathscr{G}\}_{\rm LP}\) at the point \(\eta\). This can be done for every point \(\eta\) to define the Lie-Poisson bracket. It is instructive to write this formula in terms of the structure constants \(f^{abc}\) of the Lie group: \[\{\mathscr{F},\mathscr{G}\}[\eta]=\eta_{c}f^{abc}\partial_{a}\mathscr{F} \partial_{b}\mathscr{G}=\Pi^{ab}(\eta)\partial_{a}\mathscr{F}\partial_{b} \mathscr{G}\,, \tag{113}\] where \(\partial_{a}\) are derivatives on \(\mathfrak{g}^{*}\) in the basis of generators, and \(\Pi^{ab}(\eta)=f^{abc}\eta_{c}\) is the Poisson-bivector. For \(SO(3)\) recall again that the Lie bracket is the cross product of vectors and the pairing is given by the dot product, so the the Lie-Poisson bracket of two functions of the angular momentum takes the form: \[\{\mathscr{F},\mathscr{G}\}_{\rm LP}[\vec{l}]\equiv\vec{l}\cdot\left(\frac{ \partial\mathscr{F}}{\partial\vec{l}}\times\frac{\partial\mathscr{G}}{ \partial\vec{l}}\right)\,. \tag{114}\] The choice of Hamiltonian is determined by the dynamical system under consideration. 
For a rigid body, the natural choice of Hamiltonian is the total rotational energy defined in terms of the inverse of the moment of inertia tensor, with an additional torque term, \[H[\vec{l}]\equiv\frac{1}{2}\left(\vec{l}\cdot I^{-1}\cdot\vec{l}\right)-\vec{ \tau}\cdot\vec{l}\,. \tag{115}\] The equation of motion is then given by \[\dot{\vec{l}}=\{\vec{l},H\}_{\rm LP}[\vec{l}]=-\left(I^{-1}\cdot\vec{l}\right) \times\vec{l}+\vec{\tau}\,. \tag{116}\] The moment of inertia tensor defines a map from \(\mathfrak{g}\) to \(\mathfrak{g}^{*}\) and vice versa, so that \(I^{-1}\cdot\vec{l}=\vec{\Omega}\), the angular velocity, and the equation of motion takes the more familiar form of Euler's equations for a rigid body: \[\dot{\vec{l}}\,+\ \vec{\Omega}\times\vec{l}=\vec{\tau}\,. \tag{117}\] For a single spin, the Hamiltonian would not have a quadratic term, but there could be an external magnetic field providing the torque, so the equation of motion is identical, except without the moment of inertia term. The final step is to turn this Hamiltonian into an action, which requires a symplectic form on the reduced phase space, obtained by inverting the Lie-Poisson bivector. However, \(\mathfrak{g}^{*}\) does not host a symplectic form, since the Lie-Poisson bivector \(\Pi^{ab}=f^{abc}\eta_{c}\) is not invertible, since \(\eta_{c}\) can be zero! \(SO(3)\) once again provides some intuition for this: the dual space for this Lie group is a 3 dimensional vector space. Symplectic forms can only exist on even dimensional manifolds. Therefore it is impossible to define one on the dual space. However, given that time evolution on \(\mathfrak{g}^{*}\) for any choice of Hamiltonian occurs through the action of a one-parameter family of group elements, the space of states in \(\mathfrak{g}^{*}\) that are reachable from one another is smaller than \(\mathfrak{g}^{*}\). Such a space is called a coadjoint orbit. It is defined as an equivalence class of states \(\eta\in\mathfrak{g}^{*}\) such that any two such states are related by the coadjoint action of some group element. We will avoid the proof here, but it is possible to show that the Lie-Poisson bivector does become invertible when restricted to functions of the coadjoint orbit. The symplectic form hence obtained on a given coadjoint orbit is known as the Kirillov-Kostant-Souriau (KKS) form, and is defined by its action on two vectors \(\rho,\sigma\) tangent to a point \(\nu\) in the coadjoint orbit in \(\mathfrak{g}^{*}\), which can be thought of as elements of \(\mathfrak{g}^{*}\), \[\omega_{\text{KKS}}(\rho,\sigma)|_{\eta}\equiv\langle\eta,[X,Y]\rangle\, \tag{101}\] where \(X\) and \(Y\) are Lie algebra elements such that \[\text{ad}^{*}_{X}\eta=\rho\,,\qquad\text{ad}^{*}_{Y}\eta=\sigma\,. \tag{102}\] \(X\) and \(Y\) are not uniquely determined by this condition, but it is possible to show that the expression on the right hand side is independent of this ambiguity. The action that reproduces the same equation of motion as the Hamiltonian \(H[\eta]\) is then given by \[S=\int_{0}^{1}ds\int dt\ \omega_{\text{KKS}}(\partial_{t}\eta,\partial_{s} \eta)-\int dt\ H[\eta]\,, \tag{103}\] where \(s\) is an extra dimension with \(s=1\) corresponding to physical time and boundary conditions \(\eta(s=0)=0\). Consider once again the case of \(SO(3)\) whose coadjoint action on \(\mathfrak{g}^{*}\) is simply the rotation of an angular momentum vector. 
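The torque-free case of the equation of motion above is easy to integrate numerically. The following sketch is not part of the original text and uses an arbitrarily chosen moment of inertia; it checks that both \(|\vec{l}\,|\) and the energy are conserved along the Lie-Poisson flow.

```python
import numpy as np

I_inv = np.diag([1.0, 1.0 / 2.0, 1.0 / 3.0])   # inverse moment of inertia, body frame (arbitrary choice)

def ldot(l):
    """Torque-free Euler equation dl/dt = -(I^{-1} l) x l."""
    return -np.cross(I_inv @ l, l)

def rk4_step(l, dt):
    k1 = ldot(l)
    k2 = ldot(l + 0.5 * dt * k1)
    k3 = ldot(l + 0.5 * dt * k2)
    k4 = ldot(l + dt * k3)
    return l + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

energy = lambda l: 0.5 * l @ I_inv @ l
l = np.array([1.0, 0.2, -0.5])
l0_norm, e0 = np.linalg.norm(l), energy(l)

for _ in range(20000):
    l = rk4_step(l, 1e-3)

print(abs(np.linalg.norm(l) - l0_norm))   # ~0: the motion never leaves the sphere |l| = const
print(abs(energy(l) - e0))                # ~0: the rotational energy is conserved as well
```

The conservation of \(|\vec{l}\,|\) is precisely the statement that time evolution stays on a single sphere in \(\mathfrak{g}^{*}\).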
Evidently, coadjoint orbits are spheres of fixed radius \(|\vec{l}|\), so that the Poisson bivector, \[\Pi^{ij}(\vec{l})=\epsilon^{ijk}l_{k}\,, \tag{104}\] becomes invertible on such a sphere, with the inverse given by \[(\omega_{\text{KKS}})_{ij}=\frac{l^{k}}{l^{2}}\epsilon_{ijk}\,. \tag{105}\] This is just the rescaled area form on the sphere, which is closed but not exact. While for this case we were able to find an explicit expression for \(\omega_{\text{KKS}}\), this will not necessarily happen in general, and we have to resort to the definition (101). The action for a rigid body or a spin is then given by \[S=\frac{1}{l^{2}}\int_{0}^{1}ds\int dt\ \vec{l}\cdot\left(\partial_{t}\vec{l }\times\partial_{s}\vec{l}\right)-\int dt\ H[\vec{l}]\,. \tag{106}\] The term obtained by integrating the Kirillov is the familiar WZW term for a spin or a rigid body, and making \(\vec{l}\) a local function of space turns it into the Berry phase term for the effective field theory of a ferromagnet. It is worth pointing out that since the KKS form is not exact, the extra dimension cannot be integrated over unless we work in a perturbative expansion around some fixed ground state angular momentum \(\vec{l}_{0}\) (every closed form is locally exact). However, had we parametrized the coadjoint orbit as the action of exponentiated infinitesimal rotations acting on \(\vec{l}_{0}\) to begin with, we would have found the KKS form to be exact and the WZW term to be a total \(s\)-derivative. This is what happens in the case of Fermi liquids, and we leave an exploration of the topological structure of the coadjoint orbit to future work. ## Appendix B Luttinger liquids from the coadjoint orbit method In this section we show that the coadjoint orbit formalism reproduces the bosonized theory of Luttinger liquids. In particular, the mixed anomaly between the emergent chiral \(U(1)\) symmetries at the Fermi points can be understood as a linearization of the Ward identity for canonical transformations. Luttinger liquids have been extensively studied in the literature, see in particular Refs. [45; 46; 47; 48; 49] for constructions using coadjoint orbits. We begin with a review of the construction of the bosonized action for Luttinger liquids from the algebra of densities. Fermi'surfaces' in 1+1 dimensions are a collection of discrete points in momentum space. Assuming that the dispersion relation \(\epsilon(p)\) is an even function that monotonically increases with positive momentum, the Fermi surface consists of exactly two points at momentum values \(p=\pm p_{F}\). Each Fermi point hosts a chiral mode whose chirality is given by \(\text{sgn}[\partial_{p}\epsilon]\). Denoting the chiral modes at the points \(+p_{F}\) and \(-p_{F}\) by the subscripts \(R\) and \(L\) (for 'right' and 'left') respectively, the particle number densities obey the following equal time commutation relations \[\begin{split}[\rho_{R}(x),\rho_{R}(x^{\prime})]&=- \frac{i}{2\pi}\partial_{x}\delta(x-x^{\prime})\,,\\ [\rho_{L}(x),\rho_{L}(x^{\prime})]&=\frac{i}{2\pi} \partial_{x}\delta(x-x^{\prime})\,,\\ [\rho_{R}(x),\rho_{L}(x^{\prime})]&=0\,.\end{split} \tag{104}\] The so-called Schwinger terms on the right-hand side of the first two lines are indicative of the chiral anomalies carried by each chiral fermion. \(\rho_{R,L}\) are the charge densities corresponding to two copies of \(U(1)\) symmetry, which we will refer to as \(U(1)_{R}\) and \(U(1)_{L}\). 
The chiral algebra can be realized in terms of bosonic fields \(\phi_{R,L}\) by defining the densities as \[\rho_{R}=\frac{1}{2\pi}\partial_{x}\phi_{R}\,,\qquad\rho_{L}=-\frac{1}{2\pi} \partial_{x}\phi_{L}\,. \tag{105}\] The commutators of the densities with the bosonic fields are then \[\begin{split}[\phi_{R}(x),\rho_{R}(x^{\prime})]&=-i \delta(x-x^{\prime})\,,\\ [\phi_{L}(x),\rho_{L}(x^{\prime})]&=-i\delta(x-x^{ \prime})\,,\end{split} \tag{106}\] which tells us that the \(U(1)_{R,L}\) symmetries are non-linearly realized on the bosonic fields as \[\phi_{R}\to\phi_{R}-\lambda_{R}\,,\qquad\phi_{L}\to\phi_{L}-\lambda_{L}\,. \tag{107}\] An action that produces the algebra (106) is \[\begin{split} S&=\frac{1}{2}\int dtdx\,\dot{\phi }_{R}\rho_{R}+\dot{\phi}_{L}\rho_{L}\\ &=-\frac{1}{4\pi}\int dtdx\,\partial_{x}\phi_{R}\dot{\phi}_{R}- \partial_{x}\phi_{L}\dot{\phi}_{L}\,.\end{split} \tag{108}\] The factor of \(\frac{1}{2}\) in the first line comes from the fact this is a constrained system: using the appropriate Dirac brackets one recovers the commutation relation (107) as desired. This action corresponds to the WZW term in the coadjoint orbit construction. The integral over the Fermi surface angle \(\theta\) becomes a sum over two points \(\theta=0\), \(\pi\), so that one finds \[\begin{split} S_{\text{WZW}}&=-\frac{1}{4\pi}\sum_{ \sigma=\pm}\sigma\int dtdx\,\partial_{x}\phi_{\sigma}\dot{\phi}_{\sigma}\\ &=-\frac{1}{4\pi}\int dtdx\,\partial_{x}\phi_{R}\dot{\phi}_{R}- \partial_{x}\phi_{L}\dot{\phi}_{L}\,,\end{split} \tag{108}\] in agreement with (109). Nonlinearities in the WZW term, present for any \(d>1\), entirely vanish in \(d=1\). These nonlinearities are associated with the curvature of the Fermi surface, which explains why they are absent in one dimension. For the same reason, the relation between \(\rho\) and \(\phi\) (107) does not receive nonlinear corrections. In \(d=1\), all nonlinearities in the bosonized description of a Luttinger liquid come from the Hamiltonian, in particular from nonlinearities in the dispersion relation. The Hamiltonian part of the action also produces a term in the quadratic action, \[\begin{split} S^{(2)}&=-\frac{1}{4\pi}\sum_{\sigma =\pm}\int dtdx\,\partial_{x}\phi_{\sigma}\left(\sigma\dot{\phi}+v_{F}\partial _{x}\phi\right)\\ &=-\frac{1}{4\pi}\int\partial_{x}\phi_{R}\left(\partial_{0}\phi_ {R}+v_{F}\partial_{x}\phi_{R}\right)-\partial_{x}\phi_{L}\left(\partial_{0} \phi_{L}-v_{F}\partial_{x}\phi_{L}\right)\,,\end{split} \tag{109}\] which is the well-known Gaussian action for a Luttinger liquid. ### Chiral anomaly as a linear approximation When coupled to background gauge fields, both chiral symmetries are anomalous with opposite anomalies. If \(A_{\mu}^{R}\) and \(A_{\mu}^{L}\) are the background fields for the two global symmetries, the anomalous conservation laws are \[\begin{split}\partial_{\mu}j_{R}^{\mu}&=-\frac{1}{4 \pi}\epsilon^{\mu\nu}F_{\mu\nu}^{R}\,,\\ \partial_{\mu}j_{L}^{\mu}&=\frac{1}{4\pi}\epsilon^{ \mu\nu}F_{\mu\nu}^{L}\,.\end{split} \tag{110}\] In the coadjoint orbit formalism, the chiral anomalies appear as a linearized approximation to the invariance of the maximally gauged action (106) under all canonical transformations. To see this, we begin with the Ward identity for free fermions, that have \(\mathcal{J}_{p^{j}}=0\) \[\partial_{\mu}\mathcal{J}^{\mu}+\{\mathcal{J}^{\mu},A_{\mu}\}=0\,. 
\tag{111}\] Turning off \(A_{x}\) for simplicity, the conservation law takes the form \[\partial_{0}\mathcal{J}^{0}+\partial_{x}\mathcal{J}^{x}+\partial_{x}\mathcal{ J}^{0}\partial_{p}A_{0}=\partial_{p}\mathcal{J}^{0}\partial_{x}A_{0}\,. \tag{112}\] Recall that \(\mathcal{J}^{0}\) is simply the phase space distribution \(f\). Hence, it has a nonzero expectation value in the ground state \[\langle\mathcal{J}^{0}\rangle=f_{0}\,.\] (B11) If we now linearize the equation around the two Fermi points by writing \[\mathcal{J}^{0}=f_{0}+\delta\mathcal{J}^{0},\qquad\mathcal{J}^{x}=\delta \mathcal{J}^{x}\,,\] (B12) and treat \(A_{0}(t,x,p)\) to be of the same order as \(\delta\mathcal{J}^{\mu}\), we find that the equation takes the form \[\partial_{0}\delta\mathcal{J}^{0}+\partial_{x}\delta\mathcal{J}^{x}=(\partial _{x}A_{0}^{L})\delta(p+p_{F})-(\partial_{x}A_{0}^{R})\delta(p-p_{F})\,.\] (B13) Integrating over either \(p>0\) or \(p<0\) and using the expressions for the chiral density and current \[\begin{split}\rho_{R}&=\int_{0}^{\infty}\frac{dp}{ 2\pi}\ \delta\mathcal{J}^{0},\qquad j_{R}=\int_{0}^{\infty}\frac{dp}{2\pi}\ \delta\mathcal{J}^{x}\,,\\ \rho_{L}&=\int_{-\infty}^{0}\frac{dp}{2\pi}\ \delta \mathcal{J}^{0},\qquad j_{L}=\int_{-\infty}^{0}\frac{dp}{2\pi}\ \delta\mathcal{J}^{x}\,,\end{split}\] (B14) we find that the Ward identity takes the form of the anomalous conservation laws for the chiral anomalies \[\begin{split}\partial_{t}\rho_{R}+\partial_{x}j_{R}& =-\frac{1}{2\pi}\partial_{x}A_{0}^{R}\,,\\ \partial_{t}\rho_{L}+\partial_{x}j_{L}&=\frac{1}{2 \pi}\partial_{x}A_{0}^{L}\,.\end{split}\] (B15) The chiral anomaly is therefore a linear approximation to the non-abelian Ward identity, or a covariant conservation law, around a state with nonzero charge density \(\langle\mathcal{J}^{0}\rangle\neq 0\).
2305.17024
Contouring by Unit Vector Field Regression
This work introduces a simple deep-learning based method to delineate contours by `walking' along learnt unit vector fields. We demonstrate the effectiveness of our pipeline on the unique case of open contours on the task of delineating the sacroiliac joints (SIJs) in spinal MRIs. We show that: (i) 95% of the time the average root mean square error of the predicted contour against the original ground truth is below 4.5 pixels (2.5mm for a standard T1-weighted SIJ MRI), and (ii) the proposed method is better than the baseline of regressing vertices or landmarks of contours.
Amir Jamaludin, Sarim Ather, Timor Kadir, Rhydian Windsor
2023-05-26T15:32:22Z
http://arxiv.org/abs/2305.17024v1
# Contouring by Unit Vector Field Regression ###### Abstract This work introduces a simple deep-learning based method to delineate contours by 'walking' along learnt unit vector fields. We demonstrate the effectiveness of our pipeline on the unique case of open contours on the task of delineating the sacroiliac joints (SIJs) in spinal MRIs. We show that: (i) \(95\%\) of the time the average root mean square error of the predicted contour against the original ground truth is below 4.5 pixels (2.5mm for a standard T1-weighted SIJ MRI), and (ii) the proposed method is better than the baseline of regressing vertices or landmarks of contours. Amir Jamaludin\({}^{\star}\) Sarim Ather\({}^{\dagger}\) Timor Kadir\({}^{\ddagger}\) Rhydian Windsor\({}^{\star}\)\({}^{\star}\) Visual Geometry Group, Department of Engineering Science, University of Oxford \({}^{\dagger}\) Oxford University Hospitals NHS Foundation Trust \({}^{\ddagger}\) Plexalis Ltd CNN, MRI, Spine, SIJ, Sacroiliac Joint, Vector Field ## 1 Introduction Contouring objects is a very important step in various medical image analysis tasks. Currently, one common approach is to predict a segmentation map of the object and then extract the map's edges. However, this approach has limitations. Firstly, the output segmentations are not necessarily a single interconnected volume and thus additional post-processing is required before finding edges, which can introduce errors (e.g. by removing additional volumes). Secondly, this method does not allow for detecting open contours. An alternative approach is to treat pixels along the open contour as segmentation targets. However, this approach often leads to small, challenging segmentation targets. Furthermore, these approaches do not guarantee a unique solution or easily allow for sub-pixel precision contours in both the open and closed settings. Therefore, in this paper, we propose a new method to delineate contours, avoiding these limitations. This is done by 'walking' along a learnt vector field. Along the contour, the field should point parallel to the contour, whereas outside the contour the field should point to the nearest contour point. To demonstrate the effectiveness of this method, we apply it to a novel task: delineating the sacroiliac joint (SIJ) boundary in clinical MRI scans. **Sacroiliac Joint Delineation.** The SIJ is the joint between the sacrum of the spine and the ilium bones of the pelvis. There are two SIJs per person, one on the left and one on the right. MR imaging is typically done to look at the inflammation of the SIJ, or sacroiliitis, which is one of the causes of low back pain and part of the diagnosis for ankylosing spondylitis (AS). In AS, the severity of SIJ inflammation is used to assess disease progression. AS grading systems often refer to specific regions surrounding the SIJ [1], which makes SIJ detection a must. Since the SIJ is defined as the space between two bones, we follow the approach suggested by [2] and delineate each SIJ as an individual open contour, which is beneficial for the further downstream task of grading the SIJ. **Related Work.** There have been multiple works on detecting or segmenting parts of the spine in spinal medical imaging across several imaging modalities, e.g. intervertebral discs [3] and vertebral bodies in MRI [4] and CT scans [5] as well as the whole spine in DXA scans [6, 7].
However, there has been relatively little research on detecting the SIJ and related downstream tasks, for example, inflammation prediction or quantifying structural changes. The closest work to date on SIJ delineation is [8]. However, this method focuses on the classification of sacroiliitis and requires manual annotation to locate the SIJ region. Another closely related work is [9], where the authors propose a method to detect changes in the SIJ. However, this is done without explicitly focusing on the SIJ region, instead taking the whole slice of an SIJ MRI as input. We propose that by delineating the SIJ, models can focus on the exact region of the disease without additional noise from surrounding anatomical structures. Our contouring method has analogies to several works on shape representation using deep learning via implicit functions (e.g. [10, 11, 12]). In this case, rather than representing shapes as a binary mask over a regular grid of voxels, a model learns \(f:\mathbb{R}^{3}\rightarrow\mathbb{R}\), such that \(f(x,y,z)\) estimates the closest distance from point \((x,y,z)\) to the object of interest's surface (_signed distance_ functions), or whether \((x,y,z)\) is occupied by the shape (_occupancy_ functions). These methods allow for sub-pixel/voxel precision representations of surfaces. Though we validated our approach on SIJ MRIs, it is worth noting that open contours are widely used in other medical imaging tasks, e.g. torso contour segmentation for better ECG interpretation [13], and reconstructing 3D meshes of the heart from 2D cardiac MRIs [14]. ## 2 Approach Overview Our method takes as input 2D images and outputs an array of vertices delineating the contour of interest. This is done by a two-stage approach: (i) Firstly, a model predicts _a unit vector field_ (UVF) for the image. At location \(\mathbf{x}\), the UVF indicates the direction towards the nearest point on the contour of interest. (ii) Secondly, we propose a method to extract open contours from this learned vector field. Our overall approach for the task of SIJ delineation can be seen in Figure 1. Figure 1: Overview of the contouring pipeline on an example SIJ MRI. The model outputs two vector fields, one for both the left (red) and right (green) SIJs. Each vector field is shown as a gradient map of the angle (in degrees) of the vector at that point. These vector fields are then used to extract contours for both the SIJs, shown in the bottom left panel. ### 2.1 Unit Vector Fields The idea of contours and vector fields in combination is not a new one; for example, several early works in computer vision combined Snakes [15] with gradient vector flow [16], i.e. a vector field pointing towards object edges in a given image. However, instead of defining the vector field using object edges, we learn the unit vector field, \(\hat{\mathbf{v}}_{i,j}\), where at each location in the vector field, \((i,j)\), the field 'points' to the nearest vertex, i.e. annotated ground truth landmark, on the contour of the object. The unit vector field is made of two separate x and y components corresponding to the directions of the vectors in the field. To preserve the directionality of the contour, we impose a rule where vectors lying on top of the contour should 'point' to where the next vertex is expected. An example unit vector field can be seen in Figure 2. Figure 2: The Unit Vector Field (UVF): (a) a slice of an SIJ MRI with annotated landmarks in red delineating the left SIJ (with respect to the patient), (b) the resulting target UVF, overlaid on top of a gradient map of the field's direction in degrees.
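The target field described above can be rasterized from the annotated landmarks in a few lines. The sketch below is a minimal illustration, not the authors' implementation: the function name, the pixel-level definition of "on the contour" (distance below one pixel), and the handling of the final landmark are assumptions made for clarity.

```python
import numpy as np

def target_uvf(landmarks, height, width):
    """landmarks: (N, 2) array of (y, x) vertices ordered along the open contour."""
    ys, xs = np.mgrid[0:height, 0:width]
    pix = np.stack([ys, xs], axis=-1).astype(float)                    # (H, W, 2)

    # squared distance from every pixel to every landmark, and the nearest landmark index
    d2 = ((pix[:, :, None, :] - landmarks[None, None]) ** 2).sum(-1)   # (H, W, N)
    nearest = d2.argmin(-1)

    # default target: direction from the pixel towards its nearest landmark
    vec = landmarks[nearest] - pix

    # pixels sitting (almost) on a landmark instead point towards the next landmark,
    # so that walking along the field follows the contour's direction
    # (the final landmark keeps a zero vector in this toy version)
    on_contour = d2.min(-1) < 1.0
    next_idx = np.minimum(nearest + 1, len(landmarks) - 1)
    vec[on_contour] = (landmarks[next_idx] - landmarks[nearest])[on_contour]

    norm = np.linalg.norm(vec, axis=-1, keepdims=True)
    return np.divide(vec, norm, out=np.zeros_like(vec), where=norm > 1e-6)   # (H, W, 2) unit vectors

uvf = target_uvf(np.array([[40.0, 30.0], [60.0, 50.0], [90.0, 55.0]]), 224, 224)
```

A field built this way is what the network is trained to regress as its two (x and y) output channels.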
### 2.2 Extracting Contours From Unit Vector Fields The unit vector field alone does not obviously indicate where a contour starts and ends. We solve this by also predicting the start and end points with the same network that generates the unit vector field; this is done simultaneously as a separate output. We take inspiration from previous works [4, 17, 18] and regress two distinct Gaussian heatmaps for the start and end points respectively. Each Gaussian has a maximum value of 1 and a variance proportional to the area of the task-relevant object. In our case, we use the sacrum, i.e. the area which lies in between the two SIJs. In the case where the contour is without a defined area of interest, we suggest scaling the Gaussian heatmap proportional to the length of the overall contour. The beginning of the contour is defined from the Gaussian heatmap designated as the start point. We then iteratively 'walk' following the direction in the UVF, \(\hat{\mathbf{v}}_{i,j}\), and the contour ends when approaching the second Gaussian heatmap, i.e. the end point. Each step is 1 unit in magnitude, although this could be adjusted to generate contours of varying fidelity. Figure 3 gives an example of how a contour is defined with the Gaussian heatmaps and the UVF. Figure 3: Following on from the example shown in Figure 2, alongside the UVF, we regress two 2D Gaussian heatmaps. (a) 2 Gaussians representing the start and end points of the contour, (b) the UVF overlaid on top of the Gaussians, (c) the contour which starts from the Gaussian now marked in Green and ends on the Gaussian marked in Blue, (d) the final contour for the left SIJ marked in Red. Since the UVF can be visualized, errors can be more easily interpreted. Though not shown in this work, a closed contour solution would not require heatmaps and could be found by simply searching for a loop in the UVF.
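The walking step itself is straightforward to express in code. The sketch below is not the authors' code; the stopping radius, step budget, function names, and the (y, x) channel ordering are illustrative assumptions.

```python
import numpy as np

def bilinear(field, y, x):
    """Sample a (H, W, C) field at a sub-pixel location (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, field.shape[0] - 1), min(x0 + 1, field.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * field[y0, x0] + (1 - wy) * wx * field[y0, x1]
            + wy * (1 - wx) * field[y1, x0] + wy * wx * field[y1, x1])

def walk_contour(uvf, start_heatmap, end_heatmap, step=1.0, stop_radius=2.0, max_steps=500):
    """Start at the 'start' heatmap peak, take unit steps along the UVF, stop near the 'end' peak."""
    pos = np.array(np.unravel_index(start_heatmap.argmax(), start_heatmap.shape), float)
    end = np.array(np.unravel_index(end_heatmap.argmax(), end_heatmap.shape), float)
    contour = [pos.copy()]
    for _ in range(max_steps):
        if np.linalg.norm(pos - end) < stop_radius:
            break
        direction = bilinear(uvf, *pos)
        norm = np.linalg.norm(direction)
        if norm < 1e-6:
            break
        pos = pos + step * direction / norm
        pos = np.clip(pos, 0, np.array(uvf.shape[:2]) - 1)
        contour.append(pos.copy())
    return np.array(contour)            # (M, 2) array of sub-pixel (y, x) vertices
```

Applied to the network's predicted UVF and its two heatmaps, this returns the ordered open contour at sub-pixel precision; a smaller step size simply yields a denser polyline.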
## 3 Dataset & Training Details **Dataset.** The Oxford Sacroiliac Joint (**OSIJ**) dataset is a collection of SIJ MRIs from 339 patients that have undergone scanning in the Oxford University Hospitals NHS Trust. For experiments conducted in this work, the dataset is split into training (80%), validation (10%) and testing (10%) sets on a per-subject basis (271:34:34). Each subject possesses an average of two sequences (typically T1, T2, STIR, and FS), resulting in a total of 793 scans. Each scan consists of roughly 20 2D slices, resulting in a total of 16,978 images. For the annotations of the contour of the SIJs, an expert was tasked with marking the landmarks (vertices) that best define both the left and right SIJs in every slice of a given scan. The number of landmarks varies depending on the view of the SIJ; typically, mid-coronal SIJs cover a larger image area, demanding a larger number of landmarks, and vice versa. The number of landmarks per slice ranges from 2 to 21. **Training Details.** The experiments in this work were conducted using a simple U-Net architecture [19]. For each contour, the network predicts 2 Gaussian heatmaps and 2 components (x and y direction) of the unit vector field; separate contours were predicted for each of the two SIJs (left and right). The SIJs are not guaranteed to be inside the field-of-view of the scans and, as such, these cases were kept in the training set to suppress false positives. The scans were typically square in shape; thus, they were bi-cubically re-sampled to \(224\times 224\) pixels. Slices that were not square were padded with zeros prior to re-sampling so as to not change the aspect ratio. The network is trained using an Adam optimiser [20] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and a learning rate of \(10^{-3}\) until convergence. Several augmentations were applied during training, namely: (a) translation \(\pm 20\%\), (b) scale \(\pm 20\%\), (c) rotation \(\pm 15^{\circ}\), (d) left/right flips, (e) additive Gaussian noise, and (f) Gaussian blur. A combination of L2-loss, for the UVF, and weighted L2-loss (see [4]), for the Gaussian heatmaps, is used to train the network. ## 4 Performance Evaluation & Results We compare against a baseline network trained to predict 21 Gaussian heatmaps for each SIJ, 21 being the maximum number of landmarks in the dataset. We find this to be the simplest naive solution for predicting landmarks with a U-Net architecture similar to that of our proposed UVF approach. Samples with a lower number of annotated points were up-sampled via linear interpolation. At test time, each prediction is compared against the ground truth landmarks of the contour and the root mean square (RMS) error is calculated from the closest points between the prediction and ground truth. Figure 4: Example scans in the dataset with their marked-up annotated landmarks. (a), (b), and (c) are slices from the same T1-weighted scan at differing slice positions (anterior, mid-coronal, posterior) while (d), (e), and (f) are mid-coronal examples of different sequences in the dataset. Figure 5: Cumulative test set error distribution (measured in pixels). Baseline is in blue and contouring via UVF is in red. Results for both networks are shown in Figure 5 and Table 1. Contouring by UVF overall works slightly better than the baseline, with a difference in RMS pixel error ranging from 0.14 to 0.35 for up to \(95\%\) of the data in the test set. This difference may appear small, but Figure 6 highlights that there is lower aliasing when looking at the contours obtained via UVF compared to just predicting landmarks via heatmaps. In general, \(95\%\) of the test set has an error lower than 4.5 pixels, which for our purposes is adequate for further downstream tasks, e.g. defining an ROI for SIJ oedema classification. Figure 7 shows results on several examples both from **OSIJ**'s test set and on images extracted from Radiopaedia. ## 5 Conclusion In this paper, we presented a pipeline to contour objects in images, focusing mainly on open contours but applicable to closed contours as well, and demonstrated its use to delineate SIJs in coronal spinal MRIs. Overall, the performance is better than the naive baseline of predicting landmarks of contours, and the approach is applicable to other contouring problems in medical image analysis. \begin{table} \begin{tabular}{l|c c c c c c} Data Proportion & 0.1 & 0.3 & 0.5 & 0.7 & 0.9 & 0.95 \\ \hline Baseline Error & 0.52 & 1.00 & 1.41 & 2.00 & 3.40 & 4.45 \\ UVF Error & 0.38 & 0.72 & 1.15 & 1.76 & 3.10 & 4.10 \\ \end{tabular} \end{table} Table 1: RMS pixel error per proportion of data in the test set. Figure 6: Qualitative comparison of the baseline against the proposed method on a test set example. Green contours highlight the right SIJ and red contours highlight the left; GT in yellow. (b) and (d) are from the baseline model while (c) and (e) are contours using UVF. Baseline predictions are sparse, with 21 landmarks for each contour, resulting in more aliasing. Figure 7: Example contours via UVF.
(a), (c), and (e) are from the **OSIJ** test set with ground truth annotations in yellow; (b), (d), and (f) are real-world unseen samples taken from Radiopaedia (73884, 75292, 154033). ## 6 Compliance with Ethical Standards The scans in the dataset were sourced from retrospective scans at Oxford University Hospitals, approved by the Health Research Authority (IRAS Project ID 207858). ## 7 Acknowledgments The authors would like to thank Aimee Readie and Gregory Ligozio for their useful discussion and feedback during the research of this paper. Rhydian Windsor is supported by CRUK as part of the EPSRC AIMS CDT (EP/L015897/1).
2309.00855
DoRA: Domain-Based Self-Supervised Learning Framework for Low-Resource Real Estate Appraisal
The marketplace system connecting demands and supplies has been explored to develop unbiased decision-making in valuing properties. Real estate appraisal serves as one of the high-cost property valuation tasks for financial institutions since it requires domain experts to appraise the estimation based on the corresponding knowledge and the judgment of the market. Existing automated valuation models reducing the subjectivity of domain experts require a large number of transactions for effective evaluation, which is predominantly limited to not only the labeling efforts of transactions but also the generalizability of new developing and rural areas. To learn representations from unlabeled real estate sets, existing self-supervised learning (SSL) for tabular data neglects various important features, and fails to incorporate domain knowledge. In this paper, we propose DoRA, a Domain-based self-supervised learning framework for low-resource Real estate Appraisal. DoRA is pre-trained with an intra-sample geographic prediction as the pretext task based on the metadata of the real estate for equipping the real estate representations with prior domain knowledge. Furthermore, inter-sample contrastive learning is employed to generalize the representations to be robust for limited transactions of downstream tasks. Our benchmark results on three property types of real-world transactions show that DoRA significantly outperforms the SSL baselines for tabular data, the graph-based methods, and the supervised approaches in the few-shot scenarios by at least 7.6% for MAPE, 11.59% for MAE, and 3.34% for HR10%. We expect DoRA to be useful to other financial practitioners with similar marketplace applications who need general models for properties that are newly built and have limited records. The source code is available at https://github.com/wwweiwei/DoRA.
Wei-Wei Du, Wei-Yao Wang, Wen-Chih Peng
2023-09-02T08:01:32Z
http://arxiv.org/abs/2309.00855v3
# DoRA: Domain-Based Self-Supervised Learning Framework for Low-Resource Real Estate Appraisal ###### Abstract. The marketplace system connecting demands and supplies has been explored to develop unbiased decision-making in valuing properties. Real estate appraisal serves as one of the high-cost property valuation tasks for financial institutions since it requires domain experts to appraise the estimation based on the corresponding knowledge and the judgment of the market. Existing automated valuation models reducing the subjectivity of domain experts require a large number of transactions for effective evaluation, which is predominantly limited to not only the labeling efforts of transactions but also the generalizability of new developing and rural areas. To learn representations from unlabeled real estate sets, existing self-supervised learning (SSL) for tabular data neglects various important features, and fails to incorporate domain knowledge. In this paper, we propose DoRA, a **D**omain-based self-supervised learning framework for low-resource **R**eal estate **A**praisal. DoRA is pre-trained with an intra-sample geographic prediction as the pretext task based on the metadata of the real estate for equipping the real estate representations with prior domain knowledge. Furthermore, inter-sample contrastive learning is employed to generalize the representations to be robust for limited transactions of downstream tasks. Our benchmark results on three property types of real-world transactions show that DoRA significantly outperforms the SSL baselines for tabular data, the graph-based methods, and the supervised approaches in the few-shot scenarios by at least 7.6% for MAPE, 11.59% for MAE, and 3.34% for HR10%. We expect DoRA to be useful to other financial practitioners with similar marketplace applications who need general models for properties that are newly built and have limited records. The source code is available at [https://github.com/wwwreview/DoRA](https://github.com/wwwreview/DoRA). 2023 ## 1. Introduction The exploration of property valuations has broad applicability across various domains. Whether it involves developing strategies for mortgage lending, house rental, or security price revaluation, these scenarios can be effectively framed as property valuation systems characterized by intricate and high-cost domain knowledge. In real estate appraisal, appraisers spend several hours estimating an individual property based on their knowledge (Han et al., 2017), which introduces subjective biases of human estimations due to different understandings of the market (Han et al., 2017). Recently, automated valuation models (AVMs) including machine learning (Han et al., 2017; Wang et al., 2018) and graph-based approaches (Wang et al., 2018; Wang et al., 2018) have been developed to solve this issue by conducting a price estimation according to the information of real estate, as shown in Figure 1. However, existing work adopting labeled datasets for supervised learning requires a large number of annotated labels and suffers from the generalization from seen to newly built appraisals. In real-world scenarios, another challenging constraint is that properties are often sparse or newly built in most areas. Beyond annotation costs, endlessly training specialized models on new types of real estate is not scalable in many practical scenarios. 
Therefore, it is desirable to have a systematic approach to learn generic knowledge from existing unlabeled types of transactions to achieve effective quality with only very few annotated examples. This problem is often defined as few-shot learning. We note that defining _low-resource scenarios_ can be dependent on the goals and expectations of financial institutions and customers. For instance, it can be defined as the number of transactions per city, per property type, or a combination of different factors. Figure 1. Illustration of the mortgage loan application process and how the AVM model works. In this paper, we aim to tackle the actual application scenarios of few-shot real estate appraisal: **rural area**, **new developing area**, and **reducing label effort**. Therefore, we define low-resource scenarios as the number of transactions per city. However, how to effectively utilize unlabeled records with a high variability of property types remains a challenging problem. The rapid development of self-supervised learning (SSL) has demonstrated the remarkable power of learning representations from the spatial structure of images (Dong et al., 2018; Chen et al., 2019) and the semantic relationships in language (Dong et al., 2018; Chen et al., 2019). This is beneficial in few-shot scenarios, since pre-trained models carry generic knowledge. However, real estate appraisal datasets are tabular, and these SSL techniques cannot be applied to them directly due to the different natures of their compositions (e.g., the 2D structure between pixels and the semantics between words). Thus, several SSL approaches have been proposed to learn general latent representations of tabular data (Chen et al., 2019; Chen et al., 2019). Nonetheless, there is no existing approach that is able to integrate expert knowledge into the SSL objective for property estimation. Existing tabular-based SSL methods are feature-agnostic, which means they do not consider the meaning of the features and regard all features as having the same importance. In this paper, we propose DoRA, a domain-based self-supervised learning framework to tackle low-resource scenarios for real estate appraisal. In order to design an upstream task that produces universal representations, a pre-training stage is introduced by solving an intra-sample domain-based pretext task (i.e., learning the appraiser's knowledge) from the unlabeled set. In addition, inter-sample contrastive learning is proposed to distinguish the similarities and discrepancies between transactions across towns. In this manner, the pre-trained embeddings provide robust, lower-dimensional representations that contain more structured and domain-based information for use in the downstream task with only limited data. In the fine-tuning stage, the pre-trained embedders and encoder are reused as a feature extractor for converting downstream transactions with pre-trained embeddings, and the weights are then adjusted based on a few target examples. To summarize, the contributions of our work are as follows: * To the best of our knowledge, DoRA is the first work focusing on low-resource real estate appraisal, which not only meets the needs of real-world scenarios but can also be adopted in other property valuations (e.g., house rental). * The proposed framework is introduced with novel and effective intra- and inter-sample SSL objectives to learn robust geographical knowledge from unlabeled records.
* Extensive experiments were conducted to empirically show that DoRA is effective in few-shot settings compared with existing methods. We also illustrate a developed system of DoRA and the real-world industrial scenarios for cities and towns with extremely limited transactions. ## 2. Related Work **Real Estate Appraisal.** Previous works defined real estate appraisal as a supervised regression problem, and addressed it with machine learning techniques (Zhao et al., 2019; Zhang et al., 2019; Zhang et al., 2019). To incorporate multi-modal data sources for improving the performance, Zhao et al. (Zhao et al., 2019) took the visual content of rooms into account using a deep learning framework with XGBoost (Chen et al., 2019), while Bin et al. (Bin et al., 2020) utilized street map images with attention-based neural networks. On the other hand, Luce was proposed to tackle spatial and temporal sparsity with the lifelong learning heterogeneous information network consisting of graph convolutional networks and long short-term memory networks (Zhao et al., 2019). Nonetheless, utilizing unlabeled transactions with high variability for low-resource real estate appraisal remains an unexplored yet challenging problem, which is also beneficial for property valuations. We, therefore, aimed to design a self-supervised learning approach to learn domain representations of transactions from the unlabeled set. **SSL in Tabular Data.** Recently, SSL has achieved prominent success in the image (Dong et al., 2018; Chen et al., 2019), audio (Zhao et al., 2019), and text (Dong et al., 2018; Chen et al., 2019) research fields. However, these approaches are often difficult to transfer to the tabular domain since tabular data do not have explicit structures to learn the contextualized representations. Therefore, multiple SSL approaches are proposed to learn the relation and latent structure between features in the tabular data domain (Chen et al., 2019; Chen et al., 2019; Chen et al., 2019; Zhang et al., 2019). However, there is another domain that has not yet been explored: SSL for property valuations, which is mainly comprised of records in tabular format, and requires domain knowledge to evaluate the property objectively. To that end, we have designed a pretext task based on domain knowledge and inter-sample contrastive learning to reinforce the model equipping domain-based contextualized representations of limited transactions for downstream tasks. ## 3. Preliminaries ### Datasets Our three datasets (building, apartment, and house) were collected from the Taiwan Real Estate Transaction Platform (Zhao et al., 2019), which contains real estate transactions in Taiwan from 2015 to 2021. It is noted that the dataset of previous work (Zhao et al., 2019) is a subset of our collected building dataset since the authors adopted only the top 6 largest cities, while we expanded the dataset to all 22 cities and 3 different property types to accommodate more challenging but valuable real-world scenarios, and investigated the robustness of the model, especially for handling limited transactions. Following (Zhao et al., 2019), 3 months of transactions were used as the testing set (Apr. 2021 to Jun. 2021, 24,142 records), 3 months of transactions were used as the validation set (Jan. 2021 to Mar. 2021, 40,124 records), and the others were used as the training set. 
We note that most real estate in the training set does not have corresponding prices due to the maintenance of the transaction platform; therefore, we used these unlabeled cases to form the unlabeled set (434,243 records). We also collected two additional types of features to comprehensively describe the real estate from a neighboring and global view: PoI (Point of Interest) features, and economic and geographical features. We summarize these features as follows due to space limitations. **Real Estate Features.** Each real estate record has 39 features, including 16 categorical features and 23 numerical features representing the metadata, for instance, the location, the layout of the real estate, the current condition of the real estate, and household facilities. **PoI Features.** The original data is in spatial distribution format. It was collected from a third-party map information company. We designed a PoI converter to transform the data into a tabular format by dividing the facilities into YIMBY (Yes In My Back Yard, i.e., desirable adjacent public facilities) facilities, e.g., park, school, and NIMBY (Not In My Back Yard, i.e., non-desirable adjacent public facilities) facilities, e.g., power station, landfill. Then, the number of PoIs around each real estate property was counted using the Euclidean distance. For instance, the feature YIMBY_100 denotes the number of YIMBY facilities within 100 meters. **Economic and Geographical Features.** We added 7 external socio-economic features based on the real estate transaction quarter to represent the global view, including the house price index, unemployment rate, economic growth rate, lending rate, land transactions count, average land price index, and steel price index. We also incorporated the land area and population density by town name to consider the number of residents and the demand. ### Problem Formulation As discussed in the Introduction, we randomly sampled 1 and 5 shots for each city from the labeled training set as annotated examples (i.e., the support set) to estimate the value of the real estate, and each example consisted of real estate features, PoI features, and economic and geographical features. For instance, there is a total of 110 transactions in the 5-shot setting since the number of cities in Taiwan is 22. We note that one of the cities, Lianjiang, does not have any apartments due to the nature of the city; thus, the apartment dataset only has 105 instances in the 5-shot setting. It is noted that reporting each setting in the paper may not be feasible due to the page limit; therefore, we follow the standard few-shot settings (e.g., (Kang et al., 2018)) to report the overall performance in the experiments. ## 4. Method The DoRA framework is illustrated in Figure 2. DoRA takes economic and geographical features, real estate features, and PoI features as inputs. The architecture of DoRA is mainly comprised of the embedder, encoder, and predictors. The training pipeline is decomposed into two phases: **Pre-training stage**: Train with the intra-sample pretext task and inter-sample contrastive learning to learn contextualized representations. **Fine-tuning stage:** Train with labeled data to predict the values of real estate. ### Model Architecture **Embedder.** The heterogeneous input features can be categorized into 4 types: numerical real estate features, categorical real estate features, numerical economic and geographical features, and numerical PoI features.
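As a rough illustration of the PoI converter described above, the following sketch counts facilities within a set of radii of a property; the radii, the coordinate convention, and the function name are assumptions made for this example, not details from the paper. The resulting counts form the numerical PoI features fed to the embedder introduced next.

```python
import numpy as np

def poi_features(property_xy, yimby_xy, nimby_xy, radii=(100, 250, 500, 1000)):
    """Count YIMBY/NIMBY facilities within each radius (in metres) of one property.

    property_xy: (2,) projected coordinates of the property;
    yimby_xy / nimby_xy: (N, 2) coordinates of desirable / non-desirable facilities.
    """
    property_xy = np.asarray(property_xy, dtype=float)
    features = {}
    for name, facilities in (("YIMBY", yimby_xy), ("NIMBY", nimby_xy)):
        dists = np.linalg.norm(np.asarray(facilities, dtype=float) - property_xy, axis=1)
        for r in radii:
            features[f"{name}_{r}"] = int((dists <= r).sum())  # e.g. YIMBY_100
    return features
```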
The numerical real estate features \(NR_{i}\) of the \(i\)-th real estate property are embedded as follows: \[E_{NR_{i}}=Embedder_{NR}(NR_{i}), \tag{1}\] where the dimensions of \(E_{NR_{i}}\) and \(NR_{i}\) are \(d_{NR}\) and 23, respectively. To encode categorical real estate features, a naive method is to represent them with one-hot encodings, which become sparse for high-cardinality categories and fail to preserve contextual information across features. Thus, each category of real estate features is encoded separately and then concatenated as the categorical real estate embeddings: \[E^{j}_{CR_{i}}=Embedder^{j}_{CR}(CR^{j}_{i}), \tag{2}\] \[E_{CR_{i}}=E^{1}_{CR_{i}}\oplus E^{2}_{CR_{i}}\oplus...\oplus E^{16}_{CR_{i}}, \tag{3}\] where \(j\) indicates the index of the 16 categorical real estate features, \(\oplus\) is the concatenation operator, and the dimensions of \(E^{j}_{CR_{i}}\) and \(E_{CR_{i}}\) are \(d_{CR}\) and \(16d_{CR}\), respectively. Similarly, two embedders with the same architecture as the numerical real estate embedder are employed to encode the numerical economic and geographical features \(Econ\&Geo_{i}\) and the PoI features \(PoI_{i}\) of the \(i\)-th real estate property, respectively: \[E_{Econ\&Geo_{i}}=Embedder_{Econ\&Geo}(Econ\&Geo_{i}), \tag{4}\] \[E_{PoI_{i}}=Embedder_{PoI}(PoI_{i}), \tag{5}\] where the input dimensions of \(Econ\&Geo_{i}\) and \(PoI_{i}\) are 9 and 16, respectively, and the dimensions of \(E_{Econ\&Geo_{i}}\) and \(E_{PoI_{i}}\) are \(d_{Econ\&Geo}\) and \(d_{PoI}\). The aforementioned embedders are composed of an MLP and a Mish activation function (Kang et al., 2018) similar to (Kang et al., 2018). Afterwards, the embedding of the \(i\)-th real estate property \(E_{i}\) is concatenated as: \[E_{i}=E_{NR_{i}}\oplus E_{CR_{i}}\oplus E_{Econ\&Geo_{i}}\oplus E_{PoI_{i}}, \tag{6}\] where \(E_{i}\in R^{d_{NR}+16d_{CR}+d_{Econ\&Geo}+d_{PoI}}\). **Encoder.** To encode the \(i\)-th embedding of real estate, the encoder is introduced to learn the contextualized feature representation \(Z_{i}\) during both the pre-training and fine-tuning stages: \[Z_{i}=Encoder(E_{i}), \tag{7}\] where \(Z_{i}\) is a \(d_{Z}\)-dimensional vector. Since existing work on SSL for tabular data mainly focuses on the strategies of pretext tasks (Kang et al., 2018; Wang et al., 2018), we also adopt an MLP with \(N\) layers to align the comparison, where the number of layers \(N\) of the encoder is tuned to ensure competitive performance on the pretext task (Section 5.2). The output of the encoder can then be used to learn prior knowledge in the pre-training stage and downstream tasks in the fine-tuning stage. Figure 2. The framework of DoRA. The left side shows the three input data sources, including economic and geographical features, real estate features, and PoI features. The right side is the two stages of DoRA: 1) the pre-training stage for located town prediction and 2) the fine-tuning stage for real estate appraisal. ### Pre-Training DoRA **Intra-Sample Pretext Task: Located Town Prediction.** To enrich the feature representations from the unlabeled set, an intuitive method for designing the pretext task is to add noise and reconstruct the input, e.g., (Zhou et al., 2017), which treats all features with the same importance but neglects the domain knowledge of the meaning of the features.
Inspired by a recent study (Kang et al., 2018) showing that conceptual connections between pretext and downstream task features benefit the downstream task, we introduce a domain-based pretext task: **predict the located town of the given real estate**, which can also be adopted in various geographic-related tasks (e.g., house rental). The input features of the real estate do not include the located town to avoid leaking the label. In this way, the model is equipped with fine-grained domain knowledge to distinguish what might be the composition of real estate for each city, which benefits downstream tasks with limited transactions. **Pre-Trained Predictor.** The pre-trained predictor takes the feature representation of the \(i\)-th real estate property as the input, and predicts the corresponding town \(\hat{Y}_{i}\in\mathbb{R}^{N_{Y}}\) with an MLP and the softmax activation function, where \(N_{Y}\) is 350: \[\hat{Y}_{i}=Predictor_{pretext}(Z_{i}). \tag{8}\] **Pre-Training Loss.** During the pre-training stage, the embedders, encoder, and pre-trained predictor are jointly trained in the following optimization problem: \[\min_{M}\mathbb{E}[\alpha\mathbb{L}_{ce}(y,\hat{y})+(1-\alpha)\mathbb{L}_{cl}(y,\hat{y})], \tag{9}\] where \(\alpha\) adjusts the weight between the two losses, \(y\) is the ground-truth of the located town, and \(M=\{Embedder_{NR},Embedder_{CR},Embedder_{Econ\&Geo},Embedder_{PoI},\)\(Encoder,Predictor_{pretext}\}\). \(\mathbb{L}_{ce}\) is the cross-entropy loss: \[\mathbb{L}_{ce}=-\sum_{i=1}^{|\mathbb{C}|}Y_{i}log(\hat{Y}_{i}), \tag{10}\] where \(|\mathbb{C}|\) denotes the number of records in the unlabeled set. **Inter-Sample Contrastive Learning.** Since cross-entropy is sensitive to noisy labels and lessens generalization performance (Kang et al., 2018; Wang et al., 2018), we extend contrastive learning (CL) to incorporate label information to consider the similarities and discrepancies between real estate across towns. \(\mathbb{L}_{cl}\) is defined as: \[\mathbb{L}_{cl}=\sum_{i\in I}\frac{-1}{|P(i)|}\sum_{p\in P(i)}log\frac{exp(Z_{i}\cdot Z_{p}/\tau)}{\sum_{a\in A(i)}exp(Z_{i}\cdot Z_{a}/\tau)}, \tag{11}\] where \(P(i)\) is the set of indices of all positive pairs, \(A(i)\) is the set of all positive and negative pairs, \(\tau\) is a temperature parameter, and \(I\) is the set of instances (i.e., real estate properties) in a batch. Positive pairs are two instances located in the same town, while negative pairs are two instances located in different towns. As a result, \(Z_{p}\) is a contextualized embedding sampled from the same town within the batch, forming a positive pair for CL. By optimizing \(\mathbb{L}_{cl}\), embeddings of the same class are pulled closer, and embeddings from different classes are pulled apart. ### Fine-Tuning DoRA **House Price Predictor.** To train a regressor for house price prediction, the feature representations from the pre-trained encoder of the \(i\)-th real estate property are fed into the house price predictor: \[\hat{P}_{i}=Predictor_{price}(Z_{i}), \tag{12}\] where \(\hat{P}_{i}\) is the estimated price. **Fine-Tuning Loss.** To optimize the estimated prices of real estate, the pre-trained embedders, pre-trained encoder, and the house price predictor of DoRA are jointly fine-tuned by minimizing the mean square error loss \(\mathbb{L}_{mse}\): \[\mathbb{L}_{mse}=\frac{1}{|S|}\sum_{s_{i}\in S}(\hat{P}_{i}-P_{i})^{2}, \tag{13}\] where \(S\) is the support set and \(P_{i}\) is the \(i\)-th ground truth price.
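To make the pre-training objective of Eqs. (9)–(11) concrete, the following is a minimal PyTorch-style sketch, not the authors' code: the embedding normalization, the handling of anchors without positives, and all variable names are assumptions, while \(\alpha=0.7\), \(\tau=0.1\), and the 350 town classes follow the text.

```python
import torch
import torch.nn.functional as F

def pretraining_loss(logits, z, towns, alpha=0.7, tau=0.1):
    """Weighted combination of the town cross-entropy (Eq. 10) and the
    supervised contrastive loss over in-batch town labels (Eq. 11).

    logits: (B, 350) town predictions, z: (B, d_Z) encoder outputs Z_i,
    towns: (B,) integer town labels.
    """
    ce = F.cross_entropy(logits, towns)

    z = F.normalize(z, dim=1)                               # assumption: unit-norm embeddings
    sim = (z @ z.t()) / tau                                 # pairwise Z_i . Z_p / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = towns[:, None].eq(towns[None, :]) & ~self_mask    # positives: same town, not the anchor

    sim = sim.masked_fill(self_mask, float("-inf"))         # A(i) excludes the anchor itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_log_prob = log_prob.masked_fill(~pos, 0.0).sum(dim=1)
    n_pos = pos.sum(dim=1)
    valid = n_pos > 0                                       # anchors with at least one positive
    cl = -(pos_log_prob[valid] / n_pos[valid]).mean() if valid.any() else logits.new_zeros(())

    return alpha * ce + (1 - alpha) * cl
```

At fine-tuning time, the same encoder output would instead feed the price predictor and be trained with the MSE of Eq. (13).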
## 5. Experiments In this section, we attempt to answer the following research questions on three property types of real-world real estate datasets: 1. How does DoRA perform in few-shot real estate appraisal? (Section 5.2) 2. Do the proposed components, the pretext task, and the input features contribute to DoRA? (Section 5.3) 3. How can DoRA be deployed for low-resource real estate scenarios? (Section 5.4) ### Experimental Setup **Implementation Details.** The dimensions of \(d_{NR}\), \(d_{Econ\&Geo}\), and \(d_{PoI}\) are 16, the dimension of \(d_{CR}\) is 10, and the dimension of \(d_{Z}\) is 256. The number of layers of the encoder \(N\) is 6, designed with hidden dimensions \((2d_{Z},4d_{Z},8d_{Z},4d_{Z},d_{Z})\) in order. We set the weight \(\alpha\) to 0.7 and the temperature \(\tau\) to 0.1. For the numerical features, we apply standard normalization. In the fine-tuning stage, both DoRA and the other baselines use an MLP as the house price predictor. We employ the AdamW optimizer (Kingma et al., 2014) with a learning rate of 0.005, and the batch size is 512. During the pre-training stage, we use the unlabeled sets of all property types to learn a pre-trained model to enforce generalizability, and then fine-tune on the various property-type datasets. The training epochs of the pre-training and fine-tuning stages are 150 and 200, respectively. All of the hyper-parameters in the experiment were tuned based on the validation set. All experiments were repeated 5 times with different random seeds for sampling support sets to reduce the bias of few-shot sampling, and we report average metrics with standard deviations for each evaluation metric. **Evaluation Metrics.** Previous work mainly focused on regression metrics for evaluating house price prediction (Kingma et al., 2014; Wang et al., 2018). We extended these metrics with hit rate k% (Kang et al., 2018) to measure the fraction of target properties appraised within a tolerance error percentage k, conforming to real-world financial requirements. Therefore, we adopted mean absolute percentage error (MAPE), mean absolute error (MAE), and hit rate 10% (HR10%) to comprehensively evaluate the results. **Baselines.** The baselines can be categorized into four groups: 1) **Statistics model (STA):** Historical Average (HA), 2) **Supervised models (SUP):** Linear Regression (LR), XGBoost (Chen et al., 2016), DNN, and DNN with contrastive learning (DNN + CL), 3) **Self-supervised models (SSL)**: DAE (Kang et al., 2018) and SubTab (Kang et al., 2018), and 4) **Graph-based models (Graph)**: MugRep (Wang et al., 2018) and ReGram (Kingma et al., 2014). The SSL baselines are also pre-trained using the unlabeled set and are then fine-tuned to the house price prediction task. ### Overall Performance The pre-training performance of DoRA reaches about 0.85 and 0.96 in terms of macro-F1 and micro-F1 scores, confirming that DoRA is capable of detecting the geographic locations of real estate. Table 1 and Table 2 summarize the performance comparisons of the various methods in the 1-shot and 5-shot scenarios. The best result of each metric is highlighted in boldface and the second best is underlined. Quantitatively, the improvement in DoRA is at least 7.6% for MAPE, 11.59% for MAE, and 3.34% for HR10% on average for the three property types.
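For reference, the three metrics reported above might be computed as in the following NumPy sketch; this is a common formulation rather than the authors' exact code, and MAE is left in raw price units since the excerpt does not state its scaling.

```python
import numpy as np

def appraisal_metrics(y_true, y_pred, k=10):
    """MAPE, MAE and hit rate @ k% for a set of appraised prices."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ape = np.abs(y_pred - y_true) / y_true              # absolute percentage error per record
    return {
        "MAPE": 100.0 * ape.mean(),
        "MAE": np.abs(y_pred - y_true).mean(),          # raw price units
        f"HR{k}%": 100.0 * (ape <= k / 100.0).mean(),   # share of records within +/- k%
    }
```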
We make the following observations: 1) DoRA consistently outperforms the STA, SUP, SSL, and Graph approaches for the building, apartment, and house datasets with limited transactions, while some SUP models (e.g., XGBoost) even perform worse than the statistics-based method in the 1-shot scenario. Moreover, SUP approaches lag significantly on all datasets, which indicates that supervised methods require a large amount of labeled data to achieve competitive performance. We can also observe that graph-based methods, which rely on neighboring transactions to construct a graph, perform worse since most rural real estate does not have neighbors. 2) DAE and SubTab perform worse than DoRA in terms of all metrics and across different scenarios, which verifies that randomly adding noise to the inputs and regarding all features with the same importance hinder models' learning of domain-based representations from the unlabeled set. This also highlights the contribution of the intra-sample domain-based pretext task in DoRA to improving downstream performance. 3) We also notice that adding the contrastive loss to the DNN slightly improves some metrics, which implies that contrastive learning enriches representations for the downstream task, even when only limited data are available. ### Ablation Study **Model Ablation.** We study a comprehensive component ablation with the building dataset in the 5-shot scenario in terms of MAPE. As shown in Table 3, only using the unlabeled set of the corresponding type (building type in row 1) significantly degrades the downstream performance, which signifies that DoRA is able to leverage unlabeled sets from various property types to improve performance on house price prediction. Rows 2, 3, 4, and 5 present the sensitivity analysis of different hyper-parameters, which shows that removing the contrastive loss and changing the weight \(\alpha\) reduce the performance more substantially. In addition, freezing the pre-trained encoder and using different dimensions of the feature representations also negatively impact house price performance.
\begin{table} \begin{tabular}{c c|c c c|c c c|c c c} \hline \hline & & & Building & & & \multicolumn{2}{c|}{Apartment} & \multicolumn{2}{c}{House} \\ \cline{3-11} Type & Model & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) \\ \hline STA & HA & 45.36\(\pm\)0.97 & 12.93\(\pm\)0.25 & 13.91\(\pm\)0.09 & 46.20\(\pm\)2.73 & 16.74\(\pm\)0.36 & 13.69\(\pm\)1.19 & 43.00\(\pm\)0.84 & 7.74\(\pm\)0.04 & 18.29\(\pm\)0.19 \\ \hline \multirow{4}{*}{SUP} & LR & 213.1\(\pm\)15.16 & 33.95\(\pm\)24.52 & 4.61\(\pm\)4.42 & 153.3\(\pm\)19.65 & 34.21\(\pm\)6.57 & 3.69\(\pm\)1.48 & 167.4\(\pm\)0.80 & 27.23\(\pm\)6.66 & 4.24\(\pm\)2.94 \\ & XGBoost & 50.46\(\pm\)11.12 & 13.22\(\pm\)3.24 & 11.42\(\pm\)3.75 & 40.21\(\pm\)2.47 & 14.17\(\pm\)2.22 & 16.86\(\pm\)1.70 & 52.69\(\pm\)3.32 & 9.77\(\pm\)1.51 & 13.80\(\pm\)3.58 \\ & DNN + CL & 41.31\(\pm\)22.14 & 13.06\(\pm\)0.74 & 12.50\(\pm\)4.90 & 40.84\(\pm\)2.96 & 15.72\(\pm\)1.73 & 15.56\(\pm\)2.32 & 37.93\(\pm\)4.65 & 8.06\(\pm\)0.47 & 16.20\(\pm\)1.47 \\ & DNN + CL & 41.68\(\pm\)4.21 & 12.85\(\pm\)0.77 & 12.32\(\pm\)14.9 & 38.34\(\pm\)1.41 & 14.47\(\pm\)2.42 & 14.64\(\pm\)2.88 & 2.70\(\pm\)0.02 & 8.760\(\pm\)0.87 & 16.20\(\pm\)0.44 \\ \hline \multirow{2}{*}{SSL} & DAE & 44.67\(\pm\)0.90 & 12.93\(\pm\)1.47 & 12.51\(\pm\)0.68 & 4.51\(\pm\)0.90 & 14.25\(\pm\)2.30 & 14.16\(\pm\)0.44 & 38.40\(\pm\)2.81 & 6.73\(\pm\)0.14 & 21.77\(\pm\)0.84 \\ & SubTab & 39.20\(\pm\)3.49 & 22.32\(\pm\)7.30 & 12.96\(\pm\)3.46 & 37.99\(\pm\)0.79 & 14.34\(\pm\)0.45 & 17.74\(\pm\)0.81 & 41.96\(\pm\)1.31 & 14.23\(\pm\)0.86 & 13.77\(\pm\)2.95 \\ \hline \multirow{2}{*}{Graph} & MugRep & 52.28\(\pm\)12.03 & 22.74\(\pm\)3.44 & 11.53\(\pm\)3.63 & 51.62\(\pm\)13.52 & 20.44\(\pm\)8.52 & 12.22\(\pm\)1.80 & 51.38\(\pm\)10.86 & 21.87\(\pm\)7.72 & 11.65\(\pm\)4.69 \\ & ReGram & 41.83\(\pm\)1.31 & 14.47\(\pm\)3.65 & 12.77\(\pm\)3.30 & 42.36\(\pm\)1.25 & 13.09\(\pm\)3.31 & 14.90\(\pm\)0.86 & 39.13\(\pm\)0.31 & 7.94\(\pm\)0.27 & 16.94\(\pm\)1.06 \\ \hline \multirow{2}{*}{ \begin{tabular}{c} DoRA (Ours) \\ \end{tabular} } & DoRA (Ours) & **38.77\(\pm\)2.85** & **11.16\(\pm\)0.85** & **14.53\(\pm\)1.24** & **33.73\(\pm\)1.04** & **10.51\(\pm\)2.14** & **19.55\(\pm\)2.25** & **33.59\(\pm\)1.59** & **5.63\(\pm\)0.25** & **22.41\(\pm\)1.72** \\ \hline \hline \end{tabular} \end{table} Table 1. Overall 1-shot performance evaluated by MAPE, MAE, and HR10% on the building, apartment, and house datasets. 
\begin{table} \begin{tabular}{c c|c c c|c c} \hline \hline & & Building & & \multicolumn{2}{c|}{Apartment} & \multicolumn{2}{c}{House} \\ \cline{3-6} Type & Model & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) \\ \hline STA & HA & 45.17\(\pm\)0.97 & 13.04\(\pm\)0.24 & 13.78\(\pm\)0.09 & 46.20\(\pm\)2.73 & 17.03\(\pm\)0.19 & 13.69\(\pm\)1.19 & 45.74\(\pm\)2.66 & 7.77\(\pm\)0.04 & 18.47\(\pm\)0.25 \\ \hline \multirow{4}{*}{SUP} & LR & 99.74\(\pm\)34.19 & 17.38\(\pm\)3.06 & 10.06\(\pm\)2.46 & 127.0\(\pm\)97.90 & 30.59\(\pm\)15.23 & 8.58\(\pm\)3.98 & 245.25\(\pm\)27.71 & 39.70\(\pm\)37.55 & 6.75\(\pm\)6.11 \\ & XGBoost & 34.75\(\pm\)24.46 & 8.63\(\pm\)0.82 & 18.76\(\pm\)4.65 & 39.53\(\pm\)11.50 & 11.65\(\pm\)2.40 & 20.54\(\pm\)4.53 & 36.29\(\pm\)8.74 & 7.59\(\pm\)1.58 & 23.00\(\pm\)2.64 \\ & DNN & 37.71\(\pm\)11.17 & 11.07\(\pm\)1.09 & 14.46\(\pm\)1.63 & 38.06\(\pm\)3.14 & 14.62\(\pm\)1.52 & 15.41\(\pm\)4.05 & 33.74\(\pm\)0.62 & 7.30\(\pm\)0.71 & 17.24\(\pm\)4.37 \\ & DNN + CL & 36.77\(\pm\)20.10 & 10.94\(\pm\)1.21 & 52.23\(\pm\)2.91 & 37.55\ **Feature and Pretext Task Ablations.** To investigate the relative effects of different feature sources, we evaluate DoRA with full features and its three variants: 1) **RF** includes only real estate features, which are the basic features describing real estate; 2) **RF+PoI** includes both real estate features and PoI features; 3) **RF+Econ&Geo** includes both real estate features and economic and geographical features; 4) **All** includes the complete set of features. Figure 3(a) reports the performance with the building dataset in the 5-shot setting. Removing either PoI features or economic and geographical features leads to inferior performance in terms of all metrics. Moreover, PoI features affect the performance more considerably compared with economic and geographical features, which indicates that the neighboring facilities are critical factors for real estate appraisal. These observations suggest that considering only the metadata of real estate is insufficient for house price prediction, while various sources describing real estate from a global viewpoint enhance the capability of the model. We also examine various pretext tasks as shown in Figure 3(b), where the original pretext task is replaced with another pretext task derived from the real estate metadata. We can observe that performance on all metrics degrades when the pretext task is replaced, which empirically showcases the importance of the pretext task objective. ### Deployment **Deployed System.** E.SUN Bank is a commercial bank encouraging AI-driven solutions for businesses in Taiwan. In the past, real estate appraisal tasks were primarily manual, high-cost, and subjective to the appraiser. Moreover, it is challenging to estimate real estate if there are limited historical transactions. To that end, we partnered with the fintech team to deploy DoRA as the automated appraisal component of an online mortgage calculator platform, which requires real estate appraisal for suggesting the mortgage. In the prototype system, the user is required to enter the information (e.g., address, house age, parking space, etc.) of the property that is to be mortgaged. DoRA will then execute an online real estate appraisal by extracting the PoI features based on the house address as part of the inputs.
Afterwards, the appraised price will be incorporated with other internal data to compute the approximate loan. **Case Study.** As the distribution of transactions across cities is long-tailed, the great majority of cities only have a few transactions. Therefore, we simulated three cities with extremely limited transactions and compared DoRA with XGBoost. As shown in Table 4, we can observe that DoRA is significantly superior to the baseline for all metrics, particularly for property types where the baseline fails to appraise real estate at all (0% hit rate). The performance on the house type in Wugu District shows that DoRA appraises effectively, while the baseline deteriorates substantially on all metrics. These cases confirm that DoRA is capable of handling low-resource scenarios due to the incorporation of the unlabeled set. In partnership with the fintech team, such an improvement increases both resource utilization and the quality of real estate appraisal. ## 6. Conclusion In this work, we propose DoRA, a domain-based SSL framework for low-resource real estate appraisal, which is one of the challenging property valuation tasks due to the heavy human effort it requires. Predicting the geographic location as an intra-sample pretext task reinforces the model's learning of domain-based representations of real estate. Moreover, DoRA integrates inter-sample contrastive learning to distinguish the discrepancies between transactions across towns, improving robustness to the limited examples in downstream tasks. Extensive results on different property types of real-world real estate appraisals demonstrate that DoRA consistently outperforms supervised, graph, and SSL approaches in few-shot scenarios. Prior to this work, real estate estimations with new and rural transactions were mainly evaluated by appraisers using manual, ad-hoc, intuition-driven methods at E.SUN Bank. Now, fintech teams have adopted DoRA for automatically estimating values of real estate, which saves time, drives objectivity in the business, and empowers the financial institution to plan and adapt dynamically to new information. As our proposed approach was flexibly designed with SSL, we expect DoRA to be useful to other applied scientists in financial marketplaces, especially those whose goal is to perform property valuation with only a small amount of labeled data. ## 7. Acknowledgments We would like to thank Fu-Chang Sun, Yi-Hsun Lin, Hsien-Chin Chou, Chih-Chung Sung, and Leo Chyn from E.SUN Bank for sharing data and discussing the findings.
\begin{table} \begin{tabular}{c c|c c c|c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{Building} & \multicolumn{3}{c|}{Apartment} & \multicolumn{3}{c}{House} \\ Area & Model & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) & MAPE (\(\downarrow\)) & MAE (\(\downarrow\)) & HR10\% (\(\uparrow\)) \\ \hline \multirow{2}{*}{Nantun District, Taichung} & XGBoost & 39.91 & 7.93 & 0.00 & 38.74 & 12.45 & 4.72 & 33.63 & 10.68 & 20.00 \\ & DoRA & **15.82** & **2.77** & **40.00** & **21.53** & **10.15** & **16.04** & **22.53** & **9.95** & **35.33** \\ \hline \hline \multirow{2}{*}{Wugu District, New Taipei} & XGBoost & 54.63 & 11.74 & 16.67 & 65.11 & 16.62 & 0.00 & 16.285 & 28.53 & 0.00 \\ & DoRA & **34.77** & **6.20** & **19.67** & **61.58** & **10.61** & **4.46** & **2.55** & **0.44** & **100.00** \\ \hline \hline \multirow{2}{*}{Nantou City, Nantou County} & XGBoost & 52.69 & 6.86 & 7.69 & 61.58 & 10.61 & 4.46 & 34.86 & 7.52 & 10.14 \\ & DoRA & **39.65** & **5.14** & **15.38** & **32.57** & **6.49** & **11.76** & **31.73** & **5.36** & **24.64** \\ \hline \hline \end{tabular} \end{table} Table 4. The case study of 1-shot performance with three areas. Figure 3. Feature and task ablations with the building dataset.
2306.01316
Independent Modular Networks
Monolithic neural networks that make use of a single set of weights to learn useful representations for downstream tasks explicitly dismiss the compositional nature of data generation processes. This characteristic exists in data where every instance can be regarded as the combination of an identity concept, such as the shape of an object, combined with modifying concepts, such as orientation, color, and size. The dismissal of compositionality is especially detrimental in robotics, where state estimation relies heavily on the compositional nature of physical mechanisms (e.g., rotations and transformations) to model interactions. To accommodate this data characteristic, modular networks have been proposed. However, a lack of structure in each module's role, and modular network-specific issues such as module collapse have restricted their usability. We propose a modular network architecture that accommodates the mentioned decompositional concept by proposing a unique structure that splits the modules into predetermined roles. Additionally, we provide regularizations that improve the resiliency of the modular network to the problem of module collapse while improving the decomposition accuracy of the model.
Hamed Damirchi, Forest Agostinelli, Pooyan Jamshidi
2023-06-02T07:29:36Z
http://arxiv.org/abs/2306.01316v1
# Independent Modular Networks ###### Abstract Monolithic neural networks that make use of a single set of weights to learn useful representations for downstream tasks explicitly dismiss the compositional nature of data generation processes. This characteristic exists in data where every instance can be regarded as the combination of an identity concept, such as the shape of an object, combined with modifying concepts, such as orientation, color, and size. The dismissal of compositionality is especially detrimental in robotics, where state estimation relies heavily on the compositional nature of physical mechanisms (e.g., rotations and transformations) to model interactions. To accommodate this data characteristic, modular networks have been proposed. However, a lack of structure in each module's role, and modular network-specific issues such as module collapse have restricted their usability. We propose a modular network architecture that accommodates the mentioned decompositional concept by proposing a unique structure that splits the modules into predetermined roles. Additionally, we provide regularizations that improve the resiliency of the modular network to the problem of module collapse while improving the decomposition accuracy of the model. modular networks, representation learning, world models ## I Introduction Representation learning using monolithic models (models that use a single set of weights for every input) typically involves passing all data samples through the model and extracting features after training on pretext tasks deemed helpful for downstream tasks. However, this approach ignores the compositional characteristics of data generation processes. As an example of this common characteristic for images of objects, every object can be described as a composition of the object's shape, color, size, orientation, texture, etc. Instead of learning these concepts separately while considering the compositional nature of each possible combination of the mentioned concepts, monolithic models attempt to directly extract high-level features without any constraints that would impose such a structure explicitly. Modular neural networks [1] was proposed as a potential solution that considers this compositional characteristic, where a set of modules instead of a singular neural network is used to extract features from any given input. Generally, the feature extraction process for these models proceeds using a scoring method to determine which module or set of modules from the available modules should process the input. Then, the input is passed through the chosen modules. In the case where multiple modules were chosen, a combination method, such as adding features, is chosen to combine the features extracted by each module. Regardless of the module selection approach, while modular networks take a step towards solutions considering the mentioned compositional characteristics of data, one particular structural element is still missing from the currently available works. To have a compositional structure, there is a need for an identity state to be altered through combination with compositional concepts. We define this identity state as one of the true generative factors of the dataset that does not require another factor to be defined. Using the previous example of images of objects, one can consider the shape of the object (a physical concept) as the identity state that can be modified by compositional concepts (non-physical concepts) such as rotations, color, and scaling. 
In this example, while the object's shape can be defined separately from the other factors, the remaining factors require a shape to be defined in a hierarchical manner beforehand. Therefore, we propose structural changes to modular networks that allow the learning of two separate sets of modules, where one set of modules automatically learns a notion of identity, and the other set only encodes the compositional concepts. In particular, this is done by only allowing the compositional modules to observe the input while the identity modules do not. Instead, the identity modules will use a set of learnable parameters to output a static state that is modified using compositional modules. The reason for this change is that the identity modules will now have to learn a specific, unchangeable concept that remains static among large portions of the dataset. A problem that modular networks commonly deal with is when the model routes all the inputs through the same module or set of modules. Different works propose varying solutions, such as regularizing the router to choose diverse modules or modifying the way modules are combined during feature extraction. In this work, we propose to solve this problem indirectly by imposing independence constraints for the features extracted by each module and among the compositional and identity modules. By penalizing correlations between the concepts embedded by each module, this constraint would force modules to learn to embed information not considered by other modules, which leads to the decomposition of the concepts present in the dataset into separate modules. * Propose a new architecture for modular networks that promotes learning features that are in line with the compositional nature of data generation processes * Propose an independence-based solution that indirectly solves module collapse. * Provide experiments showcasing the automatic decomposition capabilities of the proposed approach in the extraction of the identity states. ## II Related Work Algorithmically, the previous works on modular networks share a few structural traits, such as the usage of a router and the recursiveness of the application of modules to the input. Meanwhile, other details such as the training method, learnability of different components of the methods, and the application differ between the works in the literature. In the following, a summary of the literature alongside the distinction between each work and ours is delineated. In [1], a router and a set of modules are trained using a generalized expectation-maximization (EM) algorithm. For each input, the router selects a set of modules from a library of modules to process the input. The outputs from selected modules are then combined (either through concatenation or addition) to compute the output of the layer. While a modular approach, this method does not impose any structural constraints on the modules to decompose features into identity and compositional states. While making use of reinforcement learning algorithms instead of EM, [2] allows the routing module to decide when to stop processing the original input rather than using a predetermined number of recursions. While the decomposition of the input into multiple atomic representations using multiple modules is similar to our work, we do not constrain our modules to encode specific information such as rotation, and the learning is done automatically during the training. 
Each of the mentioned works deals with a problem called module collapse, where during training, the routing algorithm routes all the inputs through only a few of the modules. [1] uses a different number of M-steps compared to E-steps in the EM algorithm to mitigate this issue. On the other hand, [3] finds that using multiple agents per task mitigates the module collapse problem. Even though previous works have, in some cases, shown that module collapse is prevented using various regularization techniques, without an independence constraint, diversity in module usage may not mean that module collapse does not occur, since multiple modules might still encode the same concept. Conventional module collapse detection methods cannot detect this. In contrast, our proposed independence-based approach solves this problem indirectly by preventing the same concept from being encoded by multiple modules. ## III Proposed approach The general architecture of the proposed approach is shown in Fig. 1. The pool of available modules can be split into two categories, where one group of modules is only used for learning compositional concepts, and modules from this group are able to observe the input. The other modules are not able to observe the input and instead output a learnable set of parameters in the shape of a transformation matrix to be combined with the compositional modules. Each compositional module's architecture is similar to that of the encoder of Lie group Variational Autoencoders (LVAE) [4]. Modules designated \(T^{I}\) represent the learnable identity matrices we name identity modules, while \(m^{c}\) designates compositional modules. While the proposed approach can be used with any set of data with compositional characteristics, we use an image reconstruction application in this paper. Therefore, a decoder is required to reconstruct the image using the features extracted by the modules. This module is designated by \(d\) in Fig. 1. To train the model, we first output a reconstruction of the input image using every combination of the available modules. Then, a winning combination is chosen as the one whose output has the lowest loss. In this section, we give a brief background of LVAEs. Then, the feature extraction and reconstruction processes are delineated. Finally, the regularizations and loss calculation methods are provided. ### _Lie-group VAE_ LVAE [4] is a variational autoencoder that extracts a vector of disentangled representations for the input image, while the same vector is an element of a Lie group represented using the tangent space of the group. This way, smooth equivariant representations are learned, where the encoder predicts values along the axes of the Lie algebra vector that are converted to their matrix representations through exponential mapping. In feature extraction, the input is first passed through an encoder, typically a CNN for images, and a distribution is inferred over the latent variables. After sampling from this distribution, the latent variables are used alongside the learnable Lie algebra bases to infer the element of the group representing the input through the exponential operator. This process is formulated as follows [4]. \[\begin{split}\mu,\sigma&=g(x),\;z=\mu+\sigma \epsilon,\\ T&=exp(zA),\;\hat{x}=d(T)\end{split} \tag{1}\] where \(g\) and \(d\) are the encoder and the decoder of this network, \(z\) is the latent vector, and \(A\) is the learnable Lie algebra bases.
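A minimal PyTorch sketch of Eq. (1) might look as follows, assuming the Lie algebra bases \(A\) are stored as a stack of square matrices (a detail the text does not spell out):

```python
import torch

def lie_group_features(mu, sigma, A):
    """Sketch of Eq. (1): sample z = mu + sigma * eps, then map it onto the
    group via the exponential map, T = exp(sum_i z_i A_i).

    mu, sigma: (B, d) encoder outputs g(x); A: (d, n, n) learnable Lie-algebra
    basis matrices (the matrix size n is an assumption here).
    """
    z = mu + sigma * torch.randn_like(sigma)        # reparameterised sample
    algebra = torch.einsum("bd,dij->bij", z, A)     # zA as a weighted sum of basis matrices
    T = torch.matrix_exp(algebra)                   # matrix exponential onto the group
    return z, T
```

The decoder \(d\) then reconstructs the image from the resulting transformation matrix \(T\).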
Additionally, \(T\in R^{u\times x}\) represents the extracted features as a transformation matrix, i.e., an element of the group representing the input. Fig. 1: The general structure of the proposed approach. To recover disentangled representations, regularizations are applied to the network. [4] showed that by imposing \(A_{i}A_{j}=0,\forall i\neq j\), the following would be satisfied. \[H_{ij}=\frac{\partial^{2}g(t)}{\partial z_{i}\partial z_{j}}=0 \tag{2}\] where \(A_{i}\) is the \(i^{th}\) Lie algebra basis and \(H\) represents the Hessian matrix. If (2) is satisfied for every \(i\) and \(j\) when \(i\neq j\), then the changes in the output of the network with respect to a latent variable are not dependent on the changes of other latent variables. This ensures that independent concepts are encoded into each latent variable during training. ### _Modular feature extraction_ To extract the features from an image, every compositional module first processes the input image as follows: \[T_{i}^{c}=m_{i}^{c}(x) \tag{3}\] where \(T_{i}^{c}\) represents the transformation features extracted by the \(i^{th}\) compositional module. Thereafter, every combination of the compositional features is calculated, denoted as: \[\mathbf{T}^{c}=\{T_{1}^{c},\ldots,T_{m}^{c},T_{1}^{c}T_{2}^{c},\ldots,T_{m-1}^{c}T_{m}^{c},\ldots,T_{1}^{c}\cdots T_{m-1}^{c}T_{m}^{c}\} \tag{4}\] where \(\mathbf{T}^{c}\) represents the set of all combinations of the compositional features, and \(m\) is the number of compositional modules set manually at the start of training. Now, to obtain the combined representation of the compositional features with the identity transformations, every element of the set \(\mathbf{T}^{c}\) is multiplied by the learnable transformation matrix representing each identity module as follows: \[\mathbf{T}=\{T_{c_{i}}T_{I_{j}}\mid T_{c_{i}}\in\mathbf{T}^{c}\;,\;T_{I_{j}}\in\mathbf{T}^{I}\}, \tag{5}\] where \(\mathbf{T}^{I}=\{T_{1}^{I},\cdots,T_{n}^{I}\}\) represents the set of learnable transformations from every identity module, and the pool of available modules consists of \(n\) identity modules. Every element of the set \(\mathbf{T}\) is considered a candidate representation for input \(x\) until the combination that produces the best output is chosen based on the scoring criteria described in Section III-C1. But first, we need to produce a reconstruction based on every combined feature in \(\mathbf{T}\) so that every element of this set can be evaluated in the image space. In this work, we use a single decoder network for every feature to get a set of reconstructions as follows: \[\hat{\mathbf{X}}=\{d(T_{i})\mid T_{i}\in\mathbf{T}\}, \tag{6}\] where \(d\) represents the decoder network. ### _Scoring, regularization, and loss function_ In order to update the weights of the modules, two steps remain. The winning combination needs to be chosen first. Then, based on the output from the winning combination, the loss will be calculated, and after the computation of the gradients, the weights of the modules responsible for the generation of the winning output will be updated. #### III-C1 Scoring the output from each combination Three criteria are used to score the reconstruction and the features extracted using each combination of modules. First, the reconstruction is evaluated using the image mean-squared-error loss \[\mathcal{L}_{i}^{img}=\|x-\hat{x}_{i}\|, \tag{7}\] where \(\hat{x}_{i}\) is the \(i^{th}\) element of \(\hat{\mathbf{X}}\).
The second criterion quantifies the level of independence between the features extracted by each module, alongside the independence of the features across the different modules. By promoting intra-module independence, the modules that output higher-quality latent variables are prioritized, while the inter-module independence constraint incentivizes combinations of modules that embed the least amount of correlated information with respect to the other modules in the combination. To quantify intra-module independence (which will also be used in the loss function later), we use the same approach as [4], where the gradient of the output with respect to a single element of the latent vector is differentiated with respect to every other element of the same vector. We extend this formulation to inter-module independence as well. The following shows the formulation for both independence quantification methods. \[\mathcal{L}^{ind}=\sum_{i,j}\frac{\partial^{2}\hat{x}}{\partial z_{i}\partial z_{j}}+\sum_{i,j}\frac{\partial^{2}\hat{x}}{\partial z_{i}^{\prime}\partial z_{j}^{\prime}} \tag{8}\] where \(z_{i},z_{j}\in Z_{k}^{I}\) and \(z_{i}^{\prime}\in Z_{k}^{I},z_{j}^{\prime}\in Z_{l}^{I}\) with \(l\neq k\). Additionally, \(Z_{k}^{I}\) represents the latent variables extracted by the \(k^{th}\) module in the combination under evaluation. Therefore, the first term on the right-hand side of (8) quantifies how correlated the variables in the latent vector of one module are, while the second term quantifies how correlated the variables of one module are with those of another module in the combination under evaluation. The final term of the scoring function is the Kullback-Leibler divergence (KLD) over the latent variables, used as part of the originally proposed VAE [5] loss function. Putting this scoring formula together, we have the following equation. \[S_{i}=-(\mathcal{L}_{i}^{img}+\mathcal{L}_{i}^{ind}+\sum_{j}\mathcal{L}_{ij}^{KL}) \tag{9}\] where \(S_{i}\) is the score of the \(i^{th}\) combination from (5) and \(\mathcal{L}_{ij}^{KL}\) is the KL divergence for the latent variables of the \(j^{th}\) module in combination \(i\). To choose the winning combination, the one with the maximum score is selected, and the loss is calculated for this winning combination. #### III-C2 Calculating the loss for the winning combination With the winning combination selected, the only step left is to calculate the loss and update the weights of the modules and the learnable parameters representing the learned identity of the winning combination. To this end, the following is used. \[\mathcal{L}_{i}=\mathcal{L}_{i}^{img}+\mathcal{L}_{i}^{ind}+\sum_{j}\mathcal{L}_{ij}^{KL}+\mathcal{L}_{i}^{extra} \tag{10}\] where the same image, independence, and KLD losses are reused, with an extra LVAE-specific loss term added that is not included in the scoring process and is only used to ensure training stability, as in [4]. So far, the loss function does not include any terms that would give the modules information about what the encoded identity matrices should represent. To explore the effects of the introduction of such an element, we use an additional loss and compare the results to the case where (10) is used. This term aims to guide the modules toward learning a specific ground truth factor as the identity representations. To do this, we introduce a classifier with a fully-connected architecture trained to predict the shape of the input based on the identity transformation of the winning combination.
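Before moving to the experiments, the selection machinery of (3)-(9) can be summarised in code. The sketch below is schematic and uses our own naming (the module and decoder objects follow an assumed interface, and the independence term of (8) is abstracted into a helper since it requires double back-propagation); it is not the authors' implementation.

```python
from itertools import combinations
import torch
import torch.nn.functional as F

def kld(mu, logvar):
    # Standard VAE KL term for one module's latent variables.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

def forward_and_select(x, comp_modules, identity_mats, decoder, independence_penalty):
    """Sketch of Eqs. (3)-(9): enumerate all module combinations, reconstruct,
    score each candidate, and return the winning combination."""
    feats = [m(x) for m in comp_modules]        # Eq. (3); each: (T_i, mu_i, logvar_i, z_i)
    best_score, winner = -float('inf'), None
    for r in range(1, len(feats) + 1):          # Eq. (4): every non-empty subset
        for idx in combinations(range(len(feats)), r):
            T_comb = feats[idx[0]][0]
            for i in idx[1:]:
                T_comb = T_comb @ feats[i][0]
            for j, T_I in enumerate(identity_mats):     # Eq. (5): append one identity
                recon = decoder(T_comb @ T_I)           # Eq. (6)
                l_img = F.mse_loss(recon, x, reduction='sum')                    # Eq. (7)
                l_ind = independence_penalty(recon, [feats[i][3] for i in idx])  # Eq. (8)
                l_kl = sum(kld(feats[i][1], feats[i][2]) for i in idx)
                score = -(l_img + l_ind + l_kl)         # Eq. (9)
                if score > best_score:
                    best_score = score
                    winner = {'modules': idx, 'identity': j, 'recon': recon}
    return winner
```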
Moreover, to ensure the imposed identity remains unchanged by the compositional modules, we add another loss that lowers the \(L^{2}\) distance between the classifier features of the identity and the combined representations, which we name the ID classifier loss. ## IV Experiments ### _Datasets_ We used the 3Dshapes [6] dataset, a synthetically generated dataset of 3D objects in the middle of a room. The variable factors used to generate this dataset consist of the camera's orientation with respect to the object, the object's shape and scale, and the colors of the walls, object, and floor. The objects in this dataset are capsules, spheres, cylinders, and cubes. A few example images of this dataset are shown in Fig. 2(a). A pool of 5 compositional modules and 5 learnable identity representations is available for every training session. ### _What are the learned identities?_ To visualize what the identity representations learn, we use the decoder without any input images to reconstruct the image represented by each learned identity. Fig. 2(b) shows one image per identity reconstruction. Note that the loss used for this training session is based on (10), and the ID classifier mentioned in Section III-C2 is not used. Based on this figure, the 4 shapes available in the dataset are split among the 5 modules, where one of the shapes (cube) is learned by 2 modules. Therefore, it is evident that the proposed identity-preserving approach to modular networks is able to decompose the shapes into different learnable representations automatically, without the need for labels for the different shapes. Additionally, despite the presence of 6 different ground truth generative factors, the learned representations capture the shape of the object and not any of the other 5 factors. Fig. 2(c) visualizes the reconstructions when an ID classifier is used. Similar to the previous reconstructions, one shape is learned twice, and the decomposition is based on shapes. ### _What modules are responsible for what shapes?_ The previous experiment confirms that the learned identities represent shapes when reconstructed. However, the same experiment needs to be performed when inputs are present to study the effects of the compositional modules on these identities. To this end, we separate the images in the test split with respect to their ground truth shapes. These images are then passed through the model, and the winning combination for each image is stored. Using this data, Table I is created, where each entry quantifies the fraction of images of that row's ground-truth shape whose winning combination contained that column's learned identity. This table is split row-wise for the two experiments based on the usage of an ID classifier. Without an ID classifier, all images containing a cylinder are passed through \(m_{1}\), and all images of spheres are passed through \(m_{3}\), which shows perfect decomposition for these two shapes. However, due to a lack of regularization on the compositional modules, these modules modify the shape of the learned identity alongside other variable factors, causing an imperfect decomposition for the cube and capsule shapes. With the introduction of the ID classifier, the decomposition is improved significantly: a large majority of the images for each shape are passed through a single module, leaving \(m_{3}\) with almost no images.
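The entries of Table I are obtained by simple bookkeeping over the winning combinations; a small sketch of how such a table can be accumulated is given below, assuming a `winning_identity` routine that returns the index of the identity module in the winning combination (our own naming, not the authors' code):

```python
from collections import defaultdict

def decomposition_table(test_set, winning_identity, n_modules=5):
    """For every ground-truth shape, record the percentage of images whose winning
    combination used each learnable identity m_1 ... m_5 (cf. Table I)."""
    counts = defaultdict(lambda: [0] * n_modules)
    totals = defaultdict(int)
    for image, shape_label in test_set:
        totals[shape_label] += 1
        counts[shape_label][winning_identity(image)] += 1
    return {shape: [100.0 * c / totals[shape] for c in row]
            for shape, row in counts.items()}
```

Since each winning combination contains exactly one identity module, every row produced this way sums to 100%, as in Table I.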
## V Conclusion In this work, we introduced a modular network-based model that is able to decompose the generative factors of a synthetically generated dataset into multiple compositional modules and static transformation matrices representing a physical factor present in the ground truth factors of the dataset. This is done by modifying the structure of the model and preventing a set of modules from observing the input, while imposing independence constraints for each module separately and between different modules. Each identity is embedded in a single transformation matrix modifiable by the learned compositional modules. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline ID Cls. & GT Shape & \(m_{1}(\%)\) & \(m_{2}(\%)\) & \(m_{3}(\%)\) & \(m_{4}(\%)\) & \(m_{5}(\%)\) \\ \hline \multirow{4}{*}{No} & Cube & 28.03 & 28.80 & 0 & 43.17 & 0 \\ & Cylinder & 100.0 & 0 & 0 & 0 & 0 \\ & Sphere & 0 & 0 & 100.0 & 0 & 0 \\ & Capsule & 40.32 & 0 & 0 & 0 & 59.68 \\ \hline \multirow{4}{*}{Yes} & Cube & 93.62 & 0 & 0 & 0 & 6.377 \\ & Cylinder & 0 & 0 & 0 & 0 & 100.0 \\ & Sphere & 0 & 0 & 12.33 & 87.66 & 0 \\ & Capsule & 0 & 77.80 & 13.34 & 7.992 & 0 \\ \hline \hline \end{tabular} \end{table} TABLE I: Shape decomposition evaluation (the ID Cls. column indicates whether the ID classifier was used). Fig. 2: Reconstructions of identity representations.
2304.10462
Creation and annihilation operators for 2D non-abelian anyons
We define creation and annihilation operators for any 2D non-abelian anyon theory by studying the algebraic structure from the anyon diagrammatic formalism. We construct the creation operators for Fibonacci anyons explicitly. We find that a single creation operator per particle type is not enough; an extra creation operator is needed for every alternative fusion channel. We express any physically allowed observable in terms of these creation and annihilation operators. Finally, we express the 2D Fibonacci Hubbard Hamiltonian in terms of the Fibonacci creation and annihilation operators, and we comment on developing methods for simulation based on these creation and annihilation operators.
Nicetu Tibau Vidal, Lucia Vilchez-Estevez
2023-04-20T17:08:04Z
http://arxiv.org/abs/2304.10462v3
# Creation and annihilation operators for 2D non-abelian anyons ###### Abstract We define creation and annihilation operators for any 2D non-abelian anyon theory by studying the algebraic structure from the anyon diagrammatic formalism. We construct the creation operators for Fibonacci anyons explicitly. We obtain that a single creation operator per particle type is not enough; we need an extra creation operator for every alternative fusion channel. We express any physically allowed observable in terms of these creation and annihilation operators. Finally, we express the 2D Fibonacci Hubbard Hamiltonian in terms of the Fibonacci creation and annihilation operators, and we comment on developing methods for simulation based on these creation and annihilation operators. Introduction--Anyons are postulated quasiparticle excitations in two-dimensional systems [1]. They have a topological nature and exotic exchange statistics [1; 2; 3; 4; 5; 6], which differentiate them from bosons and fermions. We call them topological particles or phases of matter because the geometry of space-time or the distance between them does not change the result of the relevant operations. These topological properties make anyons systems a promising platform for quantum information processing [7; 8; 9; 10; 11]. Topological quantum computing tries to exploit these features to have a robust computation against error due to local perturbations and noise by the environment. However, the experimental discovery of such systems has remained elusive so far [12; 13; 14; 15; 16; 17]. Information processing with topological systems has been one of the main attractions to the study of anyonic theories. We build on the recent information-theoretic perspective on anyons [18; 19; 20; 21; 22; 23; 24]. Nevertheless, anyons can also be very intriguing from a more foundational point of view. The notion of subsystems and locality in quantum information theory is crucial to understanding interactions between different systems. In a qubit theory, for example, we use the tensor product structure to describe systems consisting of multiple subsystems. Two non-abelian anyons can merge (fuse) together to different anyonic charges depending on the fusion channel. Therefore, to completely describe an anyonic quantum system, we need to know all the charges that make up the system and how they fuse with each other. This means there is no such thing as a tensor product between two subsystems since we need that extra bit of information on the overall charge of the composed system. There is a gap in the literature when talking about a creation and annihilation operator algebra for non-abelian anyons in 2D. Bosons and fermions have well-defined annihilation operators, so it is natural to look for them in anyon theories too. For anyons in one spatial dimension, the creation and annihilation operators have been found [25]. We believe that in the 2D case, the difficulty of defining modes (or subsystems) and the topological charge superselection rule are the main reasons for this literature gap. The latter is an interesting characteristic of anyon theories that ensures operators will only be physical observables when the total topological charge is conserved. In this work, we define an anyonic mode as simply connected sub-regions with boundaries of our two-dimensional space. We can then map the subsystem structure to the level of simply connected regions with the help of the planar representation of anyons [19]. 
Using the diagrammatic approach for anyons, we can find the candidates for annihilation operators within the operators left invariant by local transformations on the rest of the system. Anyon diagrams--An anyon is a quasiparticle that can exist in two-dimensional systems. We can think of putting two anyons together to create a new particle. This process is known as _fusion_. Two particles \(a\) and \(b\) can be fused to \(c\), which we write as \(a\times b=b\times a=c\). However, in non-abelian anyon theories, it is possible to find different outcomes for the fusion of the same pair of particles; in this case, we write: \[a\times b=b\times a=\sum_{c}N_{ab}^{c}c, \tag{1}\] where \(N_{ab}^{c}\) are the fusion multiplicities. They indicate the number of different ways in which \(a\) and \(b\) can fuse to \(c\). There is a trivial anyon \(e\), the vacuum or the identity. This particle satisfies the property \(N_{ae}^{b}=\delta_{ab}\). Every particle \(a\) also has its own antiparticle \(\bar{a}\) such that \(N_{ab}^{e}=\delta_{b\bar{a}}\). We can write an orthonormal complete set of states for \(n\) anyons as a fusion tree as in Figure 1. If any of the multiplicities \(N_{a_{i-1}a_{i}}^{a_{i+1}}\) along the tree vanishes, then that fusion is not allowed, and the diagram is zero. The corresponding bras \(\langle\psi_{i}|\) are obtained by taking the Hermitian conjugate, which is equivalent to flipping the diagram along a horizontal axis. Figure 1: Basis \(|\psi_{i}\rangle\) and its conjugate \(\langle\psi_{i}|\) of an \(n\)-anyon system. All vertices are allowed fusion channels. When utilizing the diagrammatic algebra, we will always set the time direction vertically and upwards and assume that all particles move forward in time. We can interpret a particle going back in time as its antiparticle moving forward in time. One can use the basis states to build arbitrary operators in the same way we do when using kets and bras. A diagram with lines pointing both upwards and downwards can be interpreted as an operator that takes as input the particle lines coming in from the bottom and gives as output the lines coming out of the top. The lines coming in from the bottom are the bra part of the operator, and the lines pointing out are the ket part. For instance, we can write a general operator as in Figure 2. An important thing to keep in mind is that operators will only be physical observables when the total charge is conserved. In Figure 2, this would mean that \(a_{2m-1}=b_{2n-1}\) and thus the diagram would be connected. This is a direct consequence of the strong superselection rule that exists in anyonic systems [26]. It is not possible to implement an operator that changes the overall topological charge of the system. We will be particularly interested in one family of non-abelian anyons: Fibonacci anyons [12]. The Fibonacci model is perhaps the simplest non-abelian example and has only two particle types, the vacuum or trivial anyon \(e\) and the Fibonacci anyon \(\tau\). The only non-trivial fusion rule of this theory reads \[\tau\times\tau=e+\tau. \tag{2}\] One can convert between bases associated with different fusion trees by using the \(F\)-matrices shown in Figure 3. In the Fibonacci theory, the only non-trivial \(F\)-matrix is \([F_{\tau}^{\tau\tau\tau}]=\begin{pmatrix}\phi^{-1}&\phi^{-1/2}\\ \phi^{-1/2}&-\phi^{-1}\end{pmatrix}\), where \(\phi\) is the golden ratio. Further, recall that braiding two anyons results in a phase factor that depends on their overall fusion charge and is specified by the anyon theory.
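For concreteness, the defining data of the Fibonacci theory is small enough to write down explicitly. The snippet below is our own illustration (not code from the paper): it records the fusion multiplicities, the non-trivial \(F\)-matrix above and the two braiding phases stated next, and checks that the dimension of the fusion space of \(n\) \(\tau\)-anyons with trivial total charge grows like the Fibonacci numbers.

```python
import numpy as np

# Particle types: 0 = vacuum e, 1 = Fibonacci anyon tau.
# Fusion multiplicities N[a, b, c] = N_{ab}^c.
N = np.zeros((2, 2, 2), dtype=int)
N[0, 0, 0] = N[0, 1, 1] = N[1, 0, 1] = 1      # fusion with the vacuum is trivial
N[1, 1, 0] = N[1, 1, 1] = 1                   # tau x tau = e + tau

phi = (1 + np.sqrt(5)) / 2                    # golden ratio
F_tau = np.array([[1 / phi, 1 / np.sqrt(phi)],
                  [1 / np.sqrt(phi), -1 / phi]])        # [F_tau^{tau tau tau}]
R = {'e': np.exp(-4j * np.pi / 5), 'tau': np.exp(3j * np.pi / 5)}   # braiding phases

assert np.allclose(F_tau @ F_tau, np.eye(2))  # F is real, symmetric and involutory

def fusion_space_dim(n, total=0):
    """Dimension of the space of n tau-anyons fusing to `total` (0 = e, 1 = tau)."""
    v = np.zeros(2, dtype=int)
    v[1] = 1                                  # start with a single tau
    for _ in range(n - 1):
        v = np.array([sum(v[a] * N[a, 1, c] for a in range(2)) for c in range(2)])
    return v[total]

print([fusion_space_dim(n) for n in range(2, 8)])   # [1, 1, 2, 3, 5, 8]: Fibonacci growth
```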
For Fibonacci anyons, there are two non-trivial braiding diagrams: (i) when two \(\tau\) anyons fusing to the identity are braided and (ii) when two \(\tau\) anyons fusing to \(\tau\) are braided. The phases are \(R_{e}^{\tau\tau}=e^{-4\pi i/5}\) and \(R_{\tau}^{\tau\tau}=e^{3\pi i/5}\), respectively. Anyonic annihilation operators--To define anyonic annihilation operators, we first need a notion of _modes_ that can be excited [27; 28; 29]. Usually, these modes refer either to momentum in quantum field theory or to lattice sites in the usual Ising chain models. For simplicity, we prefer to keep the number of modes finite and use the notion of mode as a lattice site. We want to identify a simply connected sub-region with boundary of our 2D space as a single mode where the different anyon types can be excited. We consider that the complete system consists of a finite number \(N\) of such regions glued along their boundaries; see Figure 4. Therefore, we use a finite 2D lattice populated by the different anyon particle types of the theory. To consider the annihilation operators, we want to identify the modes as the elementary subsystems in the theory. We want to understand how to map the subsystem structure at the level of simply connected regions in the 2D manifold to planar diagrams. Notice that there are different ways to glue the boundaries between the regions to compose them into larger simply connected regions. Even given a canonical ordering of the modes to represent them in planar diagrams, for each gluing scheme there is an associated planar representation (see Figure 5). These different planar representations correspond to different partitions of the system given by the planar canonical basis of the anyon theory. As we said, to define annihilation operators it is helpful to understand each mode as an elementary subsystem. We do this for two reasons. First, we would like to use the conceptualisation of mode subsystems used in the literature on fermionic and bosonic annihilation operators [28; 29]. Secondly, if we consider the mode as a subsystem, we can find candidates for annihilation operators within the operators left invariant by transformations local to the system of the rest of the modes. Even though there is not a clear notion of a general local operator in anyonic systems, there is the notion of allowed local unitaries, as shown in Figure 6. If our system consists of modes \(M=\{1,\ldots,m+1\}\), we can say that a candidate local operator in mode \(i\in M\) is an operator \(\hat{O}\) such that it is invariant under the action of all local unitaries in the modes \(M\backslash\{i\}\). In equation form that reads as: \(\hat{O}\) is a candidate local operator on mode \(i\in M\) if and only if \[\hat{U}^{\dagger}_{M\backslash\{i\}}\cdot\hat{O}\cdot\hat{U}_{M\backslash\{i\}}=\hat{O} \tag{3}\] for every \(\hat{U}_{M\backslash\{i\}}\) that is an allowed local unitary in the modes \(M\backslash\{i\}\). This is a natural property that a local operator must satisfy. Figure 4: Partition of the plane in different subregions that are associated with modes. Notice that the union of regions \(1,2,3,4,6,7,8\) and \(9\) is not a simply-connected region. We would not consider it a valid subsystem. Figure 3: The \(F\)-matrix defines a change of basis between the two different basis states that define the subspace spanned by the fusion of the anyons \(a\), \(b\) and \(c\) into \(e\). Figure 2: General operator with \(n\) inputs and \(m\) outputs. The coefficients are arbitrary.
If one works within the Heisenberg picture of quantum mechanics, it is clear that indeed when evolving a local operator in \(A\) with a physically allowed local unitary in \(B\), then the local operator in \(A\) must be left invariant. Moreover, it is not difficult to check that the conditions in Figure 3 give that the collection of all candidate local operators in \(i\) form an algebra under the usual sum and operator multiplication, and \(\mathbb{C}\) as scalars. So, we can say that we have an abstract definition of the algebra of candidate local operators in mode \(i\). Using the diagrammatic approach for anyons, we can characterise the allowed local unitaries and explore the candidate local operators for any given mode. In Figure 6 we show how an allowed local unitary looks in diagrammatic form. We solve equation 3 that defines candidate local operators using the diagrammatic formalism, and we find the general form of a candidate local operator on an anyonic mode. For simplicity, we show it here for the first mode. We express the general form of a candidate local operator on mode 1 in terms of linear combinations of the elements of a canonical basis: \[O_{1}=\sum_{\begin{subarray}{c}a,a^{\prime},b_{0}\\ d=a\times b_{0},d^{\prime}=a^{\prime}\times b_{0}\end{subarray}}c_{a,a^{\prime },b_{0},d,d^{\prime}}\ A^{aa^{\prime}b_{0}}_{dd^{\prime}} \tag{4}\] where \(c_{a,a^{\prime},b_{0},d,d^{\prime}}\in\mathbb{C}\) and the canonical basis of the candidate local operator algebra for mode 1 given by the terms \(A^{aa^{\prime}b_{0}}_{dd^{\prime}}\) can be seen in Figure 7 as planar diagrams. Using these basis elements, we want to identify components where the first mode is transformed to the vacuum, as an annihilation operator component would. Only one anyon type can be in the same mode in anyon diagrams. Therefore, the components of the anyonic annihilation operators should consist only of terms that send anyon particle types to the vacuum and not any other particle type. In Figure 8, one can observe that if we fix the particle type \(a\neq e\) in mode 1 bra and the vacuum \(e\) in the mode 1 ket, the basis components then depend only on the global charge of the rest of the system \(b_{0}\) and the term \(a\times b_{0}\), since \(e\) is an abelian particle and then \(e\times b_{0}\) is always \(b_{0}\). Thus, we realise that the number of annihilation elements that a particle type \(a\) has associated in a mode is the number of fusion channels that that particle type has associated with it. This result comes directly from the explicit dependency of having the different annihilating components from \(a^{\prime}\times b_{0}\), being \(b_{0}\) any particle type. Thus, all fusion channels of \(a^{\prime}\) will have an associated annihilating element. Figure 5: Different planar representations for different compositions of regions. Sub-figure (a1) indicates that we are first fusing anyon 1 (blue) with anyon 4 (green) and anyon 2 (red) with anyon 3 (orange). In (a2) we express such system in the diagrammatic form, that is equivalent to the planar representation in sub-figure (a3). In the right column, we have the same but when we fuse anyon 1 with anyon 2 and anyon 3 with anyon 4. 
Figure 7: Basis elements of the local operator algebra for the first mode. For notation, we label each of these annihilation elements of the canonical basis \(a_{1}^{b_{0},a\times b_{0}}=A_{b_{0}a\times b_{0}}^{ab_{0}}\) (where \(1\) expresses the fact that they annihilate on the first mode, \(b_{0}\) and \(a\times b_{0}\) specify the fusion channel and annihilating term, and \(a\) is the particle type being annihilated). In all the above and the following expressions, one needs to keep in mind that \(a\neq e\). We will refer to the Hermitian conjugates of such annihilating elements as the creating elements. By direct calculation, we find two very exciting results. The first is that the annihilating and creating elements of mode \(j\) are generators of the candidate local algebra of mode \(j\). The second is that the collection of all annihilating and creating elements generates the total operator algebra. Let us remark on this crucial point. We have seen that the annihilating elements of Figure 8, together with their adjoints, are generators of the candidate local operator algebra. Having obtained these results, we now naturally wonder if the annihilation operators we are looking for are these annihilating elements. We think they are not. However, we believe that the annihilation operators have to be concrete linear combinations of these annihilating elements. In other words, we find that the annihilating elements are components of the annihilation operators, and now we have to decide which is the right way to combine them. We gain these insights by analysing the annihilation operators of a spinless fermionic theory on a finite lattice [30]. Let us fix the simple setting of having two spinless fermionic modes. We have a vacuum \(|\Omega\rangle\) and two annihilation operators \(f_{1},f_{2}\) such that the anticommutation relations hold: \[\{f_{i},f_{j}\}=0\quad\{f_{i},f_{j}^{\dagger}\}=\delta_{ij} \tag{5}\] We can represent this theory as an abelian anyon theory with two particle types: a fermion \(\psi\) and the vacuum \(e\). It is straightforward to see that if we associate each annihilating element with an annihilation operator, we find that instead of a single annihilation operator \(f_{i}\) per mode, we have two annihilation operators per mode: \(\psi_{i}^{e,\psi}\) and \(\psi_{i}^{\psi,e}\) (see Figure 8, replacing \(a=\psi\) and summing over the two particle types \(e\) and \(\psi\)). Therefore, this assignment cannot be the correct one. However, we observe that \[f_{1}=\psi_{1}^{e,\psi}+\psi_{1}^{\psi,e}\qquad f_{2}=\psi_{2}^{e,\psi}-\psi_{2}^{\psi,e} \tag{6}\] These relations imply that the fermionic annihilation operators are linear combinations of the annihilation components. In the following lines, we derive which exact linear combinations have to be taken to get the annihilation operators. Concretely, we propose that the annihilation operators are operators of the form: \[\alpha_{k}^{(j)}=\sum_{b_{0},c_{0}=a\times b_{0}}C_{b_{0},c_{0},k}^{(j)}\ a_{k}^{b_{0},c_{0}} \tag{7}\] where \(C_{b_{0},c_{0},k}^{(j)}\in\mathbb{C}\). The symbol \(\alpha\) indicates that this is an annihilation operator for the particle type \(a\). The label \((j)\) reflects the fact that we may need more than one annihilation operator per particle type. To constrain the coefficients \(C_{b_{0},c_{0},k}^{(j)}\), we consider three conditions that the annihilation operators \(\alpha_{k}^{(j)}\) need to satisfy.
The first is that \(\{\alpha_{k_{1}}^{(j)},\ldots,\alpha_{k_{m}}^{(j)}\}_{j,a}\) and their adjoints generate the local algebra of observables in the modes \(k_{1},\ldots,k_{m}\). Second, we require that to obtain \(\alpha_{k}^{(j)}\) we only need to know \(\alpha_{1}^{(j)}\) and braid our way through to \(k\). This requirement comes from the intuition that annihilating a particle in \(k\) should be equivalent to bringing that particle to \(1\), annihilating it there, and then undoing the path we have taken. We show in Figure 9 that the concrete path we take is the chain of simple braids. One could choose different paths, giving different annihilation operators; it would be interesting to study the relationship between the definition of subsystems and partial-tracing procedures and how this path has to be taken. However, this is beyond the scope of this paper. Figure 8: Annihilating elements of the basis of local operators for mode \(1\). Note that we express the identity anyon with a dashed line. Figure 9: Annihilating elements of the basis of local operators for the \(k\)th mode. The braiding condition imposes the following recursive relation to constrain the coefficients \(C^{(j)}_{b_{0},c_{0},k}\) \[\alpha^{(j)}_{k}=R_{k-1k}\cdot\alpha^{(j)}_{k-1}\cdot R^{\dagger}_{k-1k} \tag{8}\] In the fermionic example that we posed in equation 6, we see exactly how the factor \(-1\) appears in \(f_{2}\) due to the braid operation acting non-trivially on the 'bra' of \(\psi^{\psi,e}_{1}\). The third requirement is that for every \(b_{0},j,k\) there is at least one term \(C^{(j)}_{b_{0},c_{0},k}\) that is non-zero. This is to ensure that the annihilation operators \(\alpha^{(j)}_{k}\) have support on any value of the total charge of the modes other than \(k\). This explicitly rules out situations where the annihilating terms themselves could be considered annihilation operators, which would be redundant. We have found a solution to these three constraints. Thus we have found a way to define annihilation operators in anyonic systems. In the solution we propose, the coefficients \(C^{(j)}_{b_{0},c_{0},1}\in\mathbb{C}\) are set to be either \(0\) or \(1\). However, one could modify the presented solution by assigning different non-zero factors to the terms that are \(1\). The number of annihilating elements in a mode for the anyon type \(a\) is \(n_{a}=\sum_{b,c=1}^{n}N_{ab}^{c}\). Following our general construction, the number of annihilation operators associated with this anyon type \(a\) for a given mode will be \(J=n_{a}-n+1\), where \(n\) is the total number of particle types in the theory. Notice that with this scheme we find that for an abelian anyon particle type \(a\), there is a single annihilation operator, since for abelian anyon types \(n_{a}=n\) because there are no multiplicities in the fusion channels associated with \(a\). We show how to construct the \(J\) annihilation operators for any anyon theory in Appendix A. To make the letter concise, we show here the construction for the simplest non-abelian case, Fibonacci anyons. We order the Fibonacci particle types as \(e,\tau\), and we label by \(c_{b_{0},j}\) the \(j\)'th particle type among the allowed fusion channels of \(\tau\times b_{0}\). For the first annihilation operator of \(\tau\), we set the terms \(C^{(0)}_{b_{0},c_{b_{0},1},1}=1\) and the rest of the terms \(C^{(0)}_{b_{0},c_{b_{0},j},1}\) to vanish.
This implies that \(\alpha^{(0)}_{1}\) is given by the coefficients being \(C^{(0)}_{e,\tau,1}=1\), \(C^{(0)}_{\tau,e,1}=1\), and \(C^{(0)}_{\tau,\tau,1}=0\). To define \(\alpha^{(1)}_{1}\), we look at the first \(b_{0}\) with more than one compatible \(c_{0}\). In this case, this is \(b_{0}=\tau\). Now all coefficients remain the same as in \(\alpha^{(0)}_{1}\) except for setting \(C^{(1)}_{\tau,c_{b_{0},2},1}=1\) and \(C^{(1)}_{\tau,c_{b_{0},1},1}=0\). Implying that \(\alpha^{(1)}_{1}\) is given by the coefficients being \(C^{(1)}_{e,\tau,1}=1\), \(C^{(1)}_{\tau,e,1}=0\), and \(C^{(1)}_{\tau,\tau,1}=1\). We would follow the construction to find \(\alpha^{(2)}_{1}\) by applying the same changes but with \(c_{\tau,3}\). However, there is no such valid fusion channel. Then we would proceed to the next \(b_{0}\) following the ordering for which \(c_{b_{0},2}\) exists, and follow the same procedure. In the Fibonacci case, there is no next \(b_{0}\). Thus the construction has been completed. We obtain for the Fibonacci case that \(\tau\) has \(J=2\), annihilation operators. See Figure 10 for a diagrammatic representation of the Fibonacci annihilation operators for a three-anyon Fibonacci space. Under this general construction that can be found in Appendix A and using the simple algebraic identities of the annihilation elements that conform to the annihilation operators, it is straightforward to check that the collection of all annihilation and creation operators for all modes can generate the global algebra of operators and, henceforth, of observables in particular. Using direct computation, it is also straightforward to check that the annihilation and creation operators for a set of modes generate the local algebra of observables for such a set of modes. The general proof can be found in Appendix B. We provide specific examples in the next section expressing Fibonacci observables in terms of the creation and annihilation operators. ExamplesWe now exemplify the general results focusing on Fibonacci anyons. Let us start by looking at the generators of the annihilation algebra on three Fibonacci anyons. In Figure 10, we show the three annihilating elements for a Fibonacci anyon \(\tau\) in the left lattice site 1 and central lattice site 2. Note that the operators acting on the site 2, \(\tau_{2}^{b_{0},c_{0}}\), can be obtained from \(\tau_{1}^{b_{0},c_{0}}\) by braiding the anyons on 1 and 2. We express all the operators in the canonical basis by using the F-matrices \(F_{d}^{abc}\) and braiding factors \(R_{c}^{ab}\) presented at the start of this publication. In Fibonacci anyons, we have to define two annihilation operators \(\alpha_{k}^{(1)}\),\(\alpha_{k}^{(0)}\) for the Fibonacci \(\tau\) particle type. Both operators use the term \(\tau_{k}^{e,\tau}\). To have better algebraic properties, we choose to add a factor of \(\frac{1}{\sqrt{2}}\) in front of such terms that will take into account this repetition. We call these two unnormalised annihilation operators: \(\alpha_{k}\) and \(\beta_{k}\), and we will use them throughout the rest of the text. \[\alpha_{k}=\frac{1}{\sqrt{2}}\tau_{k}^{e,\tau}+\tau_{k}^{\tau,e},\ \ \ \ \ \beta_{k}=\frac{1}{\sqrt{2}}\tau_{k}^{e,\tau}+\tau_{k}^{\tau,\tau}. \tag{9}\] In Figure 11, we see how some local observables in modes 1 & 2 can be expressed in terms of the local creation and annihilation operators of such modes. A complete list of all observable terms can be found in Appendix C. 
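Two of the statements above are small enough to check numerically; both sketches below are our own illustrations, not code from the paper. First, the two-mode fermionic warm-up of (5) and (6): assuming (our reading of Figure 8) that each annihilating element \(\psi_{k}^{b_{0},\cdot}\) acts as the bare single-mode annihilator on mode \(k\) dressed with a projector onto the charge \(b_{0}\) of the other mode, the combinations in (6) indeed satisfy the canonical anticommutation relations:

```python
import numpy as np

a = np.array([[0, 1], [0, 0]])              # annihilates the occupied state of one mode
P0, P1 = np.diag([1, 0]), np.diag([0, 1])   # projectors onto charge e / psi of the other mode

# Annihilating elements (first tensor factor = mode 1, second = mode 2).
psi1_e, psi1_psi = np.kron(a, P0), np.kron(a, P1)
psi2_e, psi2_psi = np.kron(P0, a), np.kron(P1, a)

f1 = psi1_e + psi1_psi                      # Eq. (6)
f2 = psi2_e - psi2_psi

anti = lambda A, B: A @ B + B @ A           # anticommutator
assert np.allclose(anti(f1, f2), 0)
assert np.allclose(anti(f1, f2.conj().T), 0)
assert np.allclose(anti(f1, f1.conj().T), np.eye(4))   # Eq. (5)
assert np.allclose(anti(f2, f2.conj().T), np.eye(4))
```

Second, the coefficient choice for the operators \(\alpha^{(j)}\) can be phrased as a small algorithm over the fusion table; the sketch below is our own rendering of the construction (with the Fibonacci data hard-coded) and reproduces the counting \(J=n_{a}-n+1\) and the two operators \(\alpha^{(0)}\), \(\alpha^{(1)}\) described above:

```python
def annihilation_operators(particle, types, fusion):
    """alpha^{(0)} picks the first allowed fusion channel c_{b0,1} for every b0;
    every further operator switches exactly one b0 to its next allowed channel."""
    channels = {b0: fusion[(particle, b0)] for b0 in types}
    base = {b0: chans[0] for b0, chans in channels.items()}   # coefficients of alpha^{(0)}
    ops = [dict(base)]
    for b0 in types:
        for extra in channels[b0][1:]:
            op = dict(base)
            op[b0] = extra
            ops.append(op)
    return ops                                                # len(ops) == n_a - n + 1

fusion = {('tau', 'e'): ['tau'], ('tau', 'tau'): ['e', 'tau']}
ops = annihilation_operators('tau', ['e', 'tau'], fusion)
# ops[0] == {'e': 'tau', 'tau': 'e'}   -> alpha^{(0)} combines tau^{e,tau} and tau^{tau,e}
# ops[1] == {'e': 'tau', 'tau': 'tau'} -> alpha^{(1)} combines tau^{e,tau} and tau^{tau,tau}
assert len(ops) == 2   # J = n_a - n + 1 = 3 - 2 + 1
```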
Anyonic Hubbard Hamiltonian--We want to put the annihilation operators that we have defined to use. A straightforward application is to express Hamiltonians in terms of annihilation operators. Recent work has studied the properties of Ising-like Fibonacci Hamiltonians [31]. By expressing Hamiltonians using annihilation operators, we hope, first, to showcase the similarities and differences between Fibonacci anyons and other particle types such as fermions and bosons; and second, to provide tools for the simulation of such Hamiltonian systems, allowing the application of tensor-network methods [32], the exploration of mappings for applying the Bethe ansatz [33], and other methods already used in the 1+1 D case, where the notion of annihilation operators is exploited [25]. We focus on the Hubbard Hamiltonian described in [31]. We have a \(2\times N\) square lattice with the ordering shown in Figure 12. We discuss some consequences and issues that arise from this choice further in the text. The Hamiltonian has two contributions: first, a hopping contribution between nearest neighbours, where a \(\tau\)-anyon can jump to a neighbouring site if it is unoccupied; and second, a self-energy term for when there is a \(\tau\) on some site. For simplicity and conciseness, we take the same coupling strength for longitudinal and transverse hopping, \(t_{\perp}=t_{\parallel}=t\) [31]. \[\hat{H}=-t\sum_{i=1}^{N-1}\sum_{a_{i+1}\ldots a_{2N-i-1}}\big(\,\cdots\,\big)\;+\;\cdots \tag{10}\] where the omitted terms are the hopping and self-energy anyon diagrams of the original diagrammatic equation. This Hamiltonian can be expressed in terms of the unnormalised anyonic creation and annihilation operators we defined. We want to remark that there is nothing particular about the Hamiltonian in Figure 13 that makes it expressible in terms of the creation and annihilation operators. Any physically allowed Hamiltonian can be expressed in terms of the creation and annihilation operators we have defined. It is a matter of convenience to use the unnormalised annihilation operators. These can be described in terms of the original normalised annihilation operators as \(\alpha_{j}=\frac{1}{\sqrt{2}}\alpha_{j}^{(1)}{\alpha_{j}^{(0)}}^{\dagger}\alpha_{j}^{(0)}+\alpha_{j}^{(0)}-\alpha_{j}^{(1)}{\alpha_{j}^{(0)}}^{\dagger}\alpha_{j}^{(0)}\) and \(\beta_{j}=\frac{1}{\sqrt{2}}\alpha_{j}^{(1)}{\alpha_{j}^{(0)}}^{\dagger}\alpha_{j}^{(0)}+\alpha_{j}^{(1)}-\alpha_{j}^{(1)}{\alpha_{j}^{(0)}}^{\dagger}\alpha_{j}^{(0)}\). Nevertheless, there is a subtlety. One needs to pick specific lattices and orderings in order to express the desired notion of locality, as we comment in Figure 5. That is because we have defined the annihilation operator at a given site as the annihilation operator at the first site swapped behind the other sites. However, we could have defined the annihilation operator at the \(k\)th site as the annihilation operator at the first site swapped in front of the other lattice sites. The resulting two annihilation operators would generate inequivalent spaces because they correspond to two inequivalent notions of locality; they are associated with two different subsystems. In order to express the correct notion of nearest-neighbour locality in terms of the annihilation operators we defined (behind) alone, one needs to pick the ordering such that each connection happens behind all the modes lying between the two connected ones.
We want to explore this further in future works and be able to prove the conjecture that for any lattice, one can find an ordering such that all the nearest neighbour links can be made to happen either completely behind the in-between modes or completely in front. Thus, making any nearest neighbour Hamiltonian expressable in terms of creation and annihilation operators of the neighbouring terms alone. We have strong indications that such a claim is true. However, for length purposes, we prefer to explain it in a different piece fully. Discussion--One may wonder if the general construction of the proposed annihilation operators applies well to the abelian case. Since for abelian particles, the fusion is deterministic and there is a single possible fusion channel. Applying our method, we recover a single annihilation operator per particle type and lattice site as one would expect. One can stop and think about how remarkable it is that a single annihilation operator per mode can define bosonic and fermionic systems. Why can a single mathematical object describe the local behaviour of fundamental particles? We have seen that the critical property that allows us to have such a description is their abelian nature. For non-abelian particles, we observe that the number of annihilation operators per lattice site is \(J=n_{a}-n+1\), where \(n_{a}\) is the total number of allowed fusion channels associated with that particle type. In the Fibonacci case, for the \(\tau\) particle \(n_{a}=3\), we have \(\tau\times e=\tau\) and \(\tau\times\tau=e+\tau\), and \(n=2\) because there are two particle types in Fibonacci anyons, therefore \(J=2\). We want to notice that the construction is general for any non-abelian anyon theory. We have exemplified it with Fibonacci anyons to be concise. Still, annihilation operators can be defined for Ising anyons [10] or any other non-abelian anyon theory one would like to work with. For future work, we would like to explore the connections between the annihilation operators defined using this method for Ising anyons with the annihilation operators one has for Majorana fermions. This article presents annihilation operators in the diagrammatic formalism for non-abelian 2D anyons. We want to describe the algebraic properties of the anyonic annihilation and creation operators in commutation-like relations to have a complete algebraic characterization of the anyonic theory and be able to perform manipulations at the annihilation operator level without computing at the diagrammatic level. We have the suspicion that a complete characterization at the algebraic level might be very challenging. We believe that the algebraic rule for determining whether a combination of creation and annihilation operators is superselection-respecting or not might be rather cumbersome. See Appendix D for some known algebraic relations of the Fibonacci creation and annihilation operators. If we refer to the fusion tree where all the components are the identity particle type as \(|0\rangle\), we see that \(\alpha_{k}^{(j)}|0\rangle=0\) for all \(j\) and \(k\). It is straightforward to see that \(|0\rangle\) is unique under this property. We can now express any state of the canonical basis as a well-ordered sequence of creation operators acting on \(|0\rangle\). Concrete expressions for three-mode Fibonacci anyons can be found in Appendix E. One could try to use these expressions to find suitable Jordan-Wigner mappings for 2+1 D anyons. 
Furthermore, exploring Bogoliubov-like transformations for the non-abelian anyonic annihilation operators would be interesting. It would allow one to define a different notion of anyonic mode, not tied to the position latticing of the system we introduced. We can now describe general anyonic Hamiltonians in \(2\times N\) lattices, by using the behind-only annihilation operators. The ability to express the Hubbard-like anyonic Hamiltonian in terms of local annihilation operators may have implications in the simulation of the model. Until recently, the community was lacking good numerical techniques to simulate non-Abelian anyons systems. The main difficulty comes from the lack of a tensor product structure and the growth of the Hilbert space with the number of particles. There have been some recent efforts to generalize the tensor network formalism to lattice systems of anyons [34; 35; 36]. However, this work defines the anyonic local operators that constitute the Hamiltonian with their crude representation in the diagrammatic formalism. We expect that having access to the local an nihilation operators of an anyonic theory will facilitate the numerical simulation in some cases. In this way, we can exploit the parallelism between the anyonic Hubbard Hamiltonian and its bosonic or fermionic counterpart for instance. We note that the Hamiltonian in Equation 10 has terms with long-range interactions. These highly long-range terms (with respect to the ordering) can make the simulations time-inefficient. The ordering is deliberately chosen to be the one in Figure 12 so we just need the behind-only annihilation operators. Because we have defined them with the braiding going in one direction, we are restricting ourselves to a not-that-simple expression of the braiding operator in the other direction. Therefore, we choose the lattice ordering so we do not encounter crossings of anyon lines in this direction. Of course, we can think of a more natural (and short-range) ordering, e.g ladder ordering, but then our Hamiltonian terms will contain products of several operators local in not only the nearest-neighbour interacting sites. However, this non-locality can be avoided by defining two more sets of creation and annihilation operators analogous to the ones defined in this paper but changing the direction of the braiding. In conclusion, if we want to avoid long-range terms (with respect to the ordering) in our Hamiltonian we need to sacrifice the simplicity of the current expression. We think that the study of the similarities and differences between these three approaches is a promising future direction to follow. We hope that having found expressions for the 2+1 D non-abelian anyon creation and annihilation operators will advance the study and understanding of this topic, especially by allowing us to apply known techniques to the study of topological quantum computing and the experimental detection of such particles described. Acknowledgements--We would like to thank Steve Simon for the course on Topological Quantum Matter at the University of Oxford and the exquisitely well-written course Lecture Notes. Lucia Vilchez-Estevez thanks the Clarendon Foundation for providing financial support during the development of this project.
2303.05976
On the coherence of one-relator groups and their group algebras
We prove that one-relator groups are coherent, solving a well-known problem of Gilbert Baumslag. Our proof strategy is readily applicable to many classes of groups of cohomological dimension two. We show that fundamental groups of two-complexes with non-positive immersions are homologically coherent, that groups with staggered presentations and many Coxeter groups are coherent, and that group algebras over fields of characteristic zero of groups with reducible presentations without proper powers are coherent.
Andrei Jaikin-Zapirain, Marco Linton
2023-03-10T15:15:58Z
http://arxiv.org/abs/2303.05976v3
# On the coherence of one-relator groups and their group algebras ###### Abstract. We prove that one-relator groups are coherent, solving a well-known problem of Gilbert Baumslag. Our proof strategy is readily applicable to many classes of groups of cohomological dimension two. Indeed we also show that fundamental groups of two-complexes with non-positive immersions are homologically coherent, that groups with staggered presentations and many Coxeter groups are coherent and we show that group algebras over fields of characteristic zero of groups with reducible presentations without proper powers are coherent. ## 1. Introduction A group is **coherent** if all of its finitely generated subgroups are finitely presented. A notorious conjecture of Baumslag's [1] predicts that all one-relator groups are coherent. The first non-trivial result on the coherence of one-relator groups, or indeed, coherence of groups in general, is due to Karrass-Solitar [11, 12] who showed that all cyclically and conjugacy pinched one-relator groups are coherent. Renewed interest in Baumslag's conjecture in the last two decades has galvanised much of the recent literature on coherence, as can be seen from Wise's excellent survey [13]. A notion that plays a particularly important role in recent developments is that of **non-positive immersions**. Wise introduced this notion in [13] and laid out a program to solve Baumslag's conjecture. The strategy involved showing that the fundamental group of a two-complex with non-positive immersions is coherent and establishing that the presentation complex of a torsion-free one-relator group has non-positive immersions. Although the former remains open, many groups known (or conjectured) to be coherent have been shown to fall under the rubric of non-positive immersion [13]. The latter step was eventually achieved by Helfer-Wise [14] and, independently, Louder-Wilton [15]. Following on from this progress, Louder-Wilton [15] and, independently, Wise [13] showed that all one-relator groups with torsion are coherent. Both proofs relied on the fact that a one-relator group with torsion has a finite index subgroup that is the fundamental group of a two-complex with a stronger version of non-positive immersions. Another, related, strengthening of non-positive immersions is constituted by negative immersions, as defined in [15]. Louder-Wilton showed in [15] that the presentation complex of a one-relator group \(G\) has negative immersions if and only if \(G\) is two-free and then later confirmed Baumslag's conjecture for all such one-relator groups in [15]. The reader should also consult Wilton [16] for further related properties that also extend beyond one-relator groups. In a different direction, one-relator groups were shown to be generically coherent by work of Sapir-Spakulova [21] and Kielak-Kropholler-Wilkes [17]. These results were achieved by showing that generic one-relator groups are virtually ascending HNN-extensions of free groups and then appealing to a result of Feighn-Handel [13]. In a similar vein, Kielak and the second author showed in [14] that hyperbolic and virtually compact special one-relator groups are virtually free-by-cyclic and then also appealed to the work of Feighn-Handel to establish their coherence. 
Although significant work has gone into Baumslag's conjecture, literature on the coherence of group rings of one-relator groups is non-existent; a ring \(R\) is (left) **coherent** if every finitely generated (left) ideal is finitely presented as a (left) \(R\)-module. There seems to be a relation between the coherence of \(G\) and \(\mathbb{Z}[G]\). They are equivalent for elementary amenable groups [15] (see also [11]) and they are both commensurability invariants. However, in general there is no known implication between the two properties. In this paper we confirm Baumsalg's conjecture for all one-relator groups and their group rings. **Theorem 1.1**.: _If \(G\) is a one-relator group and \(K\) is a field of characteristic zero, then_ 1. \(G\) _is coherent._ 2. \(K[G]\) _is coherent._ We now explain the steps that go into the proof. ### Coherence of groups A third notion of coherence plays an important role in our proof of Theorem 1.1; a group \(G\) is **homologically coherent** if every finitely generated subgroup of \(G\) is of type \(\operatorname{FP}_{2}\left(\mathbb{Z}\right)\). Both coherence of \(G\) and \(\mathbb{Z}[G]\) imply homological coherence of \(G\). The first groups of type \(\operatorname{FP}_{2}(\mathbb{Z})\) that are not finitely presented were constructed by Bestvina-Brady [1]. Building on their ideas, many groups with various finiteness properties have been constructed by a plethora of different authors. Despite this, it is unknown whether there exists a group that is homologically coherent but not coherent. In this direction, Gersten [1] showed that for hyperbolic groups of cohomological dimension two, these two notions of coherence are equivalent. The proof of the coherence of one-relator groups consists of two main steps. First, we show that one-relator groups are homologically coherent. In fact, we prove homological coherence for the class of fundamental groups of two-complexes with non-positive immersions and for the class of locally indicable groups of cohomological dimension two with trivial second \(L^{2}\)-Betti number. One-relator groups are virtually in both classes. **Theorem 1.2**.: _Let \(G\) belong to one of the following families of groups:_ 1. _Fundamental groups of two-complexes with non-positive immersions._ 2. _Locally indicable groups of cohomological dimension two with trivial second_ \(L^{2}\)_-Betti number._ _Then \(G\) is homologically coherent._ This result completes a weaker version of Wise's proposed first step for proving Baumslag's conjecture, partially solving [22, Conjecture 12.11]. It turns out that this is sufficient. The second step in the proof is a kind of promotion of the property \(\operatorname{FP}_{2}\left(\mathbb{Z}\right)\) to the property of being finitely presented. **Theorem 1.3**.: _Let \(k\) be a commutative ring with \(1\neq 0\), let \(G\) be a group acting on a tree \(\mathcal{T}\) with coherent vertex stabilisers. If \(H\leq G\) is of type \(\operatorname{FP}_{2}(k)\), then \(H\) is finitely presented. In particular, if \(G\) is homologically coherent, then \(G\) is coherent._ Equipped with Theorem 1.2 and Theorem 1.3, we settle Baumslag's conjecture by appealing to the classic Magnus hierarchy. The combination of Theorem 1.2 and Theorem 1.3 provides a powerful tool for proving coherence of groups. Before discussing the proof of the second part of Theorem 1.1, we mention some further applications. 
Our method gives a new proof of the coherence of an ascending HNN-extension of a free group, originally due to Feighn-Handel [10]. The homological coherence in this case can be obtained as a consequence of the vanishing of the second \(L^{2}\)-Betti number of these groups and the coherence is then concluded from Theorem 1.3. Jankiewicz-Wise [11, Conjecture 4.7] put forward a conjectural picture of which Coxeter groups of virtual cohomological dimension two are coherent. Following the same strategy, we are also able to confirm one direction of this conjecture; that is, we show that all such Coxeter groups conjectured to be coherent are indeed coherent. See Section 6.2 for the relevant definitions. **Theorem 1.4**.: _Let \(G\) be a Coxeter group and suppose that \(\overline{\chi}(H)\leqslant 0\) for each Coxeter subgroup \(H\leq G\) generated by at least two elements. Then \(G\) is coherent._ A common generalisation of one-relator groups are groups with staggered presentations. Such groups appear naturally in the study of one-relator groups and one-relator products, see [12] and [13]. The reader is referred to Section 2.5 for the precise definition of such groups. By work of Helfer-Wise [11], a torsion-free group with a staggered presentation is the fundamental group of a two-complex with non-positive immersions. Hence, the exact same strategy for proving Theorem 1.1(1) also shows that torsion-free groups with staggered presentations are coherent. Groups with staggered presentations where each relator is a proper power are coherent by work of Wise [14, Theorem 5.7]. Finitely generated groups from both of these subclasses have finite index subgroups that are fundamental groups of two-complexes with non-positive immersions. However, when some of the relators are proper powers and some are not, this is no longer the case (see Example 6.6) and so we cannot directly use Theorem 1.2. Nevertheless, with a little extra work, we establish the coherence of groups with staggered presentations, solving a conjecture of Wise [14, Conjecture 14.10]. **Theorem 1.5**.: _Groups with staggered presentations are coherent._ ### Coherence of group algebras There are not many papers on coherent group rings. For example we could not find any reference for the coherence of \(K[G]\), where \(G\) is a free-by-cyclic group and \(K\) is a field; although this result can be extracted easily from the main result of [10]. In Corollary 3.3 we prove the coherence of group algebras of an ascending HNN-extension of a free group. Other examples of coherent group rings come from general results on coherent rings. If \(k\) is a commutative Noetherian ring, by [1], \(k[G]\) is coherent if \(G\) is the direct product of a free group and an abelian group and, by [1] and [12], \(k[G]\) is coherent if \(G\) belongs to the smallest family of groups containing all virtually polycyclic groups and closed under amalgamated products and HNN-extensions with virtually polycyclic edge subgroups. We would also like to mention a conjecture for graded algebras: every graded algebra with a single defining relation is graded coherent (see [10] for partial results). In view of Theorem 1.2 and by analogy with Wise's conjecture on the coherence of fundamental groups of two-complexes with non-positive immersions, we propose the following conjecture. **Conjecture 1**.: _Let \(K\) be a field and \(G\) the fundamental group of a two-complex with non-positive immersions. 
The group algebra \(K[G]\) is coherent._ In this paper we prove Conjecture 1 for the group algebras over not only one-relator groups but also for the fundamental groups of reducible two-complexes without proper powers and fields of characteristic \(0\). The notion of a reducible complex was first introduced by Howie in [11]. By [10, Corollaries 5.6 & 7.6] a reducible two-complex without proper powers has the non-positive immersion property. **Theorem 1.6**.: _Let \(G\) be the fundamental group of a finite reducible two-complex without proper powers and \(K\) a field of characteristic \(0\). Then the group algebra \(K[G]\) is coherent._ Howie proved in [11, Theorem 4.2] that the fundamental groups of reducible two-complexes without proper powers are locally indicable. Our proof of Theorem 1.6 uses the existence of the division ring \(\mathcal{D}_{K[G]}\) (see Subsection 2.3). It is the Hughes-free division \(K[G]\)-ring associated with locally indicable groups whose uniqueness was proved by Hughes in [10]. When \(K\) has characteristic \(0\), its existence follows from the solution of the strong Atiyah conjecture for locally indicable groups [13]. If we knew that this division ring existed for an arbitrary \(K\), then we would have the same result for such a \(K\) as well. A new property that we have discovered in the case of the fundamental group of a reducible two-complex without proper powers is the following result. This is the key ingredient of our proof of Theorem 1.6. **Theorem 1.7**.: _Let \(K\) be a field and let \(G\) be the fundamental group of a reducible two-complex without proper powers. Assume that \(\mathcal{D}_{K[G]}\) exists. Then as a right \(K[G]\)-module, \(\mathcal{D}_{K[G]}\) is of weak dimension at most \(1\)._ This property was known for free, limit and free-by-cyclic groups \(G\). It does not hold for locally indicable groups in general: there are locally indicable groups with non-trivial second \(L^{2}\)-Betti number (for example, if \(G\) is the direct product of two non-abelian free groups). ### The rank-1 Hanna Neumann conjecture Let \(d\left(G\right)\) denote the minimal number of generators of a group \(G\) and \[\overline{d}\left(G\right)=\max\{0,d\left(G\right)-1\}.\] The rank-1 Hanna Neumann conjecture proposed by Wise in [14] and proved independently by Helfer and Wise [10] and Louder and Wilton [15], is the following statement: if \(W\) is a maximal cyclic subgroup of a free group \(F\), then for any subgroup \(U\) of \(F\), \[\sum_{x\in W\backslash F/U}d\left(xUx^{-1}\cap W\right)\leq\left\{\begin{array}{ ll}\underline{d}\left(U\right)&\text{if }U\leq\langle\!\langle W\rangle\!\rangle\\ \overline{d}\left(U\right)&\text{if }U\nleq\langle\!\langle W\rangle\! \rangle\end{array}\right.\] It is natural to ask for what family of subgroups \(W\) of \(F\) the same conclusion holds. In Subsection 2.5 we will introduce _strictly reducible_ subgroups of a free group. Their relation with reducible two-complexes without proper powers is the following: if the presentation complex of \(\langle X|R\rangle\) is reducible without proper powers, then there exists a strictly reducible subgroup \(W\) of the free group \(F(X)\) such that \(\langle X|R\rangle\cong F(X)/\langle\!\langle W\rangle\!\rangle\). Using our approach we prove the following generalisation of the rank-1 Hanna Neumann conjecture. **Theorem 1.8**.: _Let \(F\) be a free group, \(U\) a subgroup of \(F\) and \(W\) a strictly reducible subgroup of \(F\). 
Then_ \[\sum_{x\in W\backslash F/U}d\left(xUx^{-1}\cap W\right)\leq\left\{\begin{array}{ll}d\left(U\right)&\text{if }U\leq\langle\!\langle W\rangle\!\rangle\\ \overline{d}\left(U\right)&\text{if }U\nleq\langle\!\langle W\rangle\!\rangle\end{array}\right.\] The paper is organized as follows. In Section 2 we explain the preliminary results used in the paper. In particular, we introduce the notions of complexes with non-positive immersions, staggered and reducible complexes and a variation, bireducible complexes. We also review the principal facts about Hughes-free division rings. In Section 3 we develop a tool to prove the coherence of group algebras and homological coherence. As applications we prove Theorem 1.2 and show in Theorem 3.3 that the group algebra of an ascending HNN-extension of a free group is coherent. Section 4 is devoted to the proof of Theorem 1.3. We finish that section with the proof of the first part of Theorem 1.1. In Section 5 we construct several flat modules which allow us to control different Tor functors. In particular, we prove Theorem 1.7 and, combining it with the results of Section 3, we obtain Theorem 1.6 and the second part of Theorem 1.1. Section 6 contains further applications. In particular, there we prove Theorems 1.4, 1.5 and 1.8. ## Acknowledgments The work of the first author is partially supported by the grant PID2020-114032GB-I00 of the Ministry of Science and Innovation of Spain and by the ICMAT Severo Ochoa project CEX2019-000904-S4. The work of the second author has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 850930). We would like to thank Sam Fisher for pointing us to Proposition 3.5, which simplified several of our previous arguments. We would like to thank Dawid Kielak for his valuable comments on the material in Section 4. We would also like to thank Jack Button for providing a simplification of the proof of Lemma 4.1, Thomas Delzant for helpful conversations on the work of Dicks-Dunwoody [1], and Sam Hughes for pointing out the work of Gaboriau [1]. ## 2. Preliminaries ### General notations All rings in this paper are associative and have the identity element. All ring homomorphisms send the identity to the identity. By an \(R\)**-ring** we understand a ring homomorphism \(\varphi:R\to S\). We will often refer to \(S\) as an \(R\)-ring and omit the homomorphism \(\varphi\) if \(\varphi\) is clear from context. Two \(R\)-rings \(\varphi_{1}:R\to S_{1}\) and \(\varphi_{2}:R\to S_{2}\) are said to be **isomorphic** if there exists a ring isomorphism \(\alpha:S_{1}\to S_{2}\) such that \(\alpha\circ\varphi_{1}=\varphi_{2}\). For a ring \(R\), a left \(R\)-module \(M\) and \(a\in R\), we put \[\operatorname{Ann}_{M}\left(a\right)=\{m\in M\colon a\cdot m=0\}.\] Let \(G\) be a group and \(k\) a commutative ring with \(1\). We denote by \(I_{k[G]}\) the augmentation ideal of \(k[G]\). If \(H\) is a subgroup of \(G\) we denote by \({}^{G}I_{k[H]}\) the left ideal of \(k[G]\) generated by \(I_{k[H]}\) and by \(I_{k[H]}^{G}\) the right ideal of \(k[G]\) generated by \(I_{k[H]}\). Recall the following standard result (see, for example, [1, Lemma 2.1]). **Lemma 2.1**.: _Let \(H\leq T\) be subgroups of a group \(G\) and \(k\) a commutative ring with 1. Then the following hold._ 1. _The canonical map_ \[I_{k[H]}\otimes_{k[H]}k[G]\to I_{k[H]}^{G}\] _sending_ \(a\otimes b\) _to_ \(ab\)_, is an isomorphism of right_ \(k[G]\)_-modules._ 2.
_The canonical map_ \(\left(I_{k[T]}/I_{k[H]}^{T}\right)\otimes_{k[T]}k[G]\to I_{k[T]}^{G}/I_{k[H]}^ {G}\)_, which sends_ \(\left(a+I_{k[H]}^{T}\right)\otimes b\) _to_ \(ab+I_{k[H]}^{G}\)_, is an isomorphism of_ \(k[G]\)_-modules._ Given two left \(k[G]\)-modules \(M\) and \(N\), \(M\otimes_{k}N\) becomes a left \(k[G]\)-module if we define \(g\left(m\otimes n\right)=gm\otimes gn\). ### Projective, global and weak dimensions Let \(R\) be a ring and \(M\) a left \(R\)-module. We say that \(M\) is of **projective dimension** at most \(n\) if \(M\) has a projective resolution of length \(n\). A ring \(R\) is of (left) **global dimension** at most \(n\), if every left \(R\)-module has a projective resolution of length \(n\). We will often use the following result about global dimensions of group algebras. **Proposition 2.2**.: _Let \(G\) be a group of cohomological dimension \(n\). Then for any field \(K\), the ring \(K[G]\) is of global dimension at most \(n\)._ Proof.: Since \(G\) is of cohomological dimension \(n\), there exists an exact sequence of left projective \(\mathbb{Z}[G]\)-modules \[0\to P_{n}\to\ldots\to P_{1}\to P_{0}\to\mathbb{Z}\to 0.\] Let \(M\) be a left \(K[G]\)-module. After applying \(\_\otimes_{\mathbb{Z}}M\) we obtain the following exact sequence of \(K[G]\)-modules. \[0\to P_{n}\otimes_{\mathbb{Z}}M\to\ldots\to P_{1}\otimes_{\mathbb{Z}}M\to P_{ 0}\otimes_{\mathbb{Z}}M\to M\to 0.\] Now observe that all modules \(P_{i}\otimes_{\mathbb{Z}}M\) are projective. Hence, the ring \(K[G]\) is of global dimension at most \(n\). The **weak dimension** of a right \(R\)-module \(M\) is the largest \(i\) for which there exists a left \(R\)-module \(N\) such that \(\operatorname{Tor}_{i}^{R}\left(M,N\right)\neq 0\). ### Hughes-free, Linnell and Dubrovin division rings Let \(G\) be a group. In this subsection we will assume that \(K\) is a field and to simplify the exposition we will only consider the group algebra \(K[G]\). However, we want to underline that all the definitions and results can be easily extended to the case of crossed products \(E*G\), where \(E\) is a division ring. Let \(\phi:K[G]\to\mathcal{D}\) be a division \(K[G]\)-ring. Let \(N\leq H\leq G\) be subgroups of \(G\). We denote by \(\mathcal{D}_{H}\) the division closure of \(\phi(K[H])\) in \(\mathcal{D}\). We say that \(\mathcal{D}\) is (left) \((N,H)\)**-free** if the map \[\mathcal{D}_{N}\otimes_{K[N]}K[H]\to\mathcal{D},\ d\otimes a\mapsto d\phi(a),\] is injective. An alternative reformulation of \((N,H)\)-freeness is the following: if \(q_{1},\ldots,q_{n}\in H\) are in different right \(N\)-cosets, then for any non-zero elements \(d_{1},\ldots,d_{n}\in\mathcal{D}_{N}\), the sum \(\sum_{i=1}^{n}d_{i}\phi(q_{i})\) is not equal to zero. It is clear that if \(N\leq H_{1}\leq H_{2}\leq G\) and \(\mathcal{D}\) is \((N,H_{2})\)-free, then it is also \((N,H_{1})\)-free. This property appeared in the work of Hughes [10] in the context of locally indicable groups. Let \(G\) be a locally indicable group and \(\phi:K[G]\to\mathcal{D}\) a division \(K[G]\)-ring. We say that \(\phi:K[G]\to\mathcal{D}\) (or simply \(\mathcal{D}\), when \(\phi\) is clear from context) is **Hughes-free** if \(\mathcal{D}\) is epic (i.e. \(\mathcal{D}=\mathcal{D}_{G}\)) and \((N,H)\)-free for any pair \(N\unlhd H\) of subgroups of \(G\) with \(H/N\cong\mathbb{Z}\). A very important contribution of Hughes is the following result proved in [10] (see also, [10] and [11, Theorem 5.2]). 
**Theorem 2.3** (Hughes).: _Let \(K\) be a field and \(G\) a locally indicable group. Then up to \(K[G]\)-isomorphism there exists at most one Hughes-free division \(K[G]\)-ring._ In view of this result, if \(G\) is locally indicable and the Hughes-free division \(K[G]\)-ring exists, we will denote it by \(\mathcal{D}_{K[G]}\). It was conjectured that \(\mathcal{D}_{K[G]}\) always exists and this was proven in [11, Corollary 6.7] in the case where \(K\) is of characteristic \(0\). The reader can consult [11] to see what is known at this moment about the general case. In order to generalize the notion of Hughes-free division ring to an arbitrary torsion-free group, Linnell proposed in [14] the notion of strongly Hughes-free division \(K[G]\)-ring. We say that an epic division \(K[G]\)-ring \(\phi\colon K[G]\to\mathcal{D}\) is **strongly Hughes-free** if it is \((N,H)\)-free for any pair \(N\unlhd H\) of subgroups of \(G\). There is another instance where \((N,H)\)-freeness appears. An equivalent formulation of the strong Atiyah conjecture over \(\mathbb{Q}\) for torsion-free groups \(G\) says that the division closure \(\mathcal{D}(G)\) of \(\mathbb{Q}[G]\) in the ring of affiliated operators \(\mathcal{U}(G)\) is a division ring (see, [12, Proposition 1.2]). The ring \(\mathcal{D}(G)\) is called a **Linnell** ring. By the discussion after [14, Problem 4.5], the Linnell ring (if it is a division ring) is \((N,G)\)-free for any subgroup \(N\) of \(G\). In particular, by Theorem 2.3, if \(G\) is locally indicable, \(\mathcal{D}(G)\cong\mathcal{D}_{\mathbb{Q}[G]}\) as \(\mathbb{Q}[G]\)-rings. One consequence of this fact is that for locally indicable groups the \(L^{2}\)-Betti numbers can be computed in a purely algebraic way: \[b_{k}^{(2)}(G)=\dim_{\mathcal{D}_{\mathbb{Q}[G]}}\operatorname{Tor}_{k}^{ \mathbb{Q}[G]}(\mathcal{D}_{\mathbb{Q}[G]},\mathbb{Q}).\] This also leads us to the following definition. Let \(G\) be a torsion-free group. We say that an epic division \(K[G]\)-ring \(\phi\colon K[G]\to\mathcal{D}\) is **Linnell** if it is \((N,G)\)-free for any subgroup \(N\) of \(G\). In view of the previous discussion it is tempting to propose the following variation of the strong Atiyah conjecture for torsion-free groups. **Conjecture 2**.: _Let \(K\) be a field and \(G\) a torsion-free group. Then a Linnell division \(K[G]\)-ring exists and is unique up to \(K[G]\)-isomorphism._ The strong Atiyah conjecture indicates that in the case where \(K\) is a subfield of \(\mathbb{C}\), a candidate for a Linnell division \(K[G]\)-ring is the division closure of \(K[G]\) in \(\mathcal{U}(G)\). If \(K\) is an arbitrary field and \(G\) has a right-invariant order, Dubrovin proposed such a candidate [10] (see also [1, 11]). Let \(G\) be a group with a right-invariant order \(\preceq\). **The space of Malcev-Neumann series \(\mathcal{MN}_{\preceq}(K[G])\)** is the abelian group consisting of formal infinite sums \(m=\sum_{g\in G}k_{g}g\), with \(k_{g}\in K\), such that the support of \(m\), \[\operatorname{supp}\left(\sum_{g\in G}k_{g}g\right)=\{g\in G:\ k_{g}\neq 0\}\] is a well-ordered subset of \(G\). We denote by \(\mathcal{E}_{\preceq}(K[G])\) the ring of endomorphisms of the abelian group \(\mathcal{MN}_{\preceq}(K[G])\) and we will use the notation where the elements of \(\mathcal{E}_{\preceq}(K[G])\) act on \(\mathcal{MN}_{\preceq}(K[G])\) on the right. 
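For instance, if \(G=\mathbb{Z}=\langle t\rangle\) with its natural order, then a subset of \(\mathbb{Z}\) is well-ordered precisely when it is bounded below, so the Malcev-Neumann series are exactly the formal sums \(\sum_{n\geq n_{0}}k_{n}t^{n}\) with \(k_{n}\in K\); that is, \(\mathcal{MN}_{\preceq}(K[\mathbb{Z}])\) may be identified with the underlying additive group of the field \(K((t))\) of formal Laurent series.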
Since the order \(\preceq\) is right-invariant, the ring \(K[G]\) is embedded into \(\mathcal{E}_{\preceq}(K[G])\) and we will identify the elements of \(K[G]\) with their images. The **Dubrovin** ring \(\mathcal{D}_{\preceq}(K[G])\) is the division closure of \(K[G]\) in \(\mathcal{E}_{\preceq}(K[G])\). Dubrovin conjectured that this ring is a division ring. The following result has appeared in the literature in slightly different contexts and we include its proof for the readers' convenience. **Proposition 2.4**.: _Let \(K\) be a field and \(G\) a group with a right-invariant order \(\preceq\). If the Dubrovin ring \(\mathcal{D}_{\preceq}(K[G])\) is a division ring, then it is also a Linnell division \(K[G]\)-ring._ Proof.: Let us denote \(\mathcal{D}_{\preceq}(K[G])\) by \(\mathcal{D}\). Let \(N\) be a subgroup of \(G\). We want to show that if \(q_{1},\dots,q_{n}\in G\) are in different right \(N\)-cosets, then for any non-zero elements \(d_{1},\dots,d_{n}\in\mathcal{D}_{N}\), the sum \(\sum_{i=1}^{n}d_{i}q_{i}\in\mathcal{D}\) is not equal to zero. Assume that \(\sum_{i=1}^{n}d_{i}q_{i}=0\). Without loss of generality we can also assume that \(q_{1}=1\). Given a subset \(T\) of \(G\), we denote by \(\pi_{T}\) the element of \(\mathcal{E}_{\preceq}(K[G])\) defined by means of \[\left(\sum_{g\in G}k_{g}g\right)\cdot\pi_{T}=\sum_{g\in T}k_{g}g\ (k_{g}\in K).\] Observe that elements of \(K[N]\) commute with \(\pi_{N}\). Hence, the elements of \(\mathcal{D}_{N}\) commute with \(\pi_{N}\) too. Thus, for each \(i=1,\dots,n\), \[\operatorname{supp}(1\cdot d_{i}q_{i})=\operatorname{supp}(1\cdot\pi_{N}d_{i}q_{i})=\operatorname{supp}(1\cdot d_{i}\pi_{N}q_{i})\subseteq Nq_{i}.\] Therefore, \[1\cdot d_{1}=\left(1\cdot\sum_{i=1}^{n}d_{i}q_{i}\right)\pi_{N}=0.\] Since \(d_{1}\) is invertible in \(\mathcal{E}_{\preceq}(K[G])\), we obtain a contradiction. As we have mentioned above, if \(G\) is locally indicable and \(K\) is of characteristic zero, then the Hughes-free division ring \(\mathcal{D}_{K[G]}\) is Linnell. Gräter proved in [1] the same result for an arbitrary field \(K\) if \(\mathcal{D}_{K[G]}\) exists. He used the following characterization of locally indicable groups: a group is locally indicable if and only if it admits a Conradian order. We say that an order \(\preceq\) on \(G\) is **Conradian** if for all positive elements \(f,g\succ 1\) of \(G\), there exists a natural number \(n\) such that \(g^{n}f\succ g\). **Theorem 2.5** (Gräter).: _Let \(K\) be a field and \(G\) a locally indicable group. Let \(\preceq\) be a Conradian order on \(G\). If \(\mathcal{D}_{K[G]}\) exists, then the Dubrovin ring \(\mathcal{D}_{\preceq}(K[G])\) is a division ring. In particular, \(\mathcal{D}_{K[G]}\) is the unique Linnell division \(K[G]\)-ring._ Proof.: By [1, Corollary 8.3, Theorem 8.1], \(\mathcal{D}_{\preceq}(K[G])\) is a division ring and is isomorphic to \(\mathcal{D}_{K[G]}\). By Proposition 2.4, \(\mathcal{D}_{\preceq}(K[G])\) is Linnell. Theorem 2.3 implies that \(\mathcal{D}_{K[G]}\) is the unique Linnell division \(K[G]\)-ring. ### Non-positive and not too positive immersions A two-complex is a two-dimensional CW-complex. Maps between two-complexes will always be assumed to be combinatorial. That is, \(n\)-cells map homeomorphically to \(n\)-cells. We will also always assume that attaching maps of two-cells are given by immersions. In particular, we are not allowing two-cells with boundary a point.
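To illustrate these conventions, consider the two-complex with a single vertex, a single one-cell \(a\) and a single two-cell attached along the closed path \(a^{k}\) for some \(k\geq 1\), that is, the standard presentation complex of \(\mathbb{Z}/k\mathbb{Z}\). Its attaching map is a combinatorial immersion, although for \(k\geq 2\) it is not an embedding.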
If \(\Gamma\) is a graph and \(\lambda\colon\mathbb{S}\looparrowright\Gamma\) is an immersion of a disjoint union of circles, we denote by \(X=(\Gamma,\lambda)\) the two-complex obtained by attaching a two-cell to \(\Gamma\) along the image of each component of \(\mathbb{S}\). A two-complex \(X\) has **non-positive immersions** if for every immersion \(Y\looparrowright X\) where \(Y\) is a compact connected two-complex, we either have \(\chi(Y)\leqslant 0\), or \(\pi_{1}(Y)=1\). A useful consequence of having non-positive immersions is the following, due to Wise [20]. **Theorem 2.6**.: _If \(X\) has non-positive immersions, then \(\pi_{1}(X)\) is locally indicable._ A variation of non-positive immersions is known as **weak non-positive immersions**. We say \(X\) has weak non-positive immersions if for every immersion \(Y\looparrowright X\) with \(Y\) compact and connected, we have \(\chi(Y)\leqslant 1\). **Proposition 2.7**.: _If \(X\) is a two-complex with non-positive immersions and with \(\pi_{1}(X)\neq 1\), then \(X\) has weak non-positive immersions._ Proof.: Suppose for a contradiction that \(X\) does not have weak non-positive immersions, so that there is an immersion \(Y\looparrowright X\) with \(Y\) finite, connected and \(\chi(Y)\geqslant 2\); since \(X\) has non-positive immersions, it follows that \(\pi_{1}(Y)=1\). Denote by \(\overline{X}\) the image of \(Y\) in \(X\). We may assume that every one-cell in \(Y\) is traversed by at least one attaching map of a two-cell. Suppose first that \(X^{(1)}\) deformation retracts to \(\overline{X}^{(1)}\) and that the map \(Y^{(1)}\to\overline{X}^{(1)}\) is a cover. Since \(Y\) was finite and \(\pi_{1}(Y)=1\), it follows that \(\pi_{1}(X)\) must be finite and so \(\pi_{1}(X)=1\) as \(\pi_{1}(X)\) is locally indicable by Theorem 2.6. Now suppose that \(X^{(1)}\) deformation retracts to \(\overline{X}^{(1)}\), but that the map \(Y^{(1)}\to\overline{X}^{(1)}\) is not a cover. Then there is some point \(v\in Y^{(0)}\) and a one-cell \(\overline{e}\) in \(\overline{X}\), adjacent to the image \(\overline{v}\) of \(v\), such that there is no one-cell adjacent to \(v\) that maps to \(\overline{e}\). Hence, we may attach a one-cell \(e\) along an endpoint to \(Y\), obtaining \(Y^{\prime}=Y\cup_{v}e\), and extend our immersion \(Y\looparrowright\overline{X}\) to \(Y^{\prime}\looparrowright\overline{X}\) by mapping \(e\) to \(\overline{e}\). Now consider the graph \(\Gamma=\overline{X}^{(1)}-\overline{e}\). If \(\Gamma\) is connected and simply connected, then \(X^{(1)}\) deformation retracts onto a copy of \(S^{1}\). Then all attaching maps of two-cells in \(X\) must be supported in this copy of \(S^{1}\) and so \(\pi_{1}(X)=1\) as \(\pi_{1}(X)\) is a locally indicable proper quotient of \(\mathbb{Z}\). Now suppose that \(\Gamma\) is connected, but not simply connected. Then there is a connected graph \(S\) with \(\chi(S)=0\) and an embedding \(S\hookrightarrow\overline{X}^{(1)}-\overline{e}\) with the image of \(S\) containing the other endpoint of \(\overline{e}\). Now define \(Y^{\prime\prime}\) to be the two-complex obtained from \(Y^{\prime}\) by attaching \(S\) to the other endpoint of \(e\). The resulting map \(Y^{\prime\prime}\looparrowright\overline{X}\) is an immersion and it is not hard to see that \(\chi(Y^{\prime\prime})\geqslant 1\) and \(\pi_{1}(Y^{\prime\prime})\cong\mathbb{Z}\), contradicting the hypothesis that \(X\) has non-positive immersions. Finally, suppose that \(\Gamma\) is not connected and denote by \(\Gamma^{\prime}\) the component not containing \(\overline{v}\).
Since there is a two-cell in \(\Gamma^{\prime}\) whose attaching map traverses \(\overline{e}\), it follows that \(\Gamma^{\prime}\) is not simply connected. Here we are using the fact that all attaching maps of two-cells are immersions. From here we may apply the same trick as above to obtain a contradiction. The case where \(X^{(1)}\) does not deformation retract to \(\overline{X}^{(1)}\) is handled in a similar way. Combining Proposition 2.7 with [20, Theorem 1.6], we have the following. **Corollary 2.8**.: _Let \(X\) be a two-complex with non-positive immersions. Then either \(X\) is aspherical or \(\pi_{1}(X)=1\)._ If \(X\) has non-positive immersions, it turns out that we may derive explicit bounds on the \(L^{2}\)-Betti numbers of finitely generated subgroups of \(\pi_{1}(X)\). The proof of this result below closely follows the proof of a similar result due to Louder-Wilton [14, Corollary 1.6]. We include a proof for completeness. **Proposition 2.9**.: _Let \(X\) be a two-complex with non-positive immersions. If \(H\) is a finitely generated subgroup of \(\pi_{1}(X)\), then_ \[b_{2}^{(2)}(H)\leqslant b_{1}^{(2)}(H).\] Proof.: Assume that \(H\) is non-trivial. By [14, Lemma 4.4], there is a sequence of \(\pi_{1}\)-surjective immersions of finite connected two-complexes \[Y_{0}\looparrowright Y_{1}\looparrowright\ldots\looparrowright Y_{i} \looparrowright\ldots\looparrowright X\] such that \[H=\varinjlim\pi_{1}(Y_{i}).\] Put \(H_{i}=\pi_{1}(Y_{i})\). Then there exists a finitely generated free group \(F\) and normal subgroups \(N\geq N_{i+1}\geq N_{i}\) (\(i\in\mathbb{N}\)) of \(F\) such that \[H\cong F/N,\ H_{i}\cong F/N_{i}\ \text{and}\ N=\cup_{i\in\mathbb{N}}N_{i}.\] In particular, we have that \[N_{\text{ab}}\cong\varinjlim(N_{i})_{\text{ab}}\] viewed as \(\mathbb{Q}[F]\)-modules. Thus, we have that \[b_{2}^{(2)}(H)-b_{1}^{(2)}(H) =\dim_{\mathcal{D}_{\mathbb{Q}[H]}}H_{2}(H;\mathcal{D}_{\mathbb{ Q}[H]})-\dim_{\mathcal{D}_{\mathbb{Q}[H]}}H_{1}(H;\mathcal{D}_{\mathbb{Q}[H]})\] \[=\dim_{\mathcal{D}_{\mathbb{Q}[H]}}\left(\mathcal{D}_{\mathbb{Q} [H]}\otimes_{\mathbb{Q}[F]}N_{\text{ab}}\right)-d(F)+1\] \[\leqslant\sup_{i\in\mathbb{N}}\dim_{\mathcal{D}_{\mathbb{Q}[H]}} \left(\mathcal{D}_{\mathbb{Q}[H]}\otimes_{\mathbb{Q}[F]}(N_{i})_{\text{ab}} \right)-d(F)+1\] \[=\sup_{i\in\mathbb{N}}\dim_{\mathcal{D}_{\mathbb{Q}[H]}}H_{2}(H_ {i};\mathcal{D}_{\mathbb{Q}[H]})-\dim_{\mathcal{D}_{\mathbb{Q}[H]}}H_{1}(H_{i} ;\mathcal{D}_{\mathbb{Q}[H]})\] Since \(X\) has non-positive immersions, we see that \(\chi(Y_{i})\leqslant 0\) for all \(i\). Since \(Y_{i}\) is aspherical for all \(i\) by Corollary 2.8, we have \[\chi(Y_{i})=\dim_{\mathcal{D}_{\mathbb{Q}[H]}}H_{2}(H_{i};\mathcal{D}_{\mathbb{ Q}[H]})-\dim_{\mathcal{D}_{\mathbb{Q}[H]}}H_{1}(H_{i};\mathcal{D}_{\mathbb{Q}[H]}).\] This finishes the proof. Louder-Wilton introduced a slight generalisation of non-positive immersions in [10]. If \(C_{k}\) is the standard presentation complex of \(\mathbb{Z}/k\mathbb{Z}\) and \(C_{k,l}\) is the \(l\)-fold cover of \(C_{k}\), then \(X\) has **not-too-positive immersions (NTPI)** if for every immersion \(Y\looparrowright X\) where \(Y\) is a compact connected two-complex, we have that \(Y\) is homotopy equivalent to a wedge of subcomplexes of \(C_{k,l}\)'s and a subcomplex \(Z\subset Y\) with \(\chi(Z)\leqslant 0\). The main result of [10] established NTPI for presentation complexes of all one-relator groups. Louder-Wilton [10] prove the proposition below for two-complexes with a single two-cell. 
Their proof only uses the fact that such a two-complex has NTPI and that the second integral homology group of finitely generated subgroups of one-relator groups must be torsion-free. Considering instead homology with rational coefficients, we may recover the following statement for all two-complexes with NTPI. The proof can be carried out as in Proposition 2.9. **Proposition 2.10**.: _Let \(X\) be a two-complex with NTPI. If \(H\) is a finitely generated subgroup of \(\pi_{1}(X)\) that is not a free product of finite cyclic groups, then_ \[b_{2}(H)\leqslant b_{1}(H)-1.\] We remark that in [10], the hypothesis that \(H\) not be a free product of finite cyclic groups is missing. ### Staggered and reducible complexes A two-complex \(X\) is **staggered** if there is a total order on its two-cells and on a subset of its one-cells satisfying the following: 1. The attaching map for each two-cell traverses at least one ordered one-cell. 2. If \(\alpha<\beta\) are two-cells, then \(\min\alpha<\min\beta\) and \(\max\alpha<\max\beta\), where \(\min\delta\) (respectively, \(\max\delta\)) is the smallest (respectively, largest) ordered one-cell traversed by the attaching map for the two-cell \(\delta\). A group has a **staggered presentation** if it has a presentation complex that is staggered. Groups with staggered presentations are a class of groups that arise naturally in the study of one-relator groups, see [11]. If \(X=(\Gamma,\lambda)\) is a two-complex, an edge \(e\subset\Gamma\) is a **reducing edge** (respectively, **collapsing edge**) if \(\lambda^{-1}(e)\) is contained in a single component (respectively, a single edge). If \(e\subset\Gamma\) is a reducing edge (respectively, a collapsing edge), we call the two-complex \(Z\) obtained from \(X\) by removing \(e\) and the unique two-cell whose attaching map traverses \(e\), the **reduction along \(e\)** (respectively, the **collapse along \(e\)**). In this case we will say that \(Z\subset X\) is a **reduction** (**collapse**). A two-complex is **reducible** (respectively, **collapsible**) if every subcomplex containing at least one two-cell has a reducing edge (respectively, collapsing edge). This evidently generalises staggered complexes. Reducible two-complexes were first introduced by Howie [14], who showed that they are aspherical and have locally indicable \(\pi_{1}\) when they are without proper powers. Here, a reducible two-complex \(X\) is **without proper powers** if for each two-cell \(\alpha\subset X\), the attaching map of \(\alpha\) does not represent a proper power in \(\pi_{1}(X-\alpha)\). Helfer-Wise established the non-positive immersions property for reducible complexes without proper powers in [14, Corollaries 5.6 & 7.6], stated below. A more general version is due to Howie-Short [11] and, independently, Millard [15]. **Theorem 2.11**.: _If \(X\) is a reducible complex without proper powers, then \(X\) has non-positive immersions._ If \(H\) and \(G\) are groups and \(w\in G*H\) is an element that is not conjugate into \(G\) or \(H\), then we say \(G*H/\langle\!\langle w\rangle\!\rangle\) is a **one-relator product** of \(H\) and \(G\). If \(w\) is moreover not a proper power in \(G*H\), then \(G*H/\langle\!\langle w\rangle\!\rangle\) is a **one-relator product without proper powers**.
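For example, taking \(G=\langle a\rangle\) and \(H=\langle b\rangle\) infinite cyclic and \(w=a^{2}b^{3}\), the quotient \(G*H/\langle\!\langle w\rangle\!\rangle\cong\langle a,b\mid a^{2}b^{3}\rangle\) is the trefoil knot group; since \(w\) is neither conjugate into a factor nor a proper power in \(G*H\cong F_{2}\), this is a one-relator product without proper powers.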
With this terminology, the class of groups isomorphic to the fundamental groups of reducible two-complexes (without proper powers) can also be described as the smallest class \(\mathcal{R}\) of groups containing all free groups that is closed under free products and one-relator products (without proper powers). Let \(F\) be a free group with respect to free generators \(X\) and let \(R=\{r_{1},\ldots,r_{n}\}\) be a subset of \(F\). We say that \(R\) is **strictly reducible with respect to \(X\)** if for some \(X_{0}\subset X\) and \(\{x_{1},\ldots,x_{n}\}\subset X\) we have that for each \(1\leq i\leq n\), 1. \(r_{i}\in\langle X_{i}\rangle\cong\langle X_{i-1}\rangle*\langle x_{i}\rangle\), where \(X_{i}=X_{0}\cup\{x_{1},\ldots,x_{i}\}\); 2. for some \(s\geq 1\), \(r_{i}=a_{1}\cdot b_{1}\cdot\ldots\cdot a_{s}\cdot b_{s}\), where \(a_{j}\in\langle X_{i-1}\rangle\) and \(b_{j}\in\langle x_{i}\rangle\) for each \(1\leq j\leq s\), and the image of each \(b_{j}\) in the group \(G_{i}=\langle X_{i-1}\rangle/\langle\!\langle r_{1},\ldots,r_{i-1}\rangle\!\rangle\) is non-trivial; 3. the image of \(r_{i}\) in \(G_{i-1}*\langle x_{i}\rangle\) is not a proper power. Observe that if the presentation complex of \(\langle X|R\rangle\) is reducible without proper powers, then we can modify \(R\) and obtain \(R^{\prime}\subset F(X)\) such that \(\langle X|R\rangle\cong\langle X|R^{\prime}\rangle\) and \(R^{\prime}\) is strictly reducible with respect to \(X\). We will say that a subgroup \(W\) of a free group \(F\) is **strictly reducible** if there is a set of free generators \(X\) of \(F\) and a subset \(R\) of \(F\) that is strictly reducible with respect to \(X\) such that \(W\) is generated by \(R\). ### Bireducible complexes In order to facilitate the proof of Theorem 1.5, we shall need to introduce a new class of two-complexes that lies in between staggered and reducible two-complexes. We say a two-complex \(X\) is **bireducible** if every subcomplex \(Z\subset X\) containing at least two two-cells has at least two reducing edges associated to distinct two-cells. The aim of this section is to show that such two-complexes have NTPI. We first require a version of the classic Magnus Freiheitssatz [10] for bireducible complexes. **Lemma 2.12**.: _If \(X\) is a finite bireducible two-complex, then the following hold:_ 1. _If_ \(Z\subset X\) _is a reduction, then the homomorphism_ \(\pi_{1}(Z)\to\pi_{1}(X)\) _induced by inclusion is injective for any choice of basepoint._ 2. _If_ \(U\subset X\) _is a subcomplex containing a single two-cell whose attaching map is surjective, then_ \(\pi_{1}(U)\to\pi_{1}(X)\) _is injective._ Proof.: The proof is by induction on the number of two-cells in \(X\). We prove the two statements at once. In the base case, the first statement is Magnus' Freiheitssatz [10] and the second statement is clear. Now suppose that \(X\) contains more than one two-cell and assume the inductive hypothesis. As \(X\) is bireducible, there is a second reduction \(Y\subset X\) such that \(X=Y\cup Z\) and \(Y\cap Z\) is a reduction of \(Y\) and \(Z\). If \(Z\) is not connected, then we attach an edge connecting the components. We do the same for \(Y\) and \(Y\cap Z\). Attaching edges modifies the fundamental group by possibly adding a free group as a free factor. By induction, \(\pi_{1}(Y\cap Z)\to\pi_{1}(Y),\pi_{1}(Z)\) are injective and so we have \[\pi_{1}(X)\cong\pi_{1}(Y)*_{\pi_{1}(Y\cap Z)}\pi_{1}(Z)\] whence the first statement follows.
Now \(U\) must be contained either in \(Y\) or in \(Z\) and so the second statement also follows by induction. Let \(\Lambda\) be a connected subgraph of a bireducible complex \(X\). We say that \(\Lambda\) is **small** if for every subcomplex \(Y\) of \(X\) containing \(\Lambda\), \(Y\) is either a graph or there exists a reduction \(Z\subset Y\) such that \(\Lambda\subseteq Z\). By Lemma 2.12, the map \(\pi_{1}(\Lambda)\to\pi_{1}(X)\) is injective and so we are justified in calling \(\pi_{1}(\Lambda)\) a **Magnus subgroup** of \(\pi_{1}(X)\). In the language of Lemma 2.12, any proper subgraph \(\Lambda\subset U\) is small. An immersion \(\lambda\colon S^{1}\looparrowright\Gamma\) of a cycle is **primitive** if it does not factor through a proper cover \(S^{1}\looparrowright S^{1}\). If \(\lambda\colon\mathbb{S}\looparrowright\Gamma\) is an immersion of a disjoint union of circles, we call it primitive if the restriction to each component is primitive. **Lemma 2.13**.: _Let \(X=(\Gamma,\lambda)\) be a finite bireducible two-complex. Then \(X\) is a reducible two-complex without proper powers if and only if \(\lambda\) is primitive._ Proof.: Let \(S^{1}\looparrowright\Gamma\) be the attaching map of one of the two-cells \(\alpha\). By Lemma 2.12, if \(U\subset X\) is the smallest subcomplex containing \(\alpha\), then \(\pi_{1}(U)\) is a subgroup of \(\pi_{1}(X)\). If \(S^{1}\looparrowright\Gamma\) represents a proper power in \(\pi_{1}(X-\alpha)\), then it would be homotopic to a cycle \(S^{1}\looparrowright U^{(1)}\) that is not primitive. In particular, \(\pi_{1}(U)\) would have torsion. By the characterisation of one-relator groups with torsion due to Karrass-Magnus-Solitar [10], \(\pi_{1}(U)\) is torsion-free if and only if \(S^{1}\looparrowright\Gamma\) is primitive. The following theorem follows by combining Lemma 2.13 with [13, Theorem 5.5 & Corollary 7.6]. **Theorem 2.14**.: _Let \(\Gamma\) be a finite graph and let \(\lambda\colon\mathbb{S}\looparrowright\Gamma\) be an immersion of primitive cycles so that \(X=(\Gamma,\lambda)\) is a bireducible two-complex. Let \(\Theta\) be a finite connected non-empty graph and \(\theta\colon\Theta\looparrowright\Gamma\) an immersion. If \(\mathbb{S}^{\prime}\) denotes the union of the cycles of the pullback graph \(\Theta\times_{\Gamma}\mathbb{S}\), then either \((\Theta,\mathbb{S}^{\prime}\looparrowright\Theta)\) is collapsible, or_ \[|\pi_{0}(\mathbb{S}^{\prime})|\leq-\chi(\Theta).\] The proof of the following corollary is identical to that of [13, Corollary 1.5], replacing the use of [13, Theorem 1.2] with Theorem 2.14. We remark here that in [13], the term 'reducible' is used to mean 'has a collapsing edge' in our terminology. **Corollary 2.15**.: _If \(X\) is a bireducible two-complex, then \(X\) has NTPI._ ## 3. Criteria for the coherence of group algebras and homological coherence The proof of the following result is a variation of an argument due to Kropholler-Linnell-Lück [11, Lemma 4]. **Proposition 3.1**.: _Let \(R\) be a ring and assume that \(R\) can be embedded in a division ring \(\mathcal{D}\). Let \(M\) be a finitely generated left \(R\)-module of projective dimension at most 1. Then \(\dim_{\mathcal{D}}\operatorname{Tor}_{1}^{R}\left(\mathcal{D},M\right)\) is finite if and only if \(M\) is finitely presented._ Proof.: Since \(M\) is a finitely generated left \(R\)-module of projective dimension at most 1, there exists an exact sequence of left \(R\)-modules \[0\to P_{1}\to P_{0}\to M\to 0\] with \(P_{0}\) and \(P_{1}\) projective and \(P_{0}\) finitely generated.
The calculation of \(\operatorname{Tor}_{1}^{R}\left(\mathcal{D},M\right)\) gives us the exact sequence of left \(\mathcal{D}\)-modules \[0\to\operatorname{Tor}_{1}^{R}\left(\mathcal{D},M\right)\to\mathcal{D}\otimes_ {R}P_{1}\to\mathcal{D}\otimes_{R}P_{0}.\] If \(P_{1}\) is finitely generated, then \(\dim_{\mathcal{D}}\operatorname{Tor}_{1}\left(\mathcal{D},M\right)\) is certainly finite. So now we assume that \(\dim_{\mathcal{D}}\operatorname{Tor}_{1}\left(\mathcal{D},M\right)\) is finite and show that \(P_{1}\) is finitely generated. Since \(\dim_{\mathcal{D}}\operatorname{Tor}_{1}\left(\mathcal{D},M\right)\) and \(\dim_{\mathcal{D}}\mathcal{D}\otimes_{R}P_{0}\) are finite, \(n=\dim_{\mathcal{D}}\mathcal{D}\otimes_{R}P_{1}\) is finite as well. Since \(\mathcal{D}\) is a division ring, there are elements \(m_{1},\ldots,m_{n}\in P_{1}\) such that \[\mathcal{D}\otimes_{R}P_{1}=\sum_{i=1}^{n}\mathcal{D}\left(1\otimes m_{i} \right).\] Since \(P_{1}\) is projective, there exists a free left \(R\)-module \(L\) such that \(L=P_{1}\oplus P_{2}\). Thus, any element \(l\in L\) can be uniquely written as \(l=p_{1}+p_{2}\), where \(p_{1}\in P_{1}\) and \(p_{2}\in P_{2}\). We denote \(\pi_{P_{1}}:L\to P_{1}\) by \(\pi_{P_{1}}\left(l\right)=p_{1}\). Put \(N=\sum Rm_{i}\). Since \(N\) is finitely generated, there exists a finitely generated free summand \(L_{1}\) of \(L\) that contains \(N\). Consider the natural map \(\tau:P_{1}/N\to L/L_{1}\). Since \(\mathcal{D}\otimes_{R}\left(P_{1}/N\right)=0\), \(\mathcal{D}\otimes_{R}\operatorname{Im}\tau=0\). Thus, since \(\operatorname{Im}\tau\leq L/L_{1}\) is a submodule of a free \(R\)-module, \(\operatorname{Im}\tau=\{0\}\). This implies that \(P_{1}\leq L_{1}\), and so \[P_{1}=\pi_{P_{1}}\left(P_{1}\right)=\pi_{P_{1}}\left(L_{1}\right)\] is finitely generated. **Corollary 3.2**.: _Let \(R\) be a ring and assume that \(R\) can be embedded in a division ring \(\mathcal{D}\). Assume that \(R\) is of global dimension at most 2 and the right \(R\)-module \(\mathcal{D}\) is of weak dimension at most 1. Then \(R\) is coherent._ Proof.: Let \(I\) be a finitely generated left ideal of \(R\). Let \(0\to P\to R^{k}\to I\to 0\) be an exact sequence of left \(R\)-modules. Hence we obtain the exact sequence \[0\to P\to R^{k}\to R\to R/I\to 0.\] Since \(R\) is of global dimension at most 2, the projective dimension of \(R/I\) is at most 2. Hence, by [10, Proposition 8.6(2)], \(P\) is projective, and so \(I\) is of projective dimension at most 1. Since \(\operatorname{Tor}_{1}\left(\mathcal{D},I\right)=\operatorname{Tor}_{2}\left( \mathcal{D},R/I\right)\) and the right \(R\)-module \(\mathcal{D}\) is of weak dimension at most 1, \(\operatorname{Tor}_{1}\left(\mathcal{D},I\right)=0\). Thus, by Proposition 3.1, \(I\) is finitely presented, and so, \(R\) is coherent. As an application we show that the group algebras of ascending HNN-extensions of free groups are coherent. **Theorem 3.3**.: _Let \(K\) be a field and \(G\) an ascending HNN-extension of a free group. Then the group algebra \(K[G]\) is coherent._ Proof.: Let us briefly explain the construction of \(\mathcal{D}_{K[G]}\). For details see, for example, [10]. The group \(G\) has a locally free normal subgroup \(N\) such that \(G/N\cong\mathbb{Z}\). Let \(t\in G\) be such that \(G/N\) is generated by \(tN\) and let \(\tau:K[N]\to K[N]\) be the automorphism induced by the conjugation by \(t\): \(\tau\left(a\right)=tat^{-1}\). 
Then \(K[G]\) is naturally isomorphic to the ring of twisted Laurent polynomials \(K[N][t^{\pm 1},\tau]\). Since \(N\) is locally free, there exists \(\mathcal{D}_{K[N]}\) (see [10, Theorem 1.1 and Theorem 3.7]). Moreover, \(\operatorname{Tor}_{2}^{K[N]}\left(\mathcal{D}_{K[N]},M\right)=0\) for any left \(K[N]\)-module \(M\). Since \(\mathcal{D}_{K[N]}\) is unique, we can extend \(\tau\) to an automorphism \(\mathcal{D}_{K[N]}\to\mathcal{D}_{K[N]}\), which we will also call \(\tau\). Then \(\mathcal{D}_{K[G]}\) is isomorphic to the Ore classical ring of fractions of \(\mathcal{D}_{K[N]}[t^{\pm 1},\tau]\). Let \(M\) be a left \(K[G]\)-module. Then by Shapiro's lemma we obtain \[\operatorname{Tor}_{2}^{K[G]}\left(\mathcal{D}_{K[N]}[t^{\pm 1},\tau],M\right)= \operatorname{Tor}_{2}^{K[N]}\left(\mathcal{D}_{K[N]},M\right)=0.\] Thus, \(\operatorname{Tor}_{2}^{K[G]}\big{(}\mathcal{D}_{K[G]},M\big{)}=0\) as well. Hence the right \(K[G]\)-module \(\mathcal{D}_{K[G]}\) is of weak dimension at most \(1\). Since \(G\) is a HNN-extension of a free group, it is of cohomological dimension at most \(2\). Hence \(K[G]\) is of global dimension at most \(2\). Therefore, \(K[G]\) is coherent by Corollary 3.2. Proposition 3.1 implies also the following criterion for a group to be of type \(\operatorname{FP}_{2}(k)\). **Corollary 3.4**.: _Let \(G\) be a finitely generated group of cohomological dimension two, let \(k\) be a commutative ring and suppose that \(k[G]\) can be embedded in a division ring \(\mathcal{D}\). Then \(G\) is of type \(\operatorname{FP}_{2}(k)\) if and only if \(\dim_{\mathcal{D}}\operatorname{Tor}_{2}^{k[G]}(\mathcal{D},k)\) is finite._ Proof.: Since \(G\) has cohomological dimension at most two, the left \(kG\)-module \(I_{k[G]}\) has projective dimension at most one. Let \(0\to P_{1}\to P_{0}\to I_{k[G]}\to 0\) be a projective resolution. By definition of \(\operatorname{Tor}\), we have the exact sequence \[\operatorname{Tor}_{1}^{k[G]}(\mathcal{D},I_{k[G]})\to\mathcal{D}\otimes_{k[G ]}P_{1}\to\mathcal{D}\otimes_{k[G]}P_{0}.\] Since \(P_{0}\) is finitely generated, we see that \(P_{1}\) is finitely generated if and only if \[\dim_{\mathcal{D}}\operatorname{Tor}_{1}^{k[G]}(\mathcal{D},I_{k[G]})=\dim_{ \mathcal{D}}\operatorname{Tor}_{2}^{k[G]}(\mathcal{D},k)\] is finite by Proposition 3.1. This completes the proof. Corollary 3.4 coupled with Proposition 2.9 yields a proof of Theorem 1.2(1). Proof of Theorem 1.2(1).: By Theorem 2.6, \(\pi_{1}(X)\) is locally indicable. Let \(H\) be a finitely generated subgroup of \(G\). By [13], \(\mathbb{Z}[H]\) is embedded in \(\mathcal{D}_{\mathbb{Q}[H]}\). By Proposition 2.9, \[b_{2}^{(2)}(H)=\dim_{\mathcal{D}_{\mathbb{Q}[H]}}\operatorname{Tor}_{2}^{ \mathbb{Q}[H]}(\mathcal{D}_{\mathbb{Q}[H]},\mathbb{Q})\] is finite. Hence, by Corollary 3.4, \(H\) is of type \(\operatorname{FP}_{2}(\mathbb{Z})\). The following proposition was communicated to us by Sam Fisher. **Proposition 3.5**.: _Let \(G\) be a locally indicable group of cohomological dimension two. Let \(K\) be a field and assume that \(\mathcal{D}_{K[G]}\) exists. 
If \(\operatorname{Tor}_{2}^{K[G]}(\mathcal{D}_{K[G]},K)=0\), then for any subgroup \(H\) of \(G\), \(\operatorname{Tor}_{2}^{K[H]}(\mathcal{D}_{K[H]},K)=0\)._ Proof.: Observe that, by Shapiro's lemma, \[\operatorname{Tor}_{2}^{K[H]}(\mathcal{D}_{K[H]},K)\cong\operatorname{Tor}_{2 }^{K[G]}(\mathcal{D}_{K[H]}\otimes_{K[H]}K[G],K).\] By Theorem 2.5, the right \(K[G]\)-module \(M=\mathcal{D}_{K[H]}\otimes_{K[H]}K[G]\) can be seen as a submodule of \(\mathcal{D}_{K[G]}\). Consider the exact sequence \[0\to M\to\mathcal{D}_{K[G]}\to\mathcal{D}_{K[G]}/M\to 0,\] which induces the exact sequence \[\operatorname{Tor}_{3}^{K[G]}(\mathcal{D}_{K[G]}/M,K)\to\operatorname{Tor}_{2 }^{K[G]}(M,K)\to\operatorname{Tor}_{2}^{K[G]}(\mathcal{D}_{K[G]},K).\] Since \(\operatorname{Tor}_{2}^{K[G]}(\mathcal{D}_{K[G]},K)=0\) and, by Proposition 2.2, \(K[G]\) is of global dimension at most \(2\), we conclude that \(\operatorname{Tor}_{2}^{K[G]}(M,K)=0\). Therefore, \(\operatorname{Tor}_{2}^{K[H]}(\mathcal{D}_{K[H]},K)=0\). **Corollary 3.6**.: _Let \(G\) be a locally indicable group of cohomological dimension 2 with \(b_{2}^{(2)}(G)=0\). Then for any subgroup \(H\) of \(G\), \(b_{2}^{(2)}(H)=0\)._ _Remark_.: The conclusion of the corollary is still valid without assumption that \(G\) is locally indicable. In this case \(b_{i}^{(2)}(G)\) is defined as \(\dim_{\mathcal{R}(G)}\operatorname{Tor}_{i}^{\mathbb{C}[G]}(\mathcal{R}(G), \mathbb{C})\) (see [1, Section 3]) and one can use that the multiplicative map \[\mathcal{R}(H)\otimes_{\mathbb{C}[H]}\mathbb{C}[G]\to\mathcal{R}(G)\] is also injective. A variation of this result can also be found in [1]. Combining Corollary 3.6 with Corollary 3.4, we obtain Theorem 1.2(2). ## 4. From homological coherence to coherence The reader is referred to Serre's monograph [11] for the necessary background in Bass-Serre theory. We will always assume that our trees and group actions are simplicial. If \(G\) is a group acting on a tree \(\mathcal{T}\), then we call \(\mathcal{T}\) a \(G\)**-tree**. If \(H\subseteq G\), we denote by \(\mathcal{T}/H\) the space obtained from \(\mathcal{T}\) by identifying each point \(t\in\mathcal{T}\) with \(gt\) for each \(g\in H\). A \(G\)-tree \(\mathcal{T}\) is **cocompact** if \(\mathcal{T}/G\) is compact. A map between trees \(\mathcal{S}\to\mathcal{T}\) is a **morphism** if it sends vertices to vertices and edges to (possibly trivial) edge paths. A \(G\)-tree \(\mathcal{T}\) is **dominated** by another \(G\)-tree \(\mathcal{S}\) if there is a \(G\)-equivariant morphism \(\mathcal{S}\to\mathcal{T}\). We call such a morphism a **domination map**. A **refinement** of \(G\)-trees \(\mathcal{S}\to\mathcal{T}\) is a domination map that is obtained by collapsing certain edges to vertices. **Lemma 4.1**.: _Let \(G\) be a group and let \(\mathcal{S}\) be a \(G\)-tree. If \(N_{1},\ldots,N_{k}\leq G\) are subgroups that stabilise vertices of \(\mathcal{S}\), then \(\mathcal{T}=\mathcal{S}/\langle\!\langle N_{1},\ldots,N_{k}\rangle\!\rangle\) is a \(G/\langle\!\langle N_{1},\ldots,N_{k}\rangle\!\rangle\)-tree._ Proof.: Denote by \(N=\langle\!\langle N_{1},\ldots,N_{k}\rangle\!\rangle\) and consider its action on \(\mathcal{S}\). The quotient of \(N\) by the normal closure of its elliptic elements \(K\lhd N\) is the fundamental group of \(\mathcal{S}/N=\mathcal{T}\) (see [11, Section 5]). Since \(N\) is generated by elliptic elements, it follows that \(K=N\) and so \(\mathcal{T}\) is a \(G/N\)-tree. 
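For example, if \(G=A*B*C\), then the Bass-Serre tree of the two-edge splitting of \(G\) given by a graph of groups whose underlying graph is a path with vertex groups \(A\), \(B\), \(C\) and trivial edge groups refines the Bass-Serre tree of the one-edge splitting \(G=(A*B)*C\): the domination map collapses the edges lying above the edge joining the vertices with groups \(A\) and \(B\).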
The following result is [1, Proposition 2.17] due to Guirardel-Levitt, which generalises results of Dunwoody [13], Baumslag-Shalen [1, Theorem 1] and Fujiwara-Papasoglu [12, Proposition 5.12]. **Proposition 4.2**.: _Let \(G\) be a group and let_ \[\mathcal{T}_{1}\leftarrow\ldots\leftarrow\mathcal{T}_{k}\leftarrow\mathcal{T}_{k+1}\leftarrow\ldots\] _be a sequence of refinements of cocompact \(G\)-trees. If \(G\) is finitely presented, then there is a cocompact \(G\)-tree \(\mathcal{S}\) with finitely generated vertex and edge stabilisers that dominates \(\mathcal{T}_{i}\) for all \(i\)._ Before proceeding, we first need the following useful lemma due to Bieri-Strebel [1, Lemma 2.1]. If \(G\) is a group, then \(G_{\mathrm{ab}}\) denotes its abelianisation. **Lemma 4.3**.: _Let \(k\) be a commutative ring with \(1\neq 0\) and let \(G\) be a group. Then \(G\) is of type \(\operatorname{FP}_{2}(k)\) if and only if there is a short exact sequence_ \[1\to N\to H\to G\to 1\] _where \(H\) is finitely presented and \(N_{\mathrm{ab}}\otimes_{\mathbb{Z}}k=0\)._ We now generalise Proposition 4.2 to groups of type \(\operatorname{FP}_{2}(k)\) for an arbitrary commutative ring \(k\) with \(1\). The proof is essentially an extension of the proof of [1, Theorem A]. When \(k=\mathbb{Z}/2\mathbb{Z}\) and \(\mathcal{T}_{i}=\mathcal{T}\) for all \(i\), the following is due to Dicks-Dunwoody [13, Theorem 4.4]. **Theorem 4.4**.: _Let \(k\) be a commutative ring with \(1\neq 0\), let \(G\) be a group and let_ \[\mathcal{T}_{1}\leftarrow\ldots\leftarrow\mathcal{T}_{k}\leftarrow\mathcal{T}_{k+1}\leftarrow\ldots\] _be a sequence of refinements of cocompact \(G\)-trees. If \(G\) is of type \(\operatorname{FP}_{2}(k)\), then there is a cocompact \(G\)-tree \(\mathcal{S}\) with finitely generated vertex and edge stabilisers that dominates \(\mathcal{T}_{i}\) for all \(i\)._ Proof.: Since \(G\) is of type \(\operatorname{FP}_{2}(k)\), there is a short exact sequence \[1\to N\to H\xrightarrow{\pi}G\to 1\] where \(H\) is finitely presented and \(N_{\operatorname{ab}}\otimes_{\mathbb{Z}}k=0\) by Lemma 4.3. Now \(\pi\) turns \(\mathcal{T}_{i}\) into an \(H\)-tree. Since \(H\) is finitely presented, by Proposition 4.2 there is a cocompact \(H\)-tree \(\mathcal{S}^{\prime}\) with finitely generated vertex and edge stabilisers that dominates the \(H\)-trees \(\mathcal{T}_{i}\) for all \(i\). Denote by \(v_{1},\ldots,v_{n}\) orbit representatives of vertices in \(\mathcal{S}^{\prime}\). Denote by \(H_{1},\ldots,H_{n}\leq H\) the corresponding stabiliser subgroups, which are finitely generated. Let \(N_{i}=\ker\left(\pi|_{H_{i}}\right)\). We now consider the quotients \[Q=H/\langle\!\langle N_{1},\ldots,N_{n}\rangle\!\rangle,\] \[\mathcal{S}=\mathcal{S}^{\prime}/\langle\!\langle N_{1},\ldots,N_{n}\rangle\!\rangle.\] By Lemma 4.1, \(\mathcal{S}\) is a \(Q\)-tree. Since \(\mathcal{S}/Q\cong\mathcal{S}^{\prime}/H\), we see that \(\mathcal{S}\) is a cocompact \(Q\)-tree. As \(\pi\) factors through \(H\to Q\), we obtain domination maps of cocompact \(Q\)-trees \(\mathcal{S}\to\mathcal{T}_{i}\). Denote by \(N_{Q}\) the image of \(N\) in \(Q\). By definition, \(N_{Q}\) intersects each vertex stabiliser of the \(Q\)-tree \(\mathcal{S}\) trivially and thus acts freely on \(\mathcal{S}\). A group acting freely on a tree is free and so \(N_{Q}\) must be free. Since \(N_{\operatorname{ab}}\otimes_{\mathbb{Z}}k=0\) and \(N_{Q}\) is a quotient of \(N\), this implies that \((N_{Q})_{\operatorname{ab}}\otimes_{\mathbb{Z}}k=0\).
If \(F\) is a free group, then \(F_{\operatorname{ab}}\otimes_{\mathbb{Z}}k=0\) if and only if \(F\) is trivial. Thus, \(N_{Q}=1\) and \(Q\cong G\). This makes \(\mathcal{S}\) a cocompact \(G\)-tree, dominating the \(G\)-trees \(\mathcal{T}_{i}\). The vertex stabilisers of the \(G\)-tree \(\mathcal{S}\) are conjugates of \(H_{1}/N_{1},\ldots,H_{n}/N_{n}\) and so are finitely generated. Similarly, the edge stabilisers are conjugates of quotients of the edge stabilisers of the \(H\)-tree \(\mathcal{S}^{\prime}\) and so are also finitely generated. This completes the proof. Now we prove Theorem 1.3. Proof of Theorem 1.3.: Let \(H\leq G\) be a finitely generated subgroup. Since \(H\) is finitely generated, there is an \(H\)-invariant subtree \(\mathcal{S}\subset\mathcal{T}\) such that \(\mathcal{S}\) is a cocompact \(H\)-tree. Now each vertex stabiliser for the action of \(H\) on \(\mathcal{S}\) is contained in a vertex stabiliser for the action of \(G\) on \(\mathcal{T}\). Since \(H\) has type \(\operatorname{FP}_{2}(k)\), by Theorem 4.4, \(H\) splits as a finite graph of groups with finitely generated vertex and edge groups, each conjugate into vertex stabilisers for the action of \(G\) on \(\mathcal{T}\). Since the vertex stabilisers for the action of \(G\) on \(\mathcal{T}\) are coherent, it follows that \(H\) splits as a finite graph of groups with finitely presented vertex and edge groups. Hence, \(H\) is finitely presented. We record the following immediate consequence of Theorem 1.3 for future use. **Theorem 4.5**.: _Denote by \(\mathcal{G}_{0}\) the class of all coherent groups. For each \(i\geqslant 1\), define \(\mathcal{G}_{i}\) to consist of all groups that split as graphs of groups with vertex groups lying in \(\mathcal{G}_{i-1}\). Set \(\mathcal{G}=\bigcup_{i\geqslant 0}\mathcal{G}_{i}\) and let \(k\) be a ring with \(1\neq 0\). If \(G\in\mathcal{G}\) has the property that every subgroup is of type \(\operatorname{FP}_{2}(k)\), then \(G\) is coherent._ We are now ready to settle Baumslag's conjecture. Proof of Theorem 1.1(1).: One-relator groups have vanishing second \(L^{2}\)-Betti number by work of Dicks-Linnell [6], and hence so do all of their finite index subgroups. One-relator groups are virtually torsion-free by Fischer-Karrass-Solitar [10] and hence, by work of Brodskii [11] (see also [11, Corollary 3.2]), they are virtually locally indicable. Since one-relator groups virtually have cohomological dimension at most two, applying Theorem 1.2(2) we see that one-relator groups are homologically coherent. Alternatively, \(G\) has a finite index subgroup that is the fundamental group of a two-complex with non-positive immersions [12, Theorem 6.1] and so we could also use Theorem 1.2(1) to conclude that \(G\) is homologically coherent. We could also deduce homological coherence of \(G\) from the coherence of \(\mathbb{Q}[G]\). If \(G\) is a one-relator group, the Magnus-Moldavanskii hierarchy (in the form of [13] or [14, Theorem 5.2]) tells us that there is a finite sequence of one-relator subgroups \[G_{N}\leq\ldots\leq G_{1}\leq G_{0}=G\] such that \(G_{N}\) is a free product of a free group and a finite cyclic group and \(G_{i}\) splits as an HNN-extension over \(G_{i+1}\). Since \(G\) is homologically coherent, Theorem 4.5 implies that \(G\) is coherent. **Corollary 4.6**.: _Let \(X\) be the presentation complex of a one-relator group and let \(H\leq\pi_{1}(X)\) be a torsion-free finitely generated subgroup.
There is a \(\pi_{1}\)-injective immersion \(Y\looparrowright X\) with \(Y\) a finite two-complex with non-positive immersions such that \(\pi_{1}(Y)=H\) and \(\pi_{1}(Y)\to\pi_{1}(X)\) realises the inclusion \(H\to\pi_{1}(X)\). In particular, finitely generated torsion-free subgroups of one-relator groups have finite aspherical presentations._ Proof.: Suppose that \(X\) is the presentation complex for \(\langle S\mid w^{n}\rangle\) where \(w\) is not a proper power in \(F(S)\). Using [12, Lemma 6.12], we see that there is an immersion \(Y\looparrowright X\) with \(Y\) finite such that \(\pi_{1}(Y)=H\) and \(\pi_{1}(Y)\to\pi_{1}(X)\) realises the inclusion \(H\to\pi_{1}(X)\). Since \(\pi_{1}(Y)\) is a torsion-free subgroup of \(\pi_{1}(X)\), it cannot contain any conjugate of \(w^{k}\) for any \(1\leqslant k<|n|\). Since \(X\) has NTPI by [12, Corollary 1.5], it follows that any two-complex \(Z\) immersing into \(Y\) must be homotopy equivalent to a wedge of discs and a subcomplex with non-positive Euler characteristic. Hence, either \(\pi_{1}(Z)=1\) or \(\chi(Z)\leqslant 0\) and so \(Y\) has non-positive immersions. The final statement follows from Corollary 2.8. ## 5. Construction of flat modules In this section we will show that certain modules are flat. This will be used later in calculations of different Tor functors. ### Some auxiliary results Let \(R\) be a ring. A left \(R\)-module \(M\) is **torsion-free** if for any \(0\neq r\in R\) and \(m\in M\), we have that \(r\cdot m=0\) implies that \(m=0\). **Lemma 5.1**.: _Let \(K\) be a field and let \(F\) be a free group. Let \(V\) be a left \(K[F]\)-module and let \(G=F/N\) be a locally indicable group. Let_ \[0\neq\alpha=\sum_{i=1}^{n}c_{i}\cdot f_{i}\in K[F]\;\left(0\neq c_{i}\in K,\ f_{i}\in F\right).\] _Assume that all \(g_{i}=f_{i}N\in G\) are different. If \(\mathcal{D}_{K[G]}\) exists, then_ \[\operatorname{Ann}_{V\otimes_{K}\mathcal{D}_{K[G]}}\left(\alpha\right)=\left\{m\in V\otimes_{K}\mathcal{D}_{K[G]}\colon\alpha\cdot m=0\right\}\] _is trivial. In particular, if \(V\) is a left \(K[G]\)-module, then \(V\otimes_{K}\mathcal{D}_{K[G]}\) is torsion-free as a left \(K[G]\)-module._ Proof.: We prove the lemma by induction on \(n\). If \(n=1\), the statement is clear. Consider now the case where \(n>1\) and assume that \(\operatorname{Ann}_{V\otimes_{K}\mathcal{D}_{K[G]}}\left(\alpha\right)\neq 0\). Without loss of generality we can assume that \(f_{1}=1\). Let \(H=\left\langle g_{2},\ldots,g_{n}\right\rangle\). Since \(n>1\) and all \(g_{i}\) are different, \(H\) is not trivial. Let \(\widetilde{H}=\left\langle f_{2},\ldots,f_{n}\right\rangle\). Since \(G\) is locally indicable, there exists an epimorphism \(\phi:H\to\mathbb{Z}\), which induces an epimorphism \(\widetilde{\phi}:\widetilde{H}\to\mathbb{Z}\) satisfying \(\widetilde{\phi}\left(x\right)=\phi\left(xN\right)\). Let \(s\in\widetilde{H}\) be such that \(\widetilde{\phi}\left(s\right)=1\). Then we can write \[\alpha=\sum_{j=a}^{b}\alpha_{j}\cdot s^{j},\text{ with }\alpha_{j}\in K[\ker\widetilde{\phi}]\text{ and }\alpha_{b}\neq 0.\] Observe that if we write \[\alpha_{b}=\sum_{k=1}^{l}d_{k}\cdot h_{k}\ \left(0\neq d_{k}\in K,\ h_{k}\in F\right),\] then all \(h_{k}N\) are different and \(l<n\). For simplicity we will write \(\mathcal{D}\) instead of \(\mathcal{D}_{K[G]}\). We can see \(V\otimes_{K}\mathcal{D}\) as a \((K[F],\mathcal{D})\)-bimodule.
Therefore, given a basis \(B\) of \(\mathcal{D}\) as a left \(\mathcal{D}_{H}\)-module we obtain that \[V\otimes_{K}\mathcal{D}=\oplus_{q\in B}\left(V\otimes_{K}\mathcal{D}_{H}\,q \right)=\oplus_{q\in B}\left(V\otimes_{K}\mathcal{D}_{H}\right)q.\] Thus, since \(\operatorname{Ann}_{V\otimes_{K}\mathcal{D}}\left(\alpha\right)\neq 0\), \(\operatorname{Ann}_{V\otimes_{K}\mathcal{D}_{H}}\left(\alpha\right)\neq 0\) as well. Let \(t=sN\in H\). Since \(\mathcal{D}\) is Hughes-free, the ring \(S\) generated by \(\mathcal{D}_{\ker\phi}\) and \(t\) is isomorphic to the ring of twisted polynomials \(\mathcal{D}_{\ker\phi}[t,\tau]\), where \(\tau\) is the automorphism of \(\mathcal{D}_{\ker\phi}\) induced by conjugation by \(t\). Observe that \(\mathcal{D}_{H}\) is the Ore ring of fractions of \(S\). Thus, since \(\operatorname{Ann}_{V\otimes_{K}\mathcal{D}_{H}}\left(\alpha\right)\neq 0\), we also obtain that \(\operatorname{Ann}_{V\otimes_{K}S}\left(\alpha\right)\neq 0\). Let \(0\neq m\in\operatorname{Ann}_{V\otimes_{K}S}\left(\alpha\right)\). We can write \[m=\sum_{j=c}^{d}m_{j}\cdot t^{j},\text{ with }m_{j}\in V\otimes_{K}\mathcal{D} _{\ker\phi}\text{ and }m_{d}\neq 0.\] Observe that for every \(j\), \[s^{j}\left(V\otimes_{K}\mathcal{D}_{\ker\phi}\right)=V\otimes_{K}t^{j}\, \mathcal{D}_{\ker\phi}=\left(V\otimes_{K}\mathcal{D}_{\ker\phi}\right)t^{j}.\] Thus, since, \(\alpha\cdot m=0\), \(\alpha_{b}s^{b}\cdot m_{d}t^{d}=0\). Since \(m_{d}\neq 0\), \(s^{b}\cdot m_{d}t^{d}\neq 0\). Hence \(\operatorname{Ann}_{V\otimes_{K}\mathcal{D}}\left(\alpha_{b}\right)\neq 0\). But this contradicts the inductive hypothesis. Given a left \(K[G]\)-module \(M\), let \(M^{*}\) be the right \(K[G]\)-module that coincides with \(M\) as a \(K\)-vector space and the action of \(G\) is given by \(m\cdot g=g^{-1}m\). In the same way, given a right \(K[G]\) module \(M\) we can define the left \(K[G]\)-module \(M^{*}\). **Lemma 5.2**.: _Let \(G\) be a group. The following properties hold._ 1. _Let_ \(W\leq G\)_. Then we have that_ \(\left(I_{K[G]}/\left({}^{G}I_{K[W]}\right)\right)^{*}\cong I_{K[G]}/I_{K[W]}^{G}\)_._ 2. _Given left_ \(K[G]\)_-modules_ \(M\) _and_ \(N\) _and a right_ \(K[G]\)_-module_ \(L\)_, we have that for every_ \(k\in\mathbb{N}\)_,_ \[\operatorname{Tor}_{k}^{K[G]}\left(L,M\otimes_{K}N\right)\cong \operatorname{Tor}_{k}^{K[G]}\left(N^{*},M\otimes_{K}L^{*}\right).\] Proof.: (1) is clear and the proof of (2) is the same as of [1, Proposition III.2.2]. ### Proof of Theorems 1.7, 1.6 and 1.1(2) In this subsection we construct a flat \(K[G]\)-module for the fundamental group of a reducible two-complex without proper powers. This result implies Theorem 1.7. **Theorem 5.3**.: _Let \(K\) be a field and \(G=\pi_{1}(X)\), where \(X\) is a reducible two-complex without proper powers. Assume that \(\mathcal{D}_{K[G]}\) exists. Then the left \(K[G]\)-module \(\mathcal{D}_{K[G]}\otimes_{K}I_{K[G]}\) is flat._ Proof.: We prove the theorem by induction on the number of two-cells in \(X\). If there are no two-cells, then the module \(I_{K[G]}\) is free, and so \(\mathcal{D}_{K[G]}\otimes_{K}I_{K[G]}\) is free as well. Now suppose that \(G=G_{1}*G_{2}/\langle\!\langle w\rangle\!\rangle\), where \(G_{1}\) and \(G_{2}\) are in \(\mathcal{R}\) and \(w\) is either \(1\) or \(w\) is not conjugate into \(G_{1}\) or \(G_{2}\) within \(G_{1}*G_{2}\) and is not equal to a proper power in \(G_{1}*G_{2}\). We have that by the inductive hypothesis \(\mathcal{D}_{K[G_{i}]}\otimes_{K}I_{K[G_{i}]}\) is flat as a left \(K[G_{i}]\)-module. 
Observe also that by [10, Theorem 4.3] we can view \(G_{i}\) (\(i=1,2\)) as subgroups of \(G\). **Claim 5.4**.: _For \(i=1,2\), \(\mathcal{D}_{K[G]}\otimes_{K}\left({}^{G}I_{K[G_{i}]}\right)\) is flat as a left \(K[G]\)-module._ Proof.: Observe that since the division subalgebra of \(\mathcal{D}_{K[G]}\) generated by \(K[G_{i}]\) is isomorphic (as a \(K[G_{i}]\)-ring) to \(\mathcal{D}_{K[G_{i}]}\), \(\mathcal{D}_{K[G]}\otimes_{K}I_{K[G_{i}]}\) is also flat as a left \(K[G_{i}]\)-module. Let \(L\) be a right \(K[G]\)-module. Then by Lemma 5.2, Lemma 2.1 and Shapiro's lemma, \[\operatorname{Tor}_{1}^{K[G]}\left(L,\mathcal{D}_{K[G]}\otimes_{K}\left({}^{ G}I_{K[G_{i}]}\right)\right)\cong\operatorname{Tor}_{1}^{K[G]}\left(I_{K[G_{i}]}^{G}, \mathcal{D}_{K[G]}\otimes_{K}L^{*}\right)\cong\\ \operatorname{Tor}_{1}^{K[G_{i}]}\left(I_{K[G_{i}]},\mathcal{D}_{ K[G]}\otimes_{K}L^{*}\right)\cong\operatorname{Tor}_{1}^{K[G_{i}]}\left(L, \mathcal{D}_{K[G]}\otimes_{K}I_{K[G_{i}]}\right)=\{0\}.\] Therefore, \(\mathcal{D}_{K[G]}\otimes_{K}\left({}^{G}I_{K[G_{i}]}\right)\) is flat. From the previous claim the theorem follows in the case \(w=1\). Thus, from now on we assume that \(w\neq 1\). We can write \[w-1=\alpha_{1}+\alpha_{2}\text{ with }\alpha_{1}\in I_{K[G_{1}]}^{G_{1}*G_{2}} \text{ and }\alpha_{2}\in I_{K[G_{2}]}^{G_{1}*G_{2}}.\] **Claim 5.5**.: _For \(i=1,2\) the image \(\overline{\alpha_{i}}\) of \(\alpha_{i}\) in \(K[G]\) is not trivial._ Proof.: Changing \(w\) by its conjugate if needed, we can write \(w=a_{1}b_{1}\dots a_{n}b_{n}\), with \(1\neq a_{i}\in G_{1}\) and \(1\neq b_{i}\in G_{2}\). We have that \[\alpha_{1}=\sum_{i=1}^{n}\left(a_{i}-1\right)b_{i}a_{i+1}b_{i+1}\dots a_{n}b_{ n}\text{ and }\alpha_{2}=\sum_{i=1}^{n}\left(b_{i}-1\right)a_{i+1}b_{i+1}\dots a_{n}b_{n}.\] By [10, Corollary 3.4], we see that there are no prefixes \(u,v\) of \(w\) (as a word over \(G_{1}\) and \(G_{2}\)) such that \(u=v\) in \(G\). This implies the claim. We will write \(\mathcal{D}\) instead of \(\mathcal{D}_{K[G]}\) for ease of notation. Let \(L\) be a right \(K[G]\)-module. Then by Lemma 5.2, \[\operatorname{Tor}_{1}^{K[G]}\left(L,\mathcal{D}\otimes_{K}I_{K[G]}\right) \cong\operatorname{Tor}_{1}^{K[G]}\left(I_{K[G]},\mathcal{D}\otimes_{K}L^{*} \right).\] We have the following exact sequence of right \(K[G]\)-modules: \[0\to K[G]\xrightarrow{\gamma}I_{K[G_{1}]}^{G}\oplus I_{K[G_{2}]}^{G}\to I_{K[G ]}\to 0,\] where \(\gamma\left(a\right)=(\overline{\alpha_{1}}a,\overline{\alpha_{2}}a)\). Thus, we obtain the exact sequence \[\operatorname{Tor}_{1}^{K[G]}\left(I_{K[G_{1}]}^{G}\oplus I_{K[G_{ 2}]}^{G},\mathcal{D}\otimes_{K}L^{*}\right)\to\operatorname{Tor}_{1}^{K[G]} \left(I_{K[G]},\mathcal{D}\otimes_{K}L^{*}\right)\to\\ \mathcal{D}\otimes_{K}L^{*}\xrightarrow{\widetilde{\gamma}}\left( \left(I_{K[G_{1}]}^{G}\right)\otimes_{K[G]}\mathcal{D}\otimes_{K}L^{*}\right) \oplus\left(\left(I_{K[G_{2}]}^{G}\right)\otimes_{K[G]}\mathcal{D}\otimes_{K}L ^{*}\right),\] where \(\widetilde{\gamma}\left(m\right)=(\overline{\alpha_{1}}\otimes m,\overline{ \alpha_{2}}\otimes m)\). Observe that the composition \[m\mapsto\overline{\alpha_{1}}\otimes m\mapsto\overline{\alpha_{1}}m\] is the multiplication by \(\overline{\alpha_{1}}\). By Claim 5.5, \(\overline{\alpha_{1}}\neq 0\) and by Lemma 5.1, \(\mathcal{D}\otimes_{K}L^{*}\) is torsion-free. Thus, \(\ker\widetilde{\gamma}=\{0\}\). 
On the other hand, by Claim 5.4, \[\operatorname{Tor}_{1}^{K[G]}\left(I_{K[G_{1}]}^{G}\oplus I_{K[G_{2}]}^{G},\mathcal{D}\otimes_{K}L^{*}\right)\] is trivial. Thus, \[\operatorname{Tor}_{1}^{K[G]}\left(L,\mathcal{D}\otimes_{K}I_{K[G]}\right)\cong\operatorname{Tor}_{1}^{K[G]}\left(I_{K[G]},\mathcal{D}\otimes_{K}L^{*}\right)=\{0\},\] and so \(\mathcal{D}\otimes_{K}I_{K[G]}\) is flat. Theorem 1.7 now follows directly from the previous theorem. Proof of Theorem 1.7.: By symmetry we also have that the right \(K[G]\)-module \[\mathcal{D}_{K[G]}\otimes_{K}I_{K[G]}\] is flat. Therefore, using Lemma 5.2, we obtain that for any left \(K[G]\)-module \(L\), \[\operatorname{Tor}_{2}^{K[G]}\left(\mathcal{D}_{K[G]},L\right)\cong\operatorname{Tor}_{2}^{K[G]}\left(\mathcal{D}_{K[G]}\otimes_{K}L^{*},K\right)\cong\\ \operatorname{Tor}_{1}^{K[G]}\left(\mathcal{D}_{K[G]}\otimes_{K}L^{*},I_{K[G]}\right)\cong\operatorname{Tor}_{1}^{K[G]}\left(\mathcal{D}_{K[G]}\otimes_{K}I_{K[G]},L\right)=\{0\}.\] Hence, the right \(K[G]\)-module \(\mathcal{D}_{K[G]}\) is of weak dimension at most \(1\). Proof of Theorem 1.6.: By Theorem 2.11 and Corollary 2.8, we see that \(X\) is an aspherical two-complex and \(\pi_{1}(X)\) is locally indicable (see also [10]). Thus, \(\pi_{1}(X)\) has cohomological dimension at most two, and so, by Proposition 2.2, \(K[G]\) has global dimension at most two. Since \(\pi_{1}(X)\) is locally indicable, we see that \(\mathcal{D}_{K[G]}\) exists by [13]. By Theorem 1.7, \(\mathcal{D}_{K[G]}\) has weak dimension at most one. Now Corollary 3.2 tells us that \(K[G]\) is coherent. Proof of Theorem 1.1(2).: In the case that \(G\) is torsion-free the result follows from Theorem 1.6. If \(G\) has torsion, by work of Kielak and the second author [12], \(G\) is virtually free-by-cyclic. The fact that \(K[G]\) is coherent follows from [11, Proposition 2.9] or, alternatively, we could use Theorem 3.3.

### Flat modules associated with Magnus subgroups

Let \(X=(\Gamma,\lambda)\) be a finite bireducible two-complex. Denote by \(\overline{X}=(\Gamma,\overline{\lambda})\) the bireducible two-complex such that \(\overline{\lambda}\colon\mathbb{S}\looparrow\Gamma\) is primitive and there exists a cover \(\mu\colon\mathbb{S}\looparrow\mathbb{S}\) with \(\lambda=\overline{\lambda}\circ\mu\). By Lemma 2.13, \(\overline{X}\) is without proper powers. As we have already mentioned, Howie proved in [10] that \(\overline{G}=\pi_{1}(\overline{X})\) is locally indicable. The group \(\overline{G}\) is also the maximal torsion-free quotient of \(G\). **Theorem 5.6**.: _Let \(G=\pi_{1}(X)\), where \(X\) is a bireducible two-complex, \(\overline{G}=\pi_{1}(\overline{X})\) and \(H=\pi_{1}(\Lambda)\) a Magnus subgroup of \(G\), where \(\Lambda\subset X\) is small. Let \(K\) be a field whose characteristic is coprime to the orders of the finite-order elements of \(G\) and assume that \(\mathcal{D}_{K[\overline{G}]}\) exists. Then the left \(K[G]\)-module_ \[\mathcal{D}_{K[\overline{G}]}\otimes\left(I_{K[G]}/\left({}^{G}I_{K[H]}\right)\right)\] _is flat._ Proof.: We prove the theorem by induction on the number of two-cells of \(X\). The case where there are no two-cells is clear. Assume now that \(X\) has at least one two-cell. Let \(Z\subset X\) be a reduction such that \(\Lambda\subset Z\) and let \(\alpha\) be the two-cell in \(X-Z\). We can attach an edge \(e\) to \(X\) so that \(Z^{\prime}=Z\cup e\) is connected. Put \(X^{\prime}=X\cup e\). Then \(G^{\prime}=\pi_{1}(X^{\prime})\cong G*\mathbb{Z}\).
**Claim 5.7**.: _If the theorem holds for the pair \(H\leq G^{\prime}\), then it holds also for the pair \(H\leq G\)._ Proof.: We assume that the left \(K[G^{\prime}]\)-module \(M=\mathcal{D}_{K[\overline{G^{\prime}}]}\otimes\left(I_{K[G^{\prime}]}/\left({}^{G^{\prime}}I_{K[H]}\right)\right)\) is flat. Let \(L\) be a right \(K[G]\)-module. Then by Lemma 5.2, Lemma 2.1 and Shapiro's lemma, \[\operatorname{Tor}_{1}^{K[G]}\left(L,\mathcal{D}_{K[\overline{G^{\prime}}]}\otimes_{K}\left(I_{K[G]}/{}^{G}I_{K[H]}\right)\right)\cong\\ \operatorname{Tor}_{1}^{K[G]}\left(I_{K[G]}/I_{K[H]}^{G},\mathcal{D}_{K[\overline{G^{\prime}}]}\otimes_{K}L^{*}\right)\cong\operatorname{Tor}_{1}^{K[G^{\prime}]}\left(I_{K[G]}^{G^{\prime}}/I_{K[H]}^{G^{\prime}},\mathcal{D}_{K[\overline{G^{\prime}}]}\otimes_{K}L^{*}\right)\cong\\ \operatorname{Tor}_{1}^{K[G^{\prime}]}\left(L,\mathcal{D}_{K[\overline{G^{\prime}}]}\otimes_{K}\left({}^{G^{\prime}}I_{K[G]}/\left({}^{G^{\prime}}I_{K[H]}\right)\right)\right).\] The \(K[G^{\prime}]\)-module \(\left({}^{G^{\prime}}I_{K[G]}\right)/\left({}^{G^{\prime}}I_{K[H]}\right)\) is a direct summand of \(I_{K[G^{\prime}]}/\left({}^{G^{\prime}}I_{K[H]}\right)\). Hence, \(\mathcal{D}_{K[\overline{G^{\prime}}]}\otimes_{K}\left({}^{G^{\prime}}I_{K[G]}/\left({}^{G^{\prime}}I_{K[H]}\right)\right)\) is a direct summand of \(M\), which is flat. We conclude that \(\operatorname{Tor}_{1}^{K[G]}\left(L,\mathcal{D}_{K[\overline{G^{\prime}}]}\otimes_{K}\left(I_{K[G]}/{}^{G}I_{K[H]}\right)\right)=\{0\}.\) Therefore the left \(K[G]\)-module \(\mathcal{D}_{K[\overline{G^{\prime}}]}\otimes_{K}\left(I_{K[G]}/{}^{G}I_{K[H]}\right)\) is flat. On the other hand, the left \(K[G]\)-module \(\mathcal{D}_{K[\overline{G}]}\) is a direct summand of \(\mathcal{D}_{K[\overline{G^{\prime}}]}\). Thus, \(\mathcal{D}_{K[\overline{G}]}\otimes\left(I_{K[G]}/\left({}^{G}I_{K[H]}\right)\right)\) is also flat. Therefore, we can assume that \(Z\) is connected. Hence, we are in the following situation: 1. \(A=\pi_{1}(Z)\). 2. \(G=A*\langle t\rangle/\langle\!\langle w\rangle\!\rangle\), where \(w=u^{l}\) is a word over the free product \(A*\langle t\rangle\). 3. \(H\) is a Magnus subgroup of \(A\). By the induction hypothesis, \(\mathcal{D}_{K[\overline{A}]}\otimes_{K}\left(I_{K[A]}/\left({}^{A}I_{K[H]}\right)\right)\) is flat as a left \(K[A]\)-module. Observe also that by [11, Theorem 4.3] we can view \(\overline{A}\) as a subgroup of \(\overline{G}\). **Claim 5.8**.: _We have that \(\mathcal{D}_{K[\overline{G}]}\otimes_{K}\left({}^{G}I_{K[A]}/{}^{G}I_{K[H]}\right)\) is flat as a left \(K[G]\)-module._ Proof.: Observe that since the division subalgebra of \(\mathcal{D}_{K[\overline{G}]}\) generated by \(K[\overline{A}]\) is isomorphic (as a \(K[\overline{A}]\)-ring) to \(\mathcal{D}_{K[\overline{A}]}\), \(\mathcal{D}_{K[\overline{G}]}\otimes_{K}\left(I_{K[A]}/\left({}^{A}I_{K[H]}\right)\right)\) is also flat as a left \(K[A]\)-module. Let \(L\) be a right \(K[G]\)-module.
Then by Lemma 5.2, Lemma 2.1 and Shapiro's lemma, \[\operatorname{Tor}_{1}^{K[G]}\left(L,\mathcal{D}_{K[\overline{G}]}\otimes_{K}\left({}^{G}I_{K[A]}/{}^{G}I_{K[H]}\right)\right)\cong\\ \operatorname{Tor}_{1}^{K[G]}\left(I_{K[A]}^{G}/I_{K[H]}^{G},\mathcal{D}_{K[\overline{G}]}\otimes_{K}L^{*}\right)\cong\operatorname{Tor}_{1}^{K[A]}\left(I_{K[A]}/I_{K[H]}^{A},\mathcal{D}_{K[\overline{G}]}\otimes_{K}L^{*}\right)\cong\\ \operatorname{Tor}_{1}^{K[A]}\left(L,\mathcal{D}_{K[\overline{G}]}\otimes_{K}\left(I_{K[A]}/{}^{A}I_{K[H]}\right)\right)=0.\] Therefore, \(\mathcal{D}_{K[\overline{G}]}\otimes_{K}\left({}^{G}I_{K[A]}/{}^{G}I_{K[H]}\right)\) is flat. We can write \[u-1=\alpha_{1}+(t-1)\beta\text{ with }\alpha_{1}\in I_{K[A]}^{A*\langle t\rangle}\text{ and }\beta\in K[A*\langle t\rangle].\] **Claim 5.9**.: _Write \(\beta=\sum_{i=1}^{n}c_{i}\cdot f_{i}\in K[G]\;\;(0\neq c_{i}\in K,\ f_{i}\in G)\). Then the images of the \(f_{i}\) in \(\overline{G}\) are pairwise different._ Proof.: Denote by \(X_{\alpha}\subset X\) the smallest subcomplex containing \(\alpha\). By Lemma 2.12, \(X_{\alpha}\) is a \(\pi_{1}\)-injective subcomplex of \(X\). Observe also that \(\overline{X_{\alpha}}\) is a \(\pi_{1}\)-injective subcomplex of \(\overline{X}\). Thus, in this claim we can assume that \(X=X_{\alpha}\). Hence \(\pi_{1}(X)=\langle t,x_{1},\dots,x_{d}|u^{l}\rangle\), where \(u\) is not a proper power in the free group \(F\) generated freely by \(\{t,x_{1},\dots,x_{d}\}\). Put \(x_{0}=t\) and let \(u=x_{i_{1}}^{\epsilon_{1}}\dots x_{i_{k}}^{\epsilon_{k}}\) be the reduced form of \(u\) (here \(\epsilon_{i}=\pm 1\)). We can also assume that \(u\) is cyclically reduced. Since \(u\) is not conjugate to a word in \(\langle x_{1},\dots,x_{d}\rangle\), we have that \(x_{0}\) appears in the expression of \(u\). Write \[u-1=(x_{0}-1)\alpha_{0}+\dots+(x_{d}-1)\alpha_{d},\text{ with }\alpha_{i}\in K[F].\] The support of \(\alpha_{0}\) consists of words \(x_{i_{m}}^{\epsilon_{m}}\dots x_{i_{k}}^{\epsilon_{k}}\), where \[1\leq m\leq k+1\text{ and }x_{i_{m}}^{\epsilon_{m}}=x_{0}^{-1}\text{ or }x_{i_{m-1}}^{\epsilon_{m-1}}=x_{0}.\] Observe that, since \(u\) is reduced, both cases cannot occur simultaneously, and since \(u\) is cyclically reduced, \(1\) and \(u\) cannot both be in the support of \(\alpha_{0}\). By [10] the image in \(\overline{G}\cong\langle x_{0},x_{1},\dots,x_{d}|u\rangle\) of every proper non-trivial subword of \(u\) is non-trivial. This implies the claim. **Claim 5.10**.: _We have the following exact sequence of right \(K[G]\)-modules:_ \[0\to K[G]/(u-1)K[G]\xrightarrow{\gamma}I_{K[A]}^{G}\oplus K[G]\to I_{K[G]}\to 0,\] _where \(\gamma=(\gamma_{1},\gamma_{2})\) and \(\gamma_{2}(x)=\beta\cdot\frac{u^{l}-1}{u-1}\cdot x\)._ Proof.: This is well-known when \(X=X_{\alpha}\). In the general case we have that \[X=Z\cup_{\Lambda}X_{\alpha}\] where \(\Lambda=Z\cap X_{\alpha}\). Put \(T=\pi_{1}(\Lambda)\) and \(S=\pi_{1}(X_{\alpha})\). This leads to the exact sequences \[0\to I_{K[T]}^{G}\to I_{K[A]}^{G}\oplus I_{K[S]}^{G}\to I_{K[G]}\to 0\text{ and }\\ 0\to K[G]/(u-1)K[G]\xrightarrow{\gamma}I_{K[T]}^{G}\oplus K[G]\to I_{K[S]}^{G}\to 0,\] where \(\gamma=(\gamma_{1},\gamma_{2})\) and \(\gamma_{2}(a)=\beta\cdot\frac{u^{l}-1}{u-1}\cdot a\). Combining these two exact sequences we obtain the claim. We will write \(\mathcal{D}\) instead of \(\mathcal{D}_{K[\overline{G}]}\). Let \(L\) be a right \(K[G]\)-module.
Then by Lemma 5.2, \[\operatorname{Tor}_{1}^{K[G]}\left(L,\mathcal{D}\otimes_{K}\left(I_{K[G]}/\left({}^{G}I_{K[H]}\right)\right)\right)\cong\operatorname{Tor}_{1}^{K[G]}\left(I_{K[G]}/\left(I_{K[H]}^{G}\right),\mathcal{D}\otimes_{K}L^{*}\right).\] By Claim 5.10, we have the following exact sequence of right \(K[G]\)-modules: \[\ker\overline{\gamma}\to K[G]/(u-1)K[G]\xrightarrow{\overline{\gamma}}\left(I_{K[A]}^{G}/\left(I_{K[H]}^{G}\right)\right)\oplus K[G]\to I_{K[G]}/\left(I_{K[H]}^{G}\right)\to 0,\] where \(\overline{\gamma}=(\overline{\gamma}_{1},\gamma_{2})\) and \(\gamma_{2}(a)=\beta\cdot\frac{u^{l}-1}{u-1}\cdot a\). Since \(\gamma_{2}\) is injective, \(\ker\overline{\gamma}=0\). Thus, we obtain the exact sequence \[\operatorname{Tor}_{1}^{K[G]}\left(\left(I_{K[A]}^{G}/\left(I_{K[H]}^{G}\right)\right)\oplus K[G]\;,\mathcal{D}\otimes_{K}L^{*}\right)\to\\ \operatorname{Tor}_{1}^{K[G]}\left(I_{K[G]}/\left(I_{K[H]}^{G}\right),\mathcal{D}\otimes_{K}L^{*}\right)\to\\ \mathcal{D}\otimes_{K}L^{*}/(u-1)\left(\mathcal{D}\otimes_{K}L^{*}\right)\xrightarrow{\widetilde{\gamma}}\left(I_{K[A]}^{G}/\left(I_{K[H]}^{G}\right)\otimes_{K[G]}\mathcal{D}\otimes_{K}L^{*}\right)\oplus\left(\mathcal{D}\otimes_{K}L^{*}\right),\] where \(\widetilde{\gamma}=(\widetilde{\gamma}_{1},\widetilde{\gamma}_{2})\) with \(\widetilde{\gamma}_{2}(m)=\beta\cdot\frac{u^{l}-1}{u-1}\cdot m\). By Claim 5.9 and by Lemma 5.1, if \(m\in\mathcal{D}\otimes_{K}L^{*}\) and \(\beta\cdot m=0\), then \(m=0\). Also, since the characteristic of \(K\) is coprime to \(l\), if \(\frac{u^{l}-1}{u-1}\cdot m=0\), we have that \(m\in(u-1)\left(\mathcal{D}\otimes_{K}L^{*}\right)\). Thus, \(\ker\widetilde{\gamma}=\{0\}\). On the other hand by Claim 5.8, \[\operatorname{Tor}_{1}^{K[G]}\left(\left(I_{K[A]}^{G}/\left(I_{K[H]}^{G}\right)\right)\oplus K[G]\;,\mathcal{D}\otimes_{K}L^{*}\right)\] is trivial. Thus, \[\operatorname{Tor}_{1}^{K[G]}\left(I_{K[G]}/\left(I_{K[H]}^{G}\right),\mathcal{D}\otimes_{K}L^{*}\right)=\{0\},\] and so \(\mathcal{D}_{K[\overline{G}]}\otimes\left(I_{K[G]}/\left({}^{G}I_{K[H]}\right)\right)\) is flat.

### Flat modules for free group algebras

Observe that if a ring \(T\) is a quotient of a ring \(S\) and \(M\) is a flat left \(S\)-module, then \(T\otimes_{S}M\) is a flat left \(T\)-module. In the following theorem we show that in the case where the presentation complex of \(G=\langle X|R\rangle\) is reducible without proper powers, the flat left \(K[G]\)-module constructed in Theorem 5.3 can be lifted to a flat left \(K[F]\)-module. **Theorem 5.11**.: _Let \(K\) be a field, \(F\) a free group, \(W\) a strictly reducible subgroup of \(F\) and put \(G=F/\langle\!\langle W\rangle\!\rangle\). Assume that \(\mathcal{D}_{K[G]}\) exists. Then the left \(K[F]\)-module \(\mathcal{D}_{K[G]}\otimes_{K}\left(I_{K[F]}/\left({}^{F}I_{K[W]}\right)\right)\) is flat._ _Remark_.: Notice that \[K[G]\otimes_{K[F]}\left(\mathcal{D}_{K[G]}\otimes_{K}\left(I_{K[F]}/\left({}^{F}I_{K[W]}\right)\right)\right)\cong\mathcal{D}_{K[G]}\otimes_{K}I_{K[G]}\] as left \(K[G]\)-modules. Proof.: We prove the theorem by induction on the rank of \(W\). If \(W=\{1\}\), then the module \(I_{K[F]}/I_{K[W]}^{F}\cong I_{K[F]}\) is free, and so \(\mathcal{D}_{K[F]}\otimes_{K}I_{K[F]}\) is free as well.
Now suppose that \(F\) is generated by \(X=X_{1}\sqcup\{x\}\), \(W\) is generated by \(R=R_{1}\sqcup\{w\}\), the complex associated with the presentation \(G_{1}=\langle X_{1}|R_{1}\rangle\) is reducible without proper powers and \(w\) is either \(1\) or the image \(\overline{w}\) of \(w\) within \(G_{1}*\langle x\rangle\) is not conjugate into \(G_{1}\) or \(\langle x\rangle\) and is not equal to a proper power in \(G_{1}*\langle x\rangle\). We leave the case \(w=1\) to the reader and assume that \(w\neq 1\). We put \(F_{1}=\langle X_{1}\rangle\), \(F_{2}=\langle x\rangle\) and \(W_{1}=\langle R_{1}\rangle\). By the inductive hypothesis, \(\mathcal{D}_{K[G_{1}]}\otimes_{K}\left(I_{K[F_{1}]}/\left({}^{F_{1}}I_{K[W_{1}]}\right)\right)\) is flat as a left \(K[F_{1}]\)-module. **Claim 5.12**.: _The left \(K[F]\)-module \(\mathcal{D}_{K[G]}\otimes_{K}\left(\left({}^{F}I_{K[F_{1}]}\right)/\left({}^{F}I_{K[W_{1}]}\right)\right)\) is flat._ Proof.: It is proved in the same way as Claim 5.4. Since the group \(F\) is the free product of \(F_{1}\) and \(F_{2}\), we can write \[w-1=\alpha_{1}+\ \beta\left(x-1\right)\text{ with }\alpha_{1}\in{}^{F}I_{K[F_{1}]}\text{ and }\beta\in K[F].\] **Claim 5.13**.: _Write \(\beta=\sum_{i=1}^{n}c_{i}\cdot f_{i}\in K[F]\ \left(0\neq c_{i}\in K,\ f_{i}\in F\right)\). Then the elements \(g_{i}=f_{i}\langle\!\langle W\rangle\!\rangle\in G\) are pairwise different._ Proof.: It is proved in the same way as Claim 5.5. We will write \(\mathcal{D}\) instead of \(\mathcal{D}_{K[G]}\). Let \(L\) be a right \(K[F]\)-module. Then by Lemma 5.2, \[\operatorname{Tor}_{1}^{K[F]}\left(L,\mathcal{D}\otimes_{K}\left(I_{K[F]}/\left({}^{F}I_{K[W]}\right)\right)\right)\cong\operatorname{Tor}_{1}^{K[F]}\left(I_{K[F]}/I_{K[W]}^{F},\mathcal{D}\otimes_{K}L^{*}\right).\] We have the following exact sequence of right \(K[F]\)-modules: \[0\to K[F]\xrightarrow{\gamma}\left(I_{K[F_{1}]}^{F}\right)/\left(I_{K[W_{1}]}^{F}\right)\oplus K[F]\to I_{K[F]}/\left(I_{K[W]}^{F}\right)\to 0,\] where \(\gamma(a)=\left(\alpha_{1}a+I_{K[W_{1}]}^{F},\beta a\right)\). Thus, we obtain the exact sequence \[\operatorname{Tor}_{1}^{K[F]}\left(\left(I_{K[F_{1}]}^{F}\right)/\left(I_{K[W_{1}]}^{F}\right)\oplus K[F],\mathcal{D}\otimes_{K}L^{*}\right)\to\\ \operatorname{Tor}_{1}^{K[F]}\left(I_{K[F]}/\left(I_{K[W]}^{F}\right),\mathcal{D}\otimes_{K}L^{*}\right)\to\mathcal{D}\otimes_{K}L^{*}\xrightarrow{\widetilde{\gamma}}\\ \left(\left(\left(I_{K[F_{1}]}^{F}\right)/\left(I_{K[W_{1}]}^{F}\right)\right)\otimes_{K[F]}\left(\mathcal{D}\otimes_{K}L^{*}\right)\right)\oplus\left(\mathcal{D}\otimes_{K}L^{*}\right),\] where \(\widetilde{\gamma}=(\widetilde{\gamma}_{1},\widetilde{\gamma}_{2})\) and \(\widetilde{\gamma}_{2}\left(m\right)=\beta m\). By Claim 5.13, \(\beta\) satisfies the condition of Lemma 5.1. Therefore, \(\ker\widetilde{\gamma}=\{0\}\). On the other hand, \(\operatorname{Tor}_{1}^{K[F]}\left(\left(I_{K[F_{1}]}^{F}\right)/\left(I_{K[W_{1}]}^{F}\right)\oplus K[F],\mathcal{D}\otimes_{K}L^{*}\right)\) is trivial by Claim 5.12. Thus, the left \(K[F]\)-module \(\mathcal{D}_{K[G]}\otimes_{K}\left(I_{K[F]}/\left({}^{F}I_{K[W]}\right)\right)\) is flat.

## 6. Applications

The remainder of the article will be dedicated to proving a few applications.

### Mapping tori of free groups

The first application we mention is a new proof of the coherence of an ascending HNN-extension of a free group. **Corollary 6.1** ([10]).: _Let \(G\) be an ascending HNN-extension of a free group.
Then \(G\) is coherent._ Proof.: By Theorem 3.3, \(\mathbb{Q}[G]\) is coherent. Hence all finitely generated subgroups of \(G\) are of type \(\operatorname{FP}_{2}(\mathbb{Q})\). Now the result follows from Theorem 1.3.

### Right angled Artin groups and Coxeter groups

Let \(\Gamma\) be a simplicial graph. The **right angled Artin group** (RAAG) \(A(\Gamma)\) is the group with presentation \[A(\Gamma)=\langle V(\Gamma)\mid[v,w]=1,\text{ if }(v,w)\in E(\Gamma)\rangle.\] The classification of coherent right angled Artin groups was carried out by Droms [10]. Despite the simple description of the presentation of a right angled Artin group, the subgroup structure of these groups is extremely rich. Indeed, the first examples of groups of type \(\operatorname{FP}(\mathbb{Z})\) that are not finitely presented, due to Bestvina-Brady, are subgroups of RAAGs [1]. Although the Bestvina-Brady groups are homologically finitely presented, they must be homologically incoherent. **Proposition 6.2**.: _Let \(k\) be a ring and let \(G\) be a subgroup of a right angled Artin group. If every finitely generated subgroup of \(G\) is of type \(\operatorname{FP}_{2}(k)\), then \(G\) is coherent._ Proof.: It suffices to prove the result in the case that \(G\) is finitely generated. Hence we may assume that \(G\leq A(\Gamma)\) where \(\Gamma\) is a finite simplicial graph. If \(v\in V(\Gamma)\) is a vertex, denote by \(\Lambda_{v}\subset\Gamma\) the subgraph on the set of vertices adjacent to \(v\) and by \(\Gamma_{v}\subset\Gamma\) the subgraph on the vertices \(V(\Gamma)-\{v\}\). It is clear that \(A(\Gamma)\cong A(\Gamma_{v})*_{\psi}\) where \(\psi\) is the identity isomorphism on \(A(\Lambda_{v})\). By induction, we see that there is a sequence of subgraphs \(\Gamma_{0}\subset\ldots\subset\Gamma_{n}=\Gamma\) such that \(A(\Gamma_{0})\cong\mathbb{Z}\) and such that \(A(\Gamma_{i})\) splits as an HNN-extension over \(A(\Gamma_{i-1})\) for all \(i\geqslant 1\). Now the result follows from Theorem 4.5. If \(\Gamma\) is a simplicial graph and \(m\colon E(\Gamma)\to\mathbb{N}_{\geqslant 2}\) is a map, the **Coxeter group** \(C(\Gamma)\) is the group given by the presentation \[C(\Gamma)=\langle V(\Gamma)\mid v_{i}^{2},(v_{i}v_{j})^{m(e)},e=(v_{i},v_{j})\in E(\Gamma)\rangle.\] A Coxeter subgroup of \(C(\Gamma)\) is a subgroup generated by a subset of the vertex generators of \(C(\Gamma)\). This subgroup will be isomorphic to the Coxeter group on the full subgraph containing these vertices. If \(G=C(\Gamma)\), following [20, Definition 11.25], define: \[\overline{\chi}(G)=1-|V(\Gamma)|+\sum_{e\in E(\Gamma)}\frac{1}{m(e)}.\] Many Coxeter groups were shown to be incoherent in [11]. We may apply our techniques to establish coherence of a large subclass of Coxeter groups. This is one direction of [11, Conjecture 4.6] (see also [20, Conjectures 9.29 & 11.29]). Proof of Theorem 1.4.: By [20, Theorems 11.12 & 11.27], there is a two-complex \(X\) with non-positive immersions such that \(\pi_{1}(X)\) is a finite index subgroup of \(G\). Hence, \(G\) is homologically coherent by Theorem 1.2. By work of Haglund-Wise [11, Corollary 1.3], \(G\) has a finite index subgroup that is a subgroup of a right angled Artin group. Applying Proposition 6.2 yields the result.
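To illustrate the definitions above with a small example (chosen here only for illustration): if \(\Gamma\) is the graph with vertices \(a,b,c\) and edges \((a,b)\) and \((b,c)\), then \[A(\Gamma)=\langle a,b,c\mid[a,b]=[b,c]=1\rangle\cong F_{2}\times\mathbb{Z},\] where \(F_{2}=\langle a,c\rangle\); more generally, a graph with no edges gives a free group and a complete graph gives a free abelian group. For the Coxeter group on the same graph with \(m(e)=2\) on both edges, the quantity defined above is \[\overline{\chi}(C(\Gamma))=1-3+\tfrac{1}{2}+\tfrac{1}{2}=-1.\]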
### Groups with staggered presentations

The aim of this section is to prove Theorem 1.5. Since the presentation complex of a finite staggered presentation is a bireducible complex, Theorem 1.5 is a corollary of the following more general theorem. **Theorem 6.3**.: _If \(X\) is a finite bireducible complex, then \(\pi_{1}(X)\) is coherent._ A finitely generated subgroup \(H\leq G\) has the **finitely generated intersection property (f.g.i.p.)** if \(H\cap K\) is finitely generated for all finitely generated subgroups \(K\leq G\). A key ingredient in the proof of Theorem 6.3 is the following theorem, due to Karrass-Solitar [12, 13]. See also [21, Theorem 5.4]. **Theorem 6.4**.: _If \(G\) splits as a graph of groups whose vertex groups are coherent and whose edge groups have f.g.i.p., then \(G\) is coherent._ In order to apply this theorem to bireducible two-complexes, we need to show that Magnus subgroups have f.g.i.p. **Theorem 6.5**.: _Magnus subgroups of fundamental groups of finite bireducible complexes have f.g.i.p._ We will give two proofs of this theorem. The first proof uses Theorem 5.6 and the second uses the fact that bireducible complexes have NTPI. _The first proof of Theorem 6.5._ Let \(H\) be a Magnus subgroup of the fundamental group of a bireducible two-complex \(X\). Let \(U\) be a finitely generated subgroup of \(G=\pi_{1}(X)\). As in Subsection 5.3 we put \(\overline{G}=\pi_{1}(\overline{X})\) and \(\mathcal{D}=\mathcal{D}_{\mathbb{Q}[\overline{G}]}\). Denote by \(N\) the kernel of the canonical map \(G\to\overline{G}\) and for any subgroup \(K\) of \(G\) let \[\delta_{K}=\left\{\begin{array}{ll}0&\text{if }K\leq N\\ 1&\text{if }K\nleq N\end{array}\right..\] Taking into account the exact sequence \[0\to\operatorname{Tor}_{1}^{\mathbb{Q}[G]}(\mathcal{D},\mathbb{Q}[G/K])\to\mathcal{D}\otimes_{\mathbb{Q}[K]}I_{\mathbb{Q}[K]}\to\mathcal{D}\otimes_{\mathbb{Q}[G]}\mathbb{Q}[G]\to\mathcal{D}\otimes_{\mathbb{Q}[G]}\mathbb{Q}[G/K]\to 0,\] we obtain that \[d(K)\geqslant\dim_{\mathcal{D}}\left(\mathcal{D}\otimes_{\mathbb{Q}[K]}I_{\mathbb{Q}[K]}\right)=\\ \dim_{\mathcal{D}}\operatorname{Tor}_{1}^{\mathbb{Q}[G]}(\mathcal{D},\mathbb{Q}[G/K])+1-\dim_{\mathcal{D}}\left(\mathcal{D}\otimes_{\mathbb{Q}[G]}\mathbb{Q}[G/K]\right)=\\ \dim_{\mathcal{D}}\operatorname{Tor}_{1}^{\mathbb{Q}[G]}(\mathcal{D},\mathbb{Q}[G/K])+\delta_{K},\] and we have the equality if \(K\) is free. Since \(H\) is free, we have \[d(H\cap U)=\dim_{\mathcal{D}}\operatorname{Tor}_{1}^{\mathbb{Q}[G]}(\mathcal{D},\mathbb{Q}[G/\left(H\cap U\right)])+\delta_{H\cap U}\] \[\leqslant\dim_{\mathcal{D}}\operatorname{Tor}_{1}^{\mathbb{Q}[G]}(\mathcal{D},\mathbb{Q}[G/H]\otimes_{\mathbb{Q}}\mathbb{Q}[G/U])+\delta_{H\cap U}.\] By Theorem 5.6, the latter quantity is finite, since \(U\) is finitely generated. Hence \(d(H\cap U)<\infty\), that is, \(H\cap U\) is finitely generated. _The second proof of Theorem 6.5._ Let \(M=\pi_{1}(\Lambda)\) be a Magnus subgroup of \(\pi_{1}(X)\), where \(\Lambda\subset X\) is small. We first observe that the double \(Y=X\cup_{\Lambda}X^{\prime}\) is bireducible, where \(X^{\prime}\) is an isomorphic copy of \(X\). Indeed let \(U\subset Y\) be a subcomplex which we may assume contains at least one two-cell in \(X\) and at least one two-cell in \(X^{\prime}\). By assumption, there exists a reduction of \((U\cap X)\cup\Lambda\) containing \(\Lambda\) and so there is a reducing edge in \((U\cap X)-\Lambda\). Similarly for \(U\cap X^{\prime}\). Hence \(Y\) is bireducible. If \(H\leq\pi_{1}(X)\) is a finitely generated subgroup, then we have \[G=H*_{H\cap M}H^{\prime}\leq\pi_{1}(Y)\] where \(H^{\prime}\) is an isomorphic copy of \(H\). Since \(G\) is finitely generated, we see that \(b_{1}(G)<\infty\). Applying the Mayer-Vietoris sequence due to Swan [14, Theorem 2.3] to the amalgamated free product decomposition of \(G\), we see that \(b_{1}(H\cap M)<\infty\) by Proposition 2.10. As \(M\) is free, \(H\cap M\) is free also and so must be finitely generated. We have proved that \(M\) has the finitely generated intersection property in \(\pi_{1}(X)\).
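We remark that the f.g.i.p. hypothesis in Theorem 6.4 is a genuine restriction, even for free subgroups; the following standard example is included only as an illustration. In \(F(a,b)\times\langle t\rangle\) consider the subgroups \(H=F(a,b)\) and \(K=\langle at,b\rangle\). Both are free of rank two, but \[H\cap K=\{\,w\in F(a,b)\colon\sigma_{a}(w)=0\,\}=\ker\left(\sigma_{a}\colon F(a,b)\to\mathbb{Z}\right),\] where \(\sigma_{a}\) denotes the exponent sum of \(a\), and this kernel is a free group of infinite rank. Hence \(H\) does not have f.g.i.p. in \(F(a,b)\times\langle t\rangle\).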
Proof of Theorem 6.3.: The proof is by induction on the number of two-cells in \(X\). The base case is Theorem 1.1(1). Now suppose that \(X\) contains at least two two-cells and assume the inductive hypothesis. Let \(Z\subset X\) be a reduction and let \(\alpha\) be the two-cell in \(X-Z\). Denote by \(X_{\alpha}\subset X\) the smallest subcomplex containing \(\alpha\). We have \[X=Z\cup_{\Lambda}X_{\alpha}\] where \(\Lambda=Z\cap X_{\alpha}\). Since \(X\) is bireducible, \(\Lambda\) is small. We may add edges to \(X\) and \(X_{\alpha}\) to ensure that \(Z\) and \(\Lambda\) are connected, as the resulting two-complex will have coherent fundamental group if and only if the original one does. Lemma 2.12 tells us that \(\pi_{1}(\Lambda)\), \(\pi_{1}(Z)\) and \(\pi_{1}(X_{\alpha})\) are subgroups of \(\pi_{1}(X)\). In particular, we have \[\pi_{1}(X)\cong\pi_{1}(Z)*_{\pi_{1}(\Lambda)}\pi_{1}(X_{\alpha}).\] In Theorem 6.5, we have proved that \(\pi_{1}(\Lambda)\) has the finitely generated intersection property in \(\pi_{1}(X)\). Since \(\pi_{1}(Z)\) and \(\pi_{1}(X_{\alpha})\) are coherent by induction, we apply Theorem 6.4 to obtain the result. Proof of Theorem 1.5.: Let \(G\) be a group with a staggered presentation. If \(G\) is finitely presented, the result follows from Theorem 6.3. Now suppose that \(G\) has an infinite staggered presentation \[\langle S,\ldots,x_{-1},x_{0},x_{1},\ldots|\,\ldots,r_{-1},r_{0},r_{1},\ldots\rangle\] where the ordering of the generators is given by their indexing and \(S\) is the set of unordered generators. Denote by \(m_{i}\) and \(M_{i}\) the smallest and largest integers such that \(r_{i}\) mentions \(x_{m_{i}}\) and \(x_{M_{i}}\) respectively. Let \(H_{i}=\langle S,x_{m_{i}},\ldots,x_{M_{i}}\mid r_{i}\rangle\) and \(A_{i}=F(S,x_{m_{i+1}},\ldots,x_{M_{i}})\). By the Freiheitssatz [13] we see that \[G\cong\ldots*_{A_{i-1}}H_{i}*_{A_{i}}H_{i+1}*_{A_{i+1}}\cdots.\] If \(H\leq G\) is a finitely generated subgroup, there exist integers \(i\leq j\) such that \[H\leq H_{i}*_{A_{i}}\ldots*_{A_{j-1}}H_{j}\leq G.\] Now Theorem 6.3 finishes the proof. We close this section with an example of a finitely generated group with a staggered presentation which is not virtually torsion-free, demonstrating that we could not simply appeal to Theorem 1.2 in the proof of Theorem 1.5. **Example 6.6**.: Consider the following group, known as the Baumslag-Gersten group: \[G=\langle a,t\mid[a^{t},a^{-1}]=a\rangle.\] Baumslag proved that every finite quotient of \(G\) is cyclic [1]. Hence, if \(G\to H\) is a homomorphism to a finite group, \(a\) is in the kernel as \(a\in[G,G]\). Now consider the following example of a group with a staggered presentation: \[K=\langle a,t\mid[a^{t},a^{-1}]=a\rangle*_{\langle a\rangle=\langle b\rangle}\langle b,c\mid[b,c]^{2}\rangle.\] Let \(\phi\colon K\to H\) be a homomorphism with \(H\) finite. By the above, we see that \(a\in\ker(\phi)\) and thus \(b\in\ker(\phi)\). But this implies that \(\phi([b,c])=1\) and so \(\ker(\phi)\) has elements of finite order. Since \(\phi\) was arbitrary, this shows that \(K\) is not virtually torsion-free.
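To illustrate the splittings provided by the Freiheitssatz in the proof of Theorem 1.5, consider for instance the finite staggered presentation (again included only as an illustration) \[G=\langle x_{0},x_{1},x_{2}\mid x_{0}x_{1}x_{0}^{-1}x_{1}^{-2},\;x_{1}x_{2}x_{1}^{-1}x_{2}^{-2}\rangle.\] Here \(H_{1}=\langle x_{0},x_{1}\mid x_{0}x_{1}x_{0}^{-1}x_{1}^{-2}\rangle\) and \(H_{2}=\langle x_{1},x_{2}\mid x_{1}x_{2}x_{1}^{-1}x_{2}^{-2}\rangle\) are both isomorphic to the Baumslag-Solitar group \(BS(1,2)\), the subgroup \(A_{1}=\langle x_{1}\rangle\) is infinite cyclic in each factor, and \[G\cong H_{1}*_{A_{1}}H_{2},\] a splitting of the same shape as the one used in the proof above.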
### An alternative proof of the rank one Hanna Neumann conjecture

Given two finitely generated subgroups \(U\) and \(W\) of a free group \(F\), the Friedman-Mineyev theorem [10, 11] (previously known as the Strengthened Hanna Neumann conjecture) states that \[\sum_{x\in W\setminus F/U}\overline{d}\left(xUx^{-1}\cap W\right)\leqslant\overline{d}\left(U\right)\overline{d}\left(W\right).\] This result says nothing about the intersections \(xUx^{-1}\cap W\) when \(W\) is cyclic, since in that case \(\overline{d}(W)=0\). Wise proposed a rank-\(1\) version of this conjecture, which was proved independently by Helfer and Wise [12] and Louder and Wilton [12]. **Theorem 6.7** (Helfer-Wise, Louder-Wilton).: _Let \(U\) be a subgroup of a free group \(F\), \(w\in F\) a non-proper power element and \(W=\langle w\rangle\). Then_ \[\sum_{x\in W\setminus F/U}d\left(xUx^{-1}\cap W\right)\leq\left\{\begin{array}{ll}d\left(U\right)&\text{if }U\leq\langle\!\langle W\rangle\!\rangle\\ \overline{d}\left(U\right)&\text{if }U\not\leq\langle\!\langle W\rangle\!\rangle\end{array}\right..\] In this section we will prove a generalization of this result. In order to formulate it, we need another interpretation for the sum \(\sum_{x\in W\setminus F/U}d(xUx^{-1}\cap W)\). We follow the approach developed in [1, 2]. Let \(W\) be an arbitrary subgroup of \(F\). Consider \(\mathbb{Q}[F/U]\) as a left \(\mathbb{Q}[W]\)-module. Then \[\mathbb{Q}[F/U]\cong\bigoplus_{x\in W\setminus F/U}\mathbb{Q}[W/(xUx^{-1}\cap W)].\] Observe that \(d(xUx^{-1}\cap W)=\dim_{\mathbb{Q}}\operatorname{Tor}_{1}^{\mathbb{Q}[W]}(\mathbb{Q},\mathbb{Q}[W/(xUx^{-1}\cap W)])\). Therefore, the sum that appears in Theorem 6.7 has the following interpretation: \[\sum_{x\in W\setminus F/U}d(xUx^{-1}\cap W)=\dim_{\mathbb{Q}}\operatorname{Tor}_{1}^{\mathbb{Q}[W]}(\mathbb{Q},\mathbb{Q}[F/U]).\] If \(F\) is a free group, all left ideals of \(\mathbb{Q}[F]\) are free left \(\mathbb{Q}[F]\)-modules of a unique rank. We denote by \(\operatorname{rk}(L)\) the rank of a free \(\mathbb{Q}[F]\)-module \(L\) and we also put \(\overline{\operatorname{rk}}\left(L\right)=\max\{\operatorname{rk}\left(L\right)-1,0\}\). Notice that \[\mathbb{Q}[F/U]\cong\mathbb{Q}[F]/\left({}^{F}I_{\mathbb{Q}[U]}\right)\text{ and }\operatorname{rk}\left({}^{F}I_{\mathbb{Q}[U]}\right)=d\left(U\right).\] Thus, the following result is a generalization of Theorem 6.7 and Theorem 1.8. **Theorem 6.8**.: _Let \(F\) be a free group, \(W\) a strictly reducible subgroup of \(F\), \(I\) the ideal of \(\mathbb{Q}[F]\) generated by \(\{w-1\colon w\in W\}\) and \(L\) a left ideal of \(\mathbb{Q}[F]\). Then_ \[\dim_{\mathbb{Q}}\operatorname{Tor}_{1}^{\mathbb{Q}[W]}(\mathbb{Q},\mathbb{Q}[F]/L)\leq\left\{\begin{array}{ll}\operatorname{rk}(L)&\text{if }L\leq I\\ \overline{\operatorname{rk}}(L)&\text{if }L\not\leq I\end{array}\right..\] Proof.: Let \(G=F/\langle\!\langle W\rangle\!\rangle\), \(\mathcal{D}=\mathcal{D}_{\mathbb{Q}[G]}\) and \(M=\mathbb{Q}[F]/L\). **Claim 6.9**.: _The induced left \(\mathbb{Q}[F]\)-module \(\mathbb{Q}[F]\otimes_{\mathbb{Q}[W]}M\) is isomorphic to the left module \(\mathbb{Q}[F/W]\otimes_{\mathbb{Q}}M\)._ Proof.: Let \(T\) be a left transversal of \(F\) with respect to \(W\). Define a \(\mathbb{Q}\)-linear map \(\tau:\mathbb{Q}[F]\otimes_{\mathbb{Q}[W]}M\to\mathbb{Q}[F/W]\otimes_{\mathbb{Q}}M\) by \[\tau:t\otimes m\mapsto tW\otimes tm\ (t\in T,m\in M).\] It is bijective because we can define the inverse map by \(\tau^{-1}(tW\otimes m)=t\otimes t^{-1}m\).
If \(f\in F\) and \(t\in T\), then there are \(t^{\prime}\in T\) and \(w\in W\) such that \(ft=t^{\prime}w\). Thus, we obtain \[\tau(f(t\otimes m))=\tau(ft\otimes m)=\tau(t^{\prime}w\otimes m)= \tau(t^{\prime}\otimes wm)=\] \[t^{\prime}W\otimes t^{\prime}wm=ftW\otimes ftm=f\left(tW\otimes tm \right)=f\tau(t\otimes m).\] Therefore, \(\tau\) is also a \(\mathbb{Q}[F]\)-homomorphism. Note that \[\dim_{\mathbb{Q}}\operatorname{Tor}_{1}^{\mathbb{Q}[W]}(\mathbb{Q },M)=\dim_{\mathcal{D}}\operatorname{Tor}_{1}^{\mathbb{Q}[W]}(\mathcal{D},M) \stackrel{{\text{Shapiro's lemma}}}{{=}}\\ \dim_{\mathcal{D}}\operatorname{Tor}_{1}^{\mathbb{Q}[F]}(\mathcal{D}, \mathbb{Q}[F]\otimes_{\mathbb{Q}[W]}M)\stackrel{{\text{Claim \ref{eq:def